Mumbai: As more than 15,000 athletes went about their business in Paris across the 2024 Olympics and Paralympics, social media was abuzz through July and August with a sea of opinions, cheer and criticism. All through, an algorithm powered by artificial intelligence (AI) monitored something that is becoming an increasingly talked-about issue for elite sportspersons — online abuse.
On Thursday, World Athletics published the findings of a study that tracked 1,917 athletes with at least one active social media account on four platforms (X, Instagram, Facebook, TikTok) during the Paris Olympics. Of the 3,55,873 posts and comments analysed for abusive content, the AI algorithm flagged 34,040. After human review of those, 809 were verified as abusive.
That’s just from one sport in one Games (excluding the Paralympics). Scale that up to all the sports, events and participants in Paris this year and you get a sense of the volume of online abuse that creeps into social media around the Games. Then consider that elite athletes are subjected to it all year round.
Kirsty Burrows, the International Olympic Committee’s (IOC) Head of the Safe Sport Unit that brought in the AI-powered monitoring service for these Games, said they anticipated around half a billion social media posts during the Games.
“That’s just posts, not even including comments,” Burrows told HT. “The industry average for online violence is around 4%. That would mean 20 million of those could potentially be something which is abusive, either breaching the community guidelines or potentially criminal in nature.”
And more athletes are now starting to speak up about it. In August, after losing her first-round match at the US Open, top French tennis player Caroline Garcia, who had also exited in the opening round of the Paris Olympics, posted images of four abusive messages she had received among “hundreds” that threatened her family and labelled her a “clown”.
“Maybe you can think that it doesn’t hurt us,” Garcia wrote. “But it does. We are humans.”
Her post elicited reactions from a number of other top tennis stars, with the then world No.1 Iga Swiatek writing, “Thank you for this voice”.
Football star Jude Bellingham has frequently raised the issue of abuse in the reel and real space. In 2021 when the then teenaged Englishman played for Borussia Dortmund, he shared a screenshot of his Instagram that featured abusive comments about his mother and included monkey emojis. “Just another day on social media…” he wrote then.
A previous study, conducted by the same Signify Group that was roped in by the IOC for the Paris Games with its AI-powered Threat Matrix, examined posts across two big football events — the Euro final between England and Italy in 2021 and the AFCON final between Senegal and Egypt in 2022. The study found 55% of players competing in both these finals received “some form of discriminatory abuse”. Homophobic and racist comments were the largest forms of abuse, with Black players who missed penalties for England (they lost the final 3-2 on penalties) being heavily targeted. In the World Athletics findings for the Paris Games, 18% of the detected abuse was racist in nature.
World Athletics had published a similar study around the Tokyo Games in 2021, but the sample size of athletes in Paris was 12 times larger. It was part of the wider net that the IOC, recognising the menace and impact online abuse can have on Olympians, cast across the Paris Games compared to Tokyo.
“Previously it’s been used to cover around 800 to 2,000 people,” Burrows said. In Paris, the number shot up to around 17,000, covering athletes, coaches and officials.
How AI flags abuse
AI does the heavy lifting. Scanning the roughly half a billion social media posts around the Games, the software can detect abusive content across 35 languages. Applying a threat algorithm, it flags posts that appear violent or abusive in nature. Those posts are then passed on for human review, and if confirmed as abusive, necessary action is taken.
“Effectively the service provider has an expedited channel to the (social media) platforms, and we also have great relationships with the platforms for the removal of any flagged abusive or potentially criminal content,” Burrows said. “And then we move to the ground safeguarding, in supporting the people who are being targeted. Ideally, the process is so fast that usually the athlete won’t have the chance to see the abuse. That’s the aim, but, of course, you can’t always guarantee that.”
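The flag-then-review funnel described above — an automated threat score gating a human decision — can be sketched in a few lines. To be clear, the internals of Signify's Threat Matrix are proprietary; the keyword lexicon, scoring function, threshold and class names below are purely illustrative stand-ins, not the real system.

```python
# Illustrative sketch of a flag-then-review moderation funnel.
# The real Threat Matrix uses ML models across 35 languages;
# this toy version scores posts against a tiny lexicon instead.
from dataclasses import dataclass, field

ABUSIVE_TERMS = {"clown", "disgrace"}  # hypothetical toy lexicon

@dataclass
class Post:
    author: str
    text: str

def threat_score(post: Post) -> float:
    """Toy stand-in for the AI classifier: the fraction of words
    that match the lexicon."""
    words = post.text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in ABUSIVE_TERMS)
    return hits / len(words)

@dataclass
class Moderator:
    review_queue: list = field(default_factory=list)
    verified_abusive: list = field(default_factory=list)

    def triage(self, posts, threshold=0.2):
        # Step 1: the algorithm flags posts above the threat threshold.
        for p in posts:
            if threat_score(p) >= threshold:
                self.review_queue.append(p)

    def human_review(self, is_abusive):
        # Step 2: every flagged post gets a human decision before
        # any removal or escalation happens.
        for p in self.review_queue:
            if is_abusive(p):
                self.verified_abusive.append(p)
        self.review_queue.clear()

posts = [
    Post("fan1", "What a great race!"),
    Post("troll", "You are a clown, a clown!"),
]
mod = Moderator()
mod.triage(posts)                 # AI pass: one post flagged
mod.human_review(lambda p: True)  # reviewer confirms the flag
print(len(mod.verified_abusive))  # → 1
```

The two-stage design mirrors the numbers in the World Athletics study: a wide automated net (34,040 flags) narrowed sharply by human judgment (809 verified), so that false positives never trigger action on their own.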
The larger aim of the IOC and other international sporting bodies tapping into AI to detect and weed out online abuse directed at athletes is for athletes to feel safer in their world of social media.
“Many athletes are committed to growing the sport of athletics through their online presence, but they need to be able to do so in a safe environment,” World Athletics president Sebastian Coe said.