Last week, Wimbledon announced that it is adopting the Threat Matrix system to make online spaces safer for its players. The AI system, developed by Signify Group, is adept at recognizing toxic content and curbing online trolls.
Can AI solve the problem of online abuse against athletes for good? We explored the question and looked at other prominent examples of AI being used at major sports events, including the FIFA World Cup and the Paris 2024 Olympics.
Key Takeaways
- The AI system Threat Matrix moderates toxic content to protect Wimbledon players from online abuse.
- Other major tennis organizations have adopted the technology to address significant levels of trolling.
- The AI-driven approach to tackling online abuse is expanding to upcoming events such as the Paris 2024 Olympics.
Hate Speech & Online Violence Against Athletes: Something Has to Be Done
Everyone remembers the racist comments that Marcus Rashford and Bukayo Saka had to endure after England’s loss in the Euros 2020 final. Far from being an isolated incident, hate speech affects many sports professionals.
Tom Curry was subjected to abuse after England’s semi-final match against South Africa in the 2023 Rugby World Cup, and World Athletics recently published a study revealing instances of online violence perpetrated against athletes competing at the World Athletics Championships Budapest 23.
Missing, losing, underperforming—no one denies these things can be profoundly frustrating, but we can surely agree that abuse is indefensible.
Something has to be done, and it will take more than a reproachful tweet from Gary Lineker.
Booing and racially abusing the fine young men that play for our country and have given us so much pleasure and joy over the last month is not being an @england fan. That goes for the pathetic fighting at the ground too. It’s a minority but it’s a loud one and it’s embarrassing.
— Gary Lineker (@GaryLineker) July 12, 2021
How Is AI Used in Sports to Protect Players?
The first time a major tennis tournament teamed up with a technology solution company to help tackle online abuse was during the 2023 French Open. Bodyguard.ai protected players and staff by crawling, flagging, and moderating millions of hateful messages.
Following this, the International Tennis Federation (ITF), Women’s Tennis Association (WTA), All England Lawn Tennis & Croquet Club (AELTC), and U.S. Tennis Association (USTA) collectively subscribed to Signify’s monitoring service, which was implemented at the beginning of 2024.
The WTA said the decision was made in response “to significant levels of social media abuse and other inappropriate online contact, which poses risks to preparation, performance and mental health.”
Threat Matrix recognizes 35 languages and monitors players’ public-facing social media for abusive and threatening content. It then produces reports and recommendations that are passed on to the relevant human authorities. This service is unique because it also allows players to have private messages scanned for abuse.
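Threat Matrix's internals are proprietary, but the workflow described above (scan public posts, flag likely abuse, and pass a report to human reviewers) maps onto a fairly standard moderation pipeline. The Python sketch below is purely illustrative: the Post structure, the flag_abusive and build_report functions, and the tiny keyword table standing in for a real multilingual classifier are all hypothetical and not Signify's actual code.

```python
from dataclasses import dataclass

# Toy stand-in for a multilingual toxicity model. The real classifier
# covers 35 languages and is proprietary; this keyword table exists only
# so the pipeline below runs end to end.
TOXIC_TERMS = {
    "en": {"idiot", "disgrace", "loser"},
    "es": {"inutil"},
}


@dataclass
class Post:
    platform: str
    author: str
    language: str
    text: str


def flag_abusive(posts: list[Post]) -> list[tuple[Post, set[str]]]:
    """Return posts that trip the toxicity check, with the matched terms."""
    flagged = []
    for post in posts:
        words = set(post.text.lower().split())
        hits = words & TOXIC_TERMS.get(post.language, set())
        if hits:
            flagged.append((post, hits))
    return flagged


def build_report(flagged: list[tuple[Post, set[str]]]) -> dict:
    """Summarize flagged posts into a report a human safeguarding team
    could review, mirroring the 'reports and recommendations' step."""
    return {
        "total_flagged": len(flagged),
        "items": [
            {"platform": post.platform, "author": post.author, "matched": sorted(hits)}
            for post, hits in flagged
        ],
    }


if __name__ == "__main__":
    sample = [
        Post("twitter", "@fan123", "en", "you are a disgrace to the sport"),
        Post("instagram", "@supporter99", "en", "great match, well played"),
    ]
    print(build_report(flag_abusive(sample)))
```

In a production system, a trained model would replace the keyword lookup, and the generated report would feed the escalation and account-reporting steps that Signify and the WTA describe.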
Jamie Baker, the tournament’s director, announced the use of the tech last week and said it would also be rolled out at the US Open.
As reported by The Daily Mail, Baker said, “We are not stepping in and becoming the police, but it’s important to try to help [players].”
An Abuse Epidemic in Tennis
One study indicates that Naomi Osaka was the most abused tennis player on Twitter in 2021, receiving a staggering 32,415 mentions deemed harmful. The same study found that Serena Williams received 18,118 abusive tweets and Novak Djokovic 15,793.
This level of abuse instills fear in players, who expect to receive hate regardless of how well they perform.
After beating the British No. 1, Katie Boulter, last week, Harriet Dart expressed her concerns about social media: “I’m sure today, if I open one of my apps, regardless if I won, I’d have a lot of hate as well,” the Guardian reported.
While both poor and strong performances have been shown to provoke high levels of anxiety that can spark online abuse, the fact that some fans bet on a player’s results compounds the problem.
Dan Venn, a National Tennis Coach, told Techopedia that abuse of players is often fuelled by gambling:
“I have seen vulgar comments and hateful private messages from faceless accounts taking out their frustration on players because they had money riding on the outcome of the match.”
Research from Loughborough University has confirmed not only the prevalence of online abuse directed at athletes but also its enduring detrimental effects on their mental, emotional, and physical health.
Venn also emphasized the emotional toll and the need for “better regulation to protect players at all professional levels.”
This arguably highlights the need for systems like Threat Matrix, which, as the WTA promises, “will support the identification of abusers, against whom all available measures will be taken.”
It’s Not Just About Tennis
Threat Matrix is also being deployed across the wider sporting landscape. Wayne Barnes, who refereed the 2023 Rugby World Cup final, received threats to his life and his family from 22-year-old Aaron Isaia. Signify’s monitoring identified Isaia, which led to his successful prosecution.
Jake Marsh, head of Signify Sport, described the outcome as “a great example of what actions, outcomes, and deterrents are possible in protecting participants from online abuse and threats in sports.”
An independent report, which used Threat Matrix to track over 400,000 social media posts during UEFA EURO 2020 and the CAF Africa Cup of Nations 2021, found that 50% of players received discriminatory abuse.
Following this, Threat Matrix was employed during the FIFA Women’s World Cup Australia and New Zealand 2023. The results of this initiative were published in the FIFA Social Media Protection Service Report, which showed that out of 5.1 million analyzed posts, 103,000 were flagged.
One in five players was targeted with discriminatory abuse, and 50% of the detected messages were homophobic, sexual, or sexist. The report also found that players at the FIFA Women’s World Cup 2023 were 29% more likely to be targeted with online abuse than those at the FIFA World Cup Qatar 2022.
Discussing the results, Georgia Relf, a sports account manager at Signify, highlighted the troubling correlation between growing interest in women’s sports and rising levels of abuse:
“Women’s sport is thriving. With stadium sell-outs, attendance records repeatedly being broken, and 46.7 million fans watching women’s sport in 2023 – the visibility of women’s sport has never been higher. However, increased visibility can lead to an increased susceptibility to online abuse.”
The Olympics
The Paris 2024 Olympics have also joined forces with Signify. Speaking at the Olympic AI Agenda launch, where Threat Matrix was presented, Olympic champion Alpine skier Lindsey Vonn said:
“I wish that had been a technology that was available to me because I think it would have saved me a lot of anxiety and emotional trauma.”
After discussing how she received online death threats during her career, Vonn concluded by saying: “If there’s a way to minimize that kind of hate speech, I think that’s wildly beneficial to the athletes.”
Kirsty Burrows, head of the Safe Sport Unit at the International Olympic Committee, said that “safe sports environments also have to mean safe digital environments.”
She added:
“Sports and social media are inextricably linked, and we see so much opportunity for engagement there, so for Paris, for example, we’re anticipating half a billion social media posts.”
However, given the likelihood of mass negative engagement, Burrows explained that using AI will be integral to fostering healthy online environments at the Olympic Games.
The Bottom Line
Online violence in this digital age is pervasive, but AI in sports is helping to protect athletes.
With providers like Signify and systems like Threat Matrix, sporting professionals will no longer have to avoid social media to escape online hate.
That sounds like game, set, match.
References
- Fifa study of Euros & Afcon finds half of all players abused online; Saka & Rashford most targeted – BBC Sport (Bbc.co)
- Tom Curry targeted with ‘disgusting’ abuse in South Africa race row (Telegraph.co)
- Online Abuse in Athletics AI Research Study: World Athletics Championships Budapest 23 (Assets.aws.worldathletics)
- Gary Lineker on X (X)
- Bodyguard.ai | AI Social Media Monitoring and Moderation Solution (Bodyguard)
- Players’ Social Media Accounts Protected From Online Abuse at Roland Garros (Tennis-infinity)
- Tennis governing bodies launch service to combat online abuse (Wtatennis)
- Wimbledon turns to AI to stop the trolls and potential stalkers as female stars such as Emma Raducanu and Harriet Dart face increasing online abuse (Dailymail.co)
- #MeToo, Sport, and Women: Foul, Own Goal, or Touchdown? Online Abuse of Women in Sport as a Contemporary Issue | SpringerLink (Link.springer)
- Wimbledon employs AI to protect players from online abuse | Wimbledon 2024 | The Guardian (Theguardian)
- Applying cognitive analytic theory to understand the abuse of athletes on Twitter (Tandfonline)
- Home – National Tennis Association (Nationaltennis.org)
- Online abuse aimed at elite athletes on the rise – new study | News and events | Loughborough University (Lboro.ac)
- Death threats, AI, prosecutions: how should we stop referee abuse? | Sport | The Guardian (Theguardian)
- Jake Marsh on LinkedIn: Brisbane troll fined for ‘vile’ online ref abuse during Rugby World Cup (Linkedin)
- Inside FIFA (Inside.fifa)
- Fifa Women’s World Cup Australia & New Zealand 2023™ | Social Media Protection Service / Full Tournament Analysis (Digitalhub.fifa)
- Protecting Women’s Sport from Online Abuse – Intl Women’s Day 2024 — Signify – Better Data (Signify)
- Olympic AI Agenda – YouTube (Youtube)