Artificial Intelligence (AI) and fake news seem inescapably linked. On one hand, critics of the newest technologies claim that AI and automation have been instrumental in unleashing an apocalypse of blatantly false stories upon the helpless public. On the other hand, some of the best scientific minds on the planet, in their relentless quest for truth, are already developing new AI-powered solutions that can detect deceitful stories.
Will they be up to the challenge?
Years after the exhausting fake news battles fueled by national and international politics, a new wave of massive manipulation of truth and facts emerged during the COVID-19 pandemic in 2020 and 2021. Many of the biggest players of the online world, such as Facebook and Google, are on the frontline of the battle against disinformation campaigns. Some time ago, they announced they were going to implement potent machine learning software to discard misleading material on their platforms.
One of the basic reasons why fake news quickly turned into an epidemic is that it is presented in a way that is more appealing or engaging to readers and viewers. Some AI systems are built on this assumption, and their machine learning algorithms have already been trained for years in the fight against spam and phishing emails.
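To see what a spam-filter-style approach might look like, here is a minimal sketch of a clickbait headline classifier, assuming scikit-learn is available. The tiny training set and its labels are invented purely for illustration; a real system would need thousands of labeled examples.

```python
# A minimal sketch of a spam-filter-style text classifier.
# The toy training data below is invented for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on vaccine efficacy",
    "Local council approves new budget after public hearing",
    "SHOCKING miracle cure THEY don't want you to know about!!!",
    "You won't BELIEVE what this celebrity said about the election",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = suspicious (toy labels)

# TF-IDF turns each headline into a weighted bag-of-words vector,
# then logistic regression learns which words signal clickbait.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Expected to print [1] for this toy model ("miracle" is a learned cue).
print(model.predict(["Miracle weight-loss trick doctors HATE!!!"]))
```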
This method was tested back in 2017 by a collective of experts known as the Fake News Challenge, who volunteered in the crusade against fake news. Their AI operated through stance detection: estimating the relative perspective (or stance) of an article's body text compared to its headline. Thanks to its text-analysis capabilities, the AI can evaluate the likelihood that a message was written by a real human rather than a spambot by comparing the actual content with the headline. But that was only the beginning.
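A crude approximation of this idea, assuming nothing beyond the Python standard library, is to measure how much vocabulary the headline shares with the body. The actual Fake News Challenge systems used far richer learned features; the overlap threshold below is an invented placeholder.

```python
# A minimal sketch of headline/body stance checking via word overlap.
# The 0.3 threshold is an arbitrary placeholder, not a value from
# the Fake News Challenge itself.
import re

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def headline_body_overlap(headline: str, body: str) -> float:
    """Fraction of headline words that also appear in the body."""
    head, body_words = tokenize(headline), tokenize(body)
    return len(head & body_words) / len(head) if head else 0.0

headline = "City reservoir levels fall to record low"
body = ("Officials reported that water levels in the city reservoir "
        "have fallen to a record low after months of drought.")

score = headline_body_overlap(headline, body)
print("related" if score > 0.3 else "unrelated", round(score, 2))
```

A headline whose words barely appear in the article body is a hint that the headline misrepresents the content, which is exactly the signal stance detection tries to formalize.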
On May 18, 2021, Google explained how the application of natural language processing and search models could pave the way for intelligent algorithms that really understand what people say. Its newest AI technologies include LaMDA (Language Model for Dialogue Applications) and MUM (Multitask Unified Model), an AI model that can understand complex human questions by cross-referencing text with images. The same advanced technologies used to understand and analyze linguistic cues could also be used to discriminate between truth and fiction and to detect the complex linguistic patterns people use to write lies and hoaxes.
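LaMDA and MUM are not publicly available, but as a rough sketch of how a large pretrained language model could be pressed into this kind of service, one could run a claim through an open natural-language-inference model via the Hugging Face transformers library. The candidate labels here are assumptions chosen for illustration, and this is a stand-in, not Google's method.

```python
# A sketch of zero-shot claim screening with an open NLI model.
# This stands in for proprietary systems like LaMDA/MUM, which are
# not publicly accessible; the candidate labels are invented.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

claim = "Drinking bleach cures viral infections."
result = classifier(claim,
                    candidate_labels=["reliable health information",
                                      "health misinformation"])

# The pipeline returns the candidate labels ranked by entailment score.
print(result["labels"][0], round(result["scores"][0], 2))
```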
Another method involves an automated comparison of all similar news items posted across multiple media outlets, to check how much the reported facts differ. The credibility of the news outlet itself could be assessed by analyzing parameters such as the reliability of its sources, its writing and correction policies, and its ethics standards.
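One way to sketch that cross-outlet comparison, again assuming scikit-learn and using invented snippets in place of real coverage, is to vectorize each outlet's version of a story and flag the account that diverges most from the group consensus.

```python
# A minimal sketch of cross-outlet comparison: vectorize each outlet's
# version of a story with TF-IDF and flag the most divergent account.
# The snippets are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

coverage = {
    "outlet_a": "The senate passed the climate bill by a narrow margin.",
    "outlet_b": "Lawmakers narrowly approved the new climate legislation.",
    "outlet_c": "Secret cabal forces through bill to control the weather.",
}

texts = list(coverage.values())
matrix = TfidfVectorizer().fit_transform(texts)
sims = cosine_similarity(matrix)

# Average each article's similarity to all the others; the lowest score
# marks the version of the story that strays furthest from the rest.
for name, row in zip(coverage, sims):
    avg = (row.sum() - 1.0) / (len(texts) - 1)  # exclude self-similarity
    print(f"{name}: {avg:.2f}")
```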
Ideally, if a specific website is spreading fake news, it could be flagged as an unreliable source by groups in charge of checking the integrity of news sites, such as The Trust Project, and excluded from news feeds. Google News will probably use this method, since it has been announced that it will draw content from some yet-to-be-defined "trusted news sources." This way, people will be steered away from extreme content – like what happened on YouTube with flat-Earthers – and directed toward properly defined "authoritative sources."
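In its simplest form, a feed filter of that kind reduces to checking each item's domain against an allowlist. The sketch below uses only the standard library; the trusted domains are placeholders, not an actual Trust Project certification list.

```python
# A minimal sketch of allowlist-based feed filtering using only the
# standard library. The trusted domains are invented placeholders.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-news.org", "example-wire.com"}

def is_trusted(url: str) -> bool:
    """Return True if the URL's hostname is on the allowlist."""
    host = urlparse(url).hostname or ""
    return host.removeprefix("www.") in TRUSTED_DOMAINS

feed = [
    "https://www.example-news.org/politics/story-123",
    "https://totally-real-truth.net/shocking-expose",
]

for url in feed:
    print(("keep" if is_trusted(url) else "drop"), url)
```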
JournalismAI is another global project, led by the London School of Economics, that aims to improve the dialogue between journalists and news organizations to leverage the full potential of AI. AI in journalism could be used to mitigate the inequalities suffered by journalist communities in disadvantaged areas and improve the overall quality of information in underserved communities.
Lastly, other, simpler algorithms could be used to analyze a text and scour it for blatant grammar, punctuation, and spelling errors; spot phony or fabricated pictures; and cross-check the deconstructed semantic components of an article against reputable sources.
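A toy version of such a surface-level check, assuming only the standard library, might count suspicious punctuation, all-caps shouting, and words missing from a known-word list. The tiny dictionary and weights here are invented for illustration; a real checker would use a full spelling dictionary and tuned scoring.

```python
# A toy surface-level quality check: count suspicious punctuation,
# all-caps shouting, and words missing from a known-word list.
# The tiny dictionary and equal weights are invented for illustration.
import re

KNOWN_WORDS = {"the", "miracle", "cure", "doctors", "hate", "this",
               "government", "hides", "truth", "about", "you"}

def suspicion_score(text: str) -> int:
    words = re.findall(r"[a-zA-Z']+", text)
    score = 0
    score += len(re.findall(r"[!?]{2,}", text))                     # !!! or ???
    score += sum(1 for w in words if w.isupper() and len(w) > 2)    # SHOUTING
    score += sum(1 for w in words if w.lower() not in KNOWN_WORDS)  # unknown words
    return score

print(suspicion_score("Doctors HATE this miracle cure!!!"))         # higher score
print(suspicion_score("The government hides the truth about you."))  # 0
```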