In the last few years, the phrase "fake news" has acquired a new meaning, lumping together every form of misinformation from government conspiracies and political propaganda to teenage internet pranks and misleading ads. In other words, although we live in a world where all kinds of information are almost instantly accessible, the line between truth and lies has never been murkier.
History is full of these "junk stories," some as old as ancient Egypt itself. Did you know that in the 13th century BC, the pharaoh Rameses the Great falsely portrayed the Battle of Kadesh as a stunning victory for his army, when it actually ended in a stalemate against the Hittites? If your answer is (almost certainly) "no," well, I didn't know either. I simply read it on Wikipedia after a search that took no more than a few seconds – so I just hope that's not a fake story as well.
Today the problem keeps compounding, as new junk news gets published day after day thanks to a bunch of rather evil technologies manufactured to mislead people, sometimes even into voting for some unscrupulous politician. But, hey, no need to be scared. The good news (pardon the pun) is that other technologies are being devised to tackle fake news and put it back in the place it really belongs – the trash bin. (Some think that the next iteration of the World Wide Web will help curb fake news. Learn more in Straight From the Tech Experts: What Will the Defining Feature of Web 3.0 Be?)
Rage Against the Machine (Learning)
One of the most common ways to spread fake news is through the use of large numbers of bots. The potential for automation is enormous when it comes to distributing content or enhancing its visibility on social media. Bots can be programmed to share, comment on or like a post countless times, inflating a piece of content's impressions and reaching large numbers of people by exploiting the very rules upon which most social media platforms are built.
Stopping bots in their tracks is, therefore, one of the most effective ways to curb the epidemic, but the question remains the same – how can you recognize a bot and tell the difference between humans and machines? For other humans it's pretty simple, but no IT department could ever reach the scale needed to keep up with an army of bots. Facebook has apparently developed its own anti-bot technology, but has never fully explained how it works. In an interview with The New York Times, Mark Zuckerberg simply revealed that the company "deployed some new AI tools to identify fake accounts and false news" coming from Macedonian fake-news-for-profit businesses.
Although there's still some speculation about the actual methods used to identify bots, it seems that most of these spammy accounts give themselves away through similarities in presentation and timing. Anti-bot software can recognize these patterns through data analysis and then flag the accounts for further investigation. Biometric authentication is also being used on Facebook and Twitter to determine which accounts are nothing but bots, and to stop the problem at its source.
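If you're curious what that kind of timing analysis might look like under the hood, here's a minimal sketch in Python. To be clear, this is purely illustrative: the data format and thresholds are my own assumptions, not Facebook's (or anyone else's) actual detection rules.

```python
from statistics import mean, pstdev

def flag_suspicious_account(post_timestamps, min_posts=20,
                            max_cv=0.1, max_posts_per_hour=60):
    """Flag an account whose posting rhythm looks machine-like.

    post_timestamps: Unix timestamps (seconds), one per post.
    Illustrative heuristics, not a production rule set:
      - near-constant gaps between posts (humans are irregular)
      - a sustained rate no human could plausibly keep up
    """
    if len(post_timestamps) < min_posts:
        return False  # not enough data to judge

    times = sorted(post_timestamps)
    intervals = [b - a for a, b in zip(times, times[1:])]

    avg = mean(intervals)
    if avg == 0:
        return True  # many posts in the same second: clearly automated

    # Coefficient of variation: simple bots post with clockwork regularity.
    cv = pstdev(intervals) / avg

    # Average posting rate over the observed window.
    posts_per_hour = 3600 / avg

    return cv < max_cv or posts_per_hour > max_posts_per_hour
```

Real systems combine many more signals (account age, follower graphs, content similarity across accounts), but the core idea is the same: find statistical regularities that humans don't produce.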
Tell Me Lies, Tell Me Sweet Little Lies
Another approach to rooting out this problem is to deal with the fake news itself – which is apparently even more complicated than just killing some bots. Despite various attempts, AI has so far failed on a number of levels because it simply can't understand human writing the way humans do. In theory, the machine is taught to detect an article's "stance," that is, the tone, sentiment and style used in writing the story, and to determine whether the information may be inaccurate or blatantly false. The AI performs a deep analysis of the content, URL and headline structure, as well as of the website that published the news, its web traffic, its Twitter account and its overall engagement on social media. So far, though, the results haven't been particularly encouraging, with a success rate of roughly 65 percent. Humans are still needed to refine the outcomes and validate the findings, so the whole system remains in its infancy. Some things, such as subtleties of tone, cultural context or even just some good old humor, are simply beyond the machines' ability to comprehend.
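To make the text-analysis side a bit more tangible, here's a bare-bones sketch of a "fake-like writing style" classifier built with scikit-learn. The tiny labeled dataset is hypothetical, and a real system would combine many more signals (URL, traffic, engagement) than the words alone – which is part of why published results hover around that 65 percent mark.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: article text paired with 1 = fake, 0 = legitimate.
# A real training set would hold thousands of fact-checked examples.
articles = [
    "SHOCKING: miracle cure THEY don't want you to know about!!!",
    "The central bank raised interest rates by 0.25 points on Tuesday.",
    "You won't BELIEVE what this politician secretly did...",
    "Researchers published a peer-reviewed study on crop yields.",
]
labels = [1, 0, 1, 0]

# TF-IDF captures crude style and tone cues (hype words, clickbait phrasing);
# lowercase=False preserves capitalization, since SHOUTING is itself a signal.
# Logistic regression then scores how "fake-like" a new text reads.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=False),
    LogisticRegression(),
)
model.fit(articles, labels)

print(model.predict_proba(
    ["BREAKING: secret plot EXPOSED, share before it's deleted!"]
))
```

A model like this only sees surface patterns in the words, which is exactly why sarcasm, cultural context and humor sail right past it.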
Videos, however, are on a completely different level. The most cunning machine learning technologies, such as generative adversarial networks (GANs), can create astoundingly realistic videos of people like Barack Obama saying plenty of things he never said. With the right technology, it has never been easier to create a fake image. As Hany Farid, a professor of computer science at the University of California who fights against these "deepfakes," explained, "the technology is democratizing." But, at least this time, other machines can detect the apparently invisible modifications applied to a human face and flag the fake videos. These AIs can scour footage for details the human eye may never catch, such as minuscule light changes, a lack of blinking, or color shifts in the face that correspond to the heartbeat. The scientists working on these projects, however, need to keep their best strategies secret, for obvious reasons.
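As an example of the blinking cue, the sketch below estimates a video's blink rate from per-frame eye landmarks using the "eye aspect ratio," a standard measure in facial landmark analysis. It assumes the landmarks have already been extracted upstream by a face-tracking library, and the thresholds are illustrative rather than forensic-grade.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six landmark points (x, y), ordered as in the
    common 68-point facial-landmark scheme. The ratio collapses toward
    zero when the eyelid closes."""
    eye = np.asarray(eye, dtype=float)
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = 2.0 * np.linalg.norm(eye[0] - eye[3])
    return vertical / horizontal

def blink_rate(per_frame_eyes, fps, closed_thresh=0.2):
    """Blinks per minute, given one six-point eye-landmark set per frame
    (the landmark extraction is assumed to happen upstream)."""
    blinks, closed = 0, False
    for eye in per_frame_eyes:
        if eye_aspect_ratio(eye) < closed_thresh:
            closed = True
        elif closed:  # eye reopened: count one completed blink
            blinks += 1
            closed = False
    minutes = len(per_frame_eyes) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Humans blink roughly 15-20 times per minute; early deepfakes often
# blinked far less, so an abnormally low rate is one (weak) red flag.
```

No single cue is conclusive on its own; detectors stack many of these signals, which is also why researchers keep the exact recipe to themselves.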
Blocking the Chain of Lies with Blockchain
Blockchain keeps providing solutions to almost every modern problem. Among many other things (bitcoin included), this incredibly versatile technology can also be used to staunch the gaping wound that fake news has inflicted on our society. Blockchain can, in fact, provide the transparency and accountability that the world of news so badly needs. First, it can help track the origin of every article and piece of content, as well as every subsequent step it takes, such as who shared it and where.
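Here's the provenance idea boiled down to a toy Python example: a hash-linked chain of records in which every event about an article points back to the one before it, so quietly rewriting history breaks the chain. A real deployment would run on a distributed ledger; this shows only the core data structure, and the names are hypothetical.

```python
import hashlib, json, time

def record_hash(record):
    """SHA-256 over the record body (everything except its own hash)."""
    body = {k: v for k, v in record.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_record(prev_hash, event, article_id, actor):
    """One provenance entry, chained to its predecessor via prev_hash."""
    record = {
        "prev_hash": prev_hash,
        "event": event,            # e.g. "published", "shared"
        "article_id": article_id,
        "actor": actor,
        "timestamp": time.time(),
    }
    record["hash"] = record_hash(record)
    return record

def verify_chain(chain):
    """Recompute every hash; editing any past record breaks the links."""
    for i, rec in enumerate(chain):
        if rec["hash"] != record_hash(rec):
            return False           # the record itself was tampered with
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False           # the link to its predecessor is broken
    return True

genesis = make_record("0" * 64, "published", "article-42", "newsroom")
chain = [genesis,
         make_record(genesis["hash"], "shared", "article-42", "user-1138")]
print(verify_chain(chain))         # True -- until someone rewrites history
```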
Second, it can help revolutionize a monetization system that is currently based more on generating views than on delivering quality content. The people who make the most money are those able to spread their content the furthest, which is much easier when an article is packed with alarmist claims, filled with conspiracy theories and enhanced with scaremongering tactics. Blockchain could help media platforms create self-sustaining economies where cryptocoins are used to motivate authors to generate reliable, fact-checked information. Internal data encryption, identity verification and full traceability of every piece, to check for author bias, are just the icing on the cake. (For more on blockchain, see How Blockchain Is Changing the Way You and I Do Business.)
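And here's that incentive loop as a deliberately naive sketch: an article earns its author tokens only after several independent fact-checkers vouch for it. Every name and rule here is hypothetical; a real design would live in a smart contract, with staking and protections against fake verifiers.

```python
class RewardLedger:
    """Toy reward ledger: authors earn tokens only for pieces that pass
    independent fact-checks. Hypothetical rules, not any real platform's."""

    def __init__(self, reward=10, checks_required=3):
        self.balances = {}
        self.checks = {}  # article_id -> set of verifiers who vouched
        self.reward = reward
        self.checks_required = checks_required

    def record_fact_check(self, article_id, author, verifier, accurate):
        """A verifier vouches (or not) for an article's accuracy."""
        if not accurate:
            self.checks.pop(article_id, None)  # one failed check resets it
            return
        voters = self.checks.setdefault(article_id, set())
        voters.add(verifier)
        if len(voters) == self.checks_required:
            # Enough independent confirmations: pay the author.
            self.balances[author] = self.balances.get(author, 0) + self.reward

ledger = RewardLedger()
for verifier in ("alice", "bob", "carol"):
    ledger.record_fact_check("article-42", "newsroom", verifier, accurate=True)
print(ledger.balances)  # {'newsroom': 10}
```

The design choice worth noticing is that the payout depends on verification, not views, which inverts the incentive that makes clickbait profitable today.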
Don't Feed the Troll
Seriously, don't. He'll only grow huge if you do. The bottom line is that, although machines and technologies can help us curb the growing number of false stories we're exposed to, it's ultimately our responsibility to read carefully through everything we lay our eyes on. After all, people have been telling other people lies since the dawn of civilization, but (as far as I know) the ancient Egyptians didn't have AI back then to help them separate truth from lies.
We all have the mental faculties to spend a few extra seconds checking the sources of what we read. With or without technology to assist us, the next time you end up believing something that is an outright lie, don't forget that the fault is partly yours as well.