Technology as a weapon in the battle against fake news
Algorithms can help us identify false stories, but letting technology be the judge of what is true and false is not without its problems, says Leon Derczynski, a researcher at ITU specializing in automatic detection of fake news and misinformation.
"We have had fake news for about as long as we have had language, and there are all kinds of social reasons why it emerges. It could be for attention, or out of a desire to sympathize with someone who has experienced something shocking. And then there are the more malicious reasons, like wanting to manipulate how people think,” says Leon Derczynski, who has researched technology for veracity on the web for the past five years.
In the big picture, what has changed is that today, anyone can start their own media outlet and spread false stories over the internet.
"It has become easy to spread false or unconfirmed claims in a way that appears authoritative. Today it is not always easy to tell fake and official news sources apart, and that is a very different situation from just twenty years ago, when few people were able to create a website,” he says.
From niche research to sweet spot
Leon Derczynski began his research on automatic detection of fake news in 2014 in connection with his postdoc at the University of Sheffield. His research was part of an EU-supported project which aimed to understand and create tools for tracking rumors and fake news online.
“As with so many other research projects, it seemed relatively speculative and specialized when we started. But when the discussion on fake news really took off in connection with the US presidential election and the Brexit vote in 2016, at a point when we had developed some relevant new technology, we suddenly found ourselves in a sweet spot,” he says.
In the project, the researchers looked at, among other things, controversies and rumors.
"Here we are talking about stories where we don’t know the truth: for example, that aluminum causes Alzheimer's. No one knew for sure whether that was true. We identified these types of debates and used Natural Language Processing and Machine Learning to pick up signals about whether a story is true or unsubstantiated,” says Leon Derczynski.
Debates reveal truth value
The researchers in Sheffield focused not so much on the claims themselves as on the reactions they prompted on social media.
"The idea is that you have a basic claim, for example, ‘Hillary Clinton is possessed by a demon’. This claim will be met by reactions, like a tweet that says ‘Hillary Clinton's demon is taking over her face – you can see it in this photo.’ The tweet is a source that supports the claim and creates discussion. Maybe people support the idea, maybe they question or deny it. The balance of how this discussion changes over time actually gives a good signal of how true the original claim is,” he explains.
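The shifting balance of reactions can be turned into a simple numeric signal. The sketch below assumes stance labels (support / deny / query / comment) have already been predicted for each reaction by an upstream classifier; the `stance_signal` function and the toy data are purely illustrative, not the project's actual code:

```python
from collections import Counter

# Hypothetical stance labels for reactions to a claim, in time order.
# A real system would predict these with an NLP stance classifier.
reactions = ["support", "query", "deny", "support", "deny", "deny"]

def stance_signal(stances, window=3):
    """Track how the support/deny balance shifts over time.

    Returns (support - deny) counts over a sliding window;
    a drift toward 'deny' is one signal that a claim may be false.
    """
    signal = []
    for i in range(len(stances) - window + 1):
        counts = Counter(stances[i:i + window])
        signal.append(counts["support"] - counts["deny"])
    return signal

print(stance_signal(reactions))  # [0, 0, -1, -1] — drifting toward denial
```

Here the discussion starts balanced and drifts negative as denials accumulate, which is the kind of temporal signal the quote describes.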
The BBC called the method a ‘lie detector for the web’, but this is misleading as a lie is something completely different, Leon Derczynski emphasizes.
"People use different styles when they are lying. And usually people who spread fake news don’t believe that they are lying. If you really believe that vaccines cause big noses, for example, you can say that and no lie detector will get you,” he says.
There are also a number of other methods for automatic tracking of fake news. For instance, algorithms can compare information from a text with verified sources.
"If a text claims that 9 million people live in Aarhus, an algorithm can look up this claim, for example on Wikipedia, and find out that it is false,” Leon Derczynski says.
However, this method does not help when a claim is unfalsifiable. This applies, for example, to the claim that Hillary Clinton is possessed by a demon. However absurd, it is difficult to disprove.
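A minimal sketch of this kind of lookup, using a hypothetical in-memory table in place of a real source like Wikipedia (the function name, regex pattern, and tolerance are assumptions for illustration):

```python
import re

# Stand-in for a verified knowledge source; a real system might
# query Wikipedia or Wikidata instead of this hypothetical table.
KNOWN_POPULATIONS = {"Aarhus": 350_000, "Copenhagen": 1_300_000}

def check_population_claim(text):
    """Return True/False for claims of the form 'N million people live
    in CITY', or None when the claim cannot be checked."""
    m = re.search(r"(\d+(?:\.\d+)?) million people live in (\w+)", text)
    if not m:
        return None  # no checkable claim found (e.g. unfalsifiable ones)
    claimed = float(m.group(1)) * 1_000_000
    actual = KNOWN_POPULATIONS.get(m.group(2))
    if actual is None:
        return None  # entity not in the knowledge source
    # Allow some tolerance; population figures are never exact.
    return abs(claimed - actual) / actual < 0.2

print(check_population_claim("9 million people live in Aarhus"))         # False
print(check_population_claim("Hillary Clinton is possessed by a demon"))  # None
```

The second call returning `None` illustrates the limitation in the paragraph above: an unfalsifiable claim simply has nothing to look up.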
Technology can limit misinformation
Leon Derczynski believes that technology can become a useful weapon against fake news and misinformation. For instance, it can aid journalists in checking their sources and thereby help limit the spread of false information.
According to Leon Derczynski, the technology also has potential in crisis situations.
"In connection with crises or emergencies, such as shootings or riots, we always see a lot of false information on social media. This means there is a lot of noise disturbing those trying to help. Here, the technology can filter rumors and false information, so that organizations can better prioritize aid,” he says.
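As a rough illustration of such filtering, posts could be routed by a veracity score produced by an upstream classifier; the scores, threshold, and `triage` function here are purely hypothetical:

```python
# Hypothetical crisis-time posts, each with a veracity score from an
# upstream classifier (assumed, not part of the described system).
posts = [
    {"text": "Bridge on Main St has collapsed", "veracity": 0.9},
    {"text": "Shooter seen at three locations at once", "veracity": 0.2},
    {"text": "Shelter open at the town hall", "veracity": 0.8},
]

def triage(posts, threshold=0.5):
    """Split posts into likely-reliable and likely-rumor queues so
    responders can prioritize the reliable ones."""
    reliable = [p for p in posts if p["veracity"] >= threshold]
    rumors = [p for p in posts if p["veracity"] < threshold]
    return reliable, rumors

reliable, rumors = triage(posts)
print(len(reliable), len(rumors))  # 2 1
```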
Fake news is hard to kill
Despite progress in the automatic detection of misinformation on the web, the work is far from done, says Leon Derczynski.
Among other things, methods must become more accurate, and data is lacking for languages other than English.
And even if the researchers solve these challenges, we are still left with a basic dilemma:
"We know that even if we tell people that a story is false, if they like the story, they will continue to believe it. Typically because it fits in with their preconceptions. So what should we do? Should a platform like Facebook start censoring all information that looks fake?” he asks rhetorically.
“It may not be particularly desirable to have private companies become arbiters of truth. That is the underlying issue, and it’s very hard to solve,” he says.
Leon Derczynski, Assistant Professor, email email@example.com
Vibeke Arildsen, Press Officer, phone 2555 0447, email firstname.lastname@example.org