Humans, not bots, are to blame for spreading false news on Twitter

The largest ever study into misinformation on Twitter has found that false news spreads further and faster than the truth, and humans are to blame

Bots have received a lot of flak for their role in spreading false news on Twitter, but it turns out that these criticisms might not be entirely justified. Humans, not bots, are to blame for the ease with which false news has managed to spread on Twitter, according to a study published in the academic journal Science.

“False news spreads further, faster and deeper than the truth in every category of news,” says Sinan Aral at Massachusetts Institute of Technology, who supervised the study. Aral and his co-authors Soroush Vosoughi and Deb Roy at MIT Media Lab concluded that false news stories were 70 per cent more likely to be retweeted than true ones, even though accounts tweeting false news tended to have fewer followers and be less active.

The study is the largest look at the spread of false information on Twitter so far, covering some three million accounts and 4.5 million tweets between 2006 and 2017. Around 15 per cent of the accounts in Aral’s dataset were bots, but when he took them out of the mix, he found that false news was still more likely to be shared than the truth.

“We’ve got a lot of media reports and testimony in front of both houses of Congress talking about how important bots are in the spread of false news,” Aral says, but humans are more often the ones sharing and spreading the lies. Although bots do contribute to the spread of false news, they also have exactly the same impact on truthful news. Humans, on the other hand, seem to have a predilection for sharing falsehoods rather than facts.


Aral thinks that the relative success of false news stories on Twitter might have something to do with people’s desire to say, and share, things that they find surprising or different. “When people share novel information their status goes up,” says Aral. And false news, it turns out, tends to be a lot more novel than the truth.

To sort out false news from the truth, Aral started with around 3,000 news stories that had been classified as fact or fiction by six fact-checking organisations, including Snopes, PolitiFact and FactCheck. He then looked at Twitter accounts that had mentioned or shared these stories, and compared how often and how widely false stories were shared relative to truthful ones.

While individual tweets about truthful stories only rarely reached an audience of more than 1,000 people, the top one per cent of false news stories routinely reached between 1,000 and 100,000 people. Truthful tweets also spread much more slowly than false ones, taking on average six times as long to reach an audience of 1,500 people.

Of all the different types of news Aral and his colleagues studied, political news was more likely to go viral than any other kind. Conspiracy theories that Barack Obama wasn’t born in the United States and that Hillary Clinton was seriously ill during the 2016 presidential election were particularly prominent in the dataset, with the number of false tweets spiking during election years.

While this is the largest study of its kind on Twitter, it’s much harder to interrogate Facebook data to work out if false news spreads the same way on that platform. And despite the evidence of the success of false tweets on Twitter, it’s even harder to know what impact false tweets have had on elections, or on politics more generally. “I have not seen conclusive evidence that social media is causing political polarisation,” Aral says. “But I also don’t believe that this is a nothing issue. I do believe that this is a very serious problem.”

This article was originally published by WIRED UK