Twitter and Facebook have become 'vessels of propaganda and manipulation'

A study from the Oxford Internet Institute warns that social networks have to do more to stymie the tide of fake news, which damages our democracies

Whether you live in a democracy or under an authoritarian regime, the social media content you are exposed to is being funnelled through a filter of propaganda that is expertly crafted to manipulate your opinions.


This is not a totally new revelation. Fake news has been part of our cultural lexicon for a while now. In the US, political commentators and politicians feared that the sheer velocity of false stories circulating online had heavily impacted the outcome of the 2016 presidential election. The apparent beneficiary of that impact, president Donald Trump, is now fond of dismissing any and all negative coverage as part of the phenomenon. The lines are so blurred that it’s hard to see a roadmap back towards transparency and oversight.

Two researchers from the University of Oxford, however, are attempting to break ground on the issue with the publication of a new study looking at computational propaganda – the combination of human curation and algorithm-driven automation used to plan and deliver the mass distribution of false information on social media. And the scale of the problem is extraordinary.


“The report is basically showing the militarisation of the internet in many ways,” Carl Miller, research director of the Centre for the Analysis of Social Media (CASM) at Demos told WIRED. “It’s finishing the narrative from the late 90s and noughties where we saw the growth of forums that went under the radar for quite a long time for mobilising political discussion. After the Arab Spring, it became clear how important social media would be for the politics of the future. Now a lot of states, as well as private sector actors, have moved in and worked out how to manipulate stuff online.”

Just a few top-line figures help quantify the problem – and show how steep the road back to a transparent democracy is in some parts of the world. In Russia, for instance, 45 per cent of Twitter activity is managed by “highly automated accounts”. Tactics first deployed in Russia to drown out the voice of the opposition are now mirrored in US elections, where candidate support is artificially exaggerated.

In recent years Twitter has gone to great pains to publicise its spam-fighting technology. In its 2017 Transparency Report, the social network revealed that 74 per cent of the 376,890 accounts suspended for posting terrorist content were detected by those tools. How, then, such a vast amount of Twitter traffic in Russia can be pushed through automated accounts remains unclear.

“Bots are unbelievably easy to make, I made one in a few hours the other day,” continues Miller. “But not all bots are bad, it’s important to say. Some bots are going to be an important part of the way we exist in the future. There is a booming industry and there are all kinds of ways bots are going to help us.”

How bots are being used for nefarious ends, however, is getting more sophisticated. The report refers to sleeper bots that emerge to send out a few tweets, possibly a tactic to evade Twitter’s detection systems. “At a Nato summit on disinformation online in Prague a month ago, quite a lot of activists on the front line – in Russia, Central and Eastern Europe and the Baltics – used accounts that weren’t purely bots but were partly powered by humans.”

The researchers looked at content manipulation in Brazil, Canada, China, Germany, Poland, Taiwan, Russia, Ukraine, and the United States, tracking bot activity and how different tools and tactics were deployed depending on the desired outcome. In total, they analysed tens of millions of posts and interviewed 65 experts, focusing on major events from elections to national security incidents. They found, unsurprisingly, that social media is the predominant force shaping the political opinions of young people, referring to Facebook as a “monopoly platform for public life”. They found that authoritarian regimes use it for social control, while in democracies it is used for “computational propaganda either through broad efforts at opinion manipulation or targeted experiments on particular segments of the public”. So, again, social control. The tactics may appear different, but the results are similar.

The paper also looks at how countries have tried to counter the impact. In Ukraine, where citizens have been subjected to some of the “most globally advanced” tactics, responses have largely come from private organisations, and there has been no unified government crackdown. Germany is the only country, the researchers believe, to have been proactive enough, even going so far as to consider new laws that would see social networks fined for non-compliance – a measure the UK is threatening to adopt to deal with the proliferation of hate speech online. But this could come with its own problems.

“Governments want to task tech giants more and more to police online spaces, but we don’t want tech giants to become even more powerful than they are,” says Miller. “On the other hand we are worried about what is fake news and what is real news, but we haven’t tolerated that amount of government intervention before. The government stays out of that.”

“Twitter has a direct interest in protecting the user experience and that definitely means shutting down gobby bots.” For Twitter, though, Miller points out that any acknowledgement of the scale of the problem can be damaging. “When they released an estimate that they thought 10 per cent of Twitter users were bots, its share price toppled. We have got to do something about that market disincentive. They’ve got to find a way of being transparent without the share price collapsing.”

The coauthors of the paper, Samuel Woolley and Philip Howard of the Oxford Internet Institute, have warned that, whatever the means, social networks must do more. With a series of important global events upcoming, including elections in Germany, Egypt, Brazil and Mexico, the pair emphasise how important it will be to promote legitimate news from reputable sources.

“Ultimately, designing for democracy, in systematic ways, will help restore trust in social media systems. Computational propaganda is now one of the most powerful tools against democracy. Social media firms may not be creating this nasty content, but they are the platform for it. They need to significantly redesign themselves if democracy is going to survive social media.”

WIRED has contacted Twitter and Facebook for comment on the report and will update this story when we hear back.

This article was originally published by WIRED UK