The results are in. This week Facebook announced it has completed its second investigation into how Russian disinformation may have influenced British democracy and the Brexit vote. As with the company's first investigation, it found no evidence that Russian propagandists had purchased advertising to target British Facebook users.
But both the company and the parliamentary inquiry into Russian meddling are missing the point. The real abuse of Facebook in the UK – and around the world – isn't being perpetrated by shadowy Russian advertisers; it's happening in broad daylight with the full knowledge of the platform.
Advertising is only one strand of propaganda. While Facebook may have discovered Russian-purchased ads around the US presidential election, real, effective disinformation campaigns are built on believable characters. And this isn't limited to Russia's Internet Research Agency.
"This is the wrong question to be asking," says Yin Yin Lu a researcher at the Oxford Internet Institute who specialises in online propaganda. "Time-wise, it's infinitely easier for them to look at advertising than to look at real accounts. The focus has been too much on advertising."
Real accounts – whether on Twitter, Facebook, or YouTube – can post organic content to news feeds and profiles that spreads without anyone having to spend a penny. These aren't bots, but accounts crafted to appear as genuine users, each with an agenda to push. One account claiming victims of the Florida shooting were "crisis actors" has had Facebook posts shared more than 130,000 times.
Deliberately sharing controversial viewpoints isn't necessarily problematic and often doesn't fall foul of community guidelines. It can prompt debate and encourage freedom of speech. But, with social media companies effectively acting as publishers, these views can be widely shared and shift opinions.
This is problematic when posts fall into the realm of propaganda – whether they're from China's content factory or Breitbart.
This poses a problem for researchers trying to scrutinise how viral social media posts spread and influence opinions. It's particularly difficult to analyse Facebook. "We have a lot of transparency around what goes on on Twitter as a public platform, we don’t have that same information for Facebook," says Alex Krasodomski-Jones from Demos, a think tank that specialises in analysis of social media.
"It's impossible," adds Lu. "The [Facebook] API is strictly public accounts, public groups, public pages. The whole value of Facebook is not the public information. We're all hindered by what the social media companies are telling us."
You don’t need to uncover a Russian bot operation to know that online platforms can easily spread extremist views and radicalise individuals. Darren Osborne, the man who deliberately drove a van into a crowd of Muslims outside a London mosque in June 2017, killing one man and injuring nine others, had researched far-right groups online, receiving a Twitter direct message from Britain First’s deputy leader Jayda Fransen in the weeks before his attack.
Earlier this week, the outgoing assistant commissioner of the Metropolitan Police, Mark Rowley, said that he had “no doubt” that exposure to far-right material posted online helped persuade Osborne to target Muslims. His radicalisation seems to have been sparked at least in part by a BBC documentary on grooming gangs in Rochdale that Osborne watched in the weeks before the attack.
As far as we know, nothing that Osborne read online was posted by a bot, or by a foreign state seeking to destabilise British democracy. But as soon as he set himself on the path to radicalisation, it was easy enough for him to find material that reinforced his skewed and hateful world view. Places like the Britain First Facebook page have a long record of sharing videos and posts that demonise Islam and perpetuate the narrative that Europe is being threatened by anyone who happens to be Muslim, no matter where they come from. WIRED’s recent investigation into a clutch of far-right Facebook groups with a potential reach of millions demonstrates that you don’t have to look hard to find pages that exist to radicalise others online, and do so with impunity.
When Russian influence has been at its most successful, it has simply amplified beliefs that are already held. It's a strategy not all that different from that of other online influencers. During the aftermath of the 2017 Westminster Bridge terror attack in London, a photograph emerged of a Muslim woman looking away in horror. But it was taken out of context and presented to suggest she was more interested in her phone than in the victims.
The post from the @SouthLoneStar Twitter account – which we now know was run by Russian propagandists – went viral. Separately, analysis by The Guardian showed that Russian accounts were quoted more than 80 times in UK media. These references included appearances in roundups of the best Twitter jokes and news reports.
Even though Facebook hasn’t found any further evidence of Russian interference with the EU referendum, that shouldn’t mean the social media giant is off the hook. “We have very little idea really of what exactly is going on at Facebook – we have to take their word for it,” says Krasodomski-Jones. Facebook has been allowed to conduct its own investigation into itself, and, without revealing its methodology, has come up with nothing to incriminate it beyond a trio of adverts linked to Russian accounts.
It’s entirely possible that there is nothing more to uncover about Facebook, Russian bots, and the EU referendum, given the narrow remit the Department for Culture, Media and Sport set for its investigation. And focusing too much on Russian bots may put us in danger of becoming online conspiracy theorists, says Krasodomski-Jones. “Russia has clearly made attempts to destabilise western democracy, and destabilise the EU, but we should be careful not to get too hung up on that and take our eye off of serious socio-economic problems in our backyard,” he says.
Russian bots would be a simple answer to a complex question. As it turns out, a large portion of the extreme content on social media is home-grown. Britain First, Tommy Robinson and others use social media because it’s an easy place to broadcast to a self-selecting audience that shares their point of view, or might come around to it with enough persuading.
This, Facebook tends to argue, is exactly the point. Its platform allows a diverse range of views and – as long as they don’t violate its policies on hate speech or abuse – it’s all good. Social media, the argument goes, is a place where people with opposing views come together to debate between themselves. Social media isn’t undermining democracy, it is democracy.
But Krasodomski-Jones isn’t convinced by this line of argument. Facebook hasn’t created an open marketplace for ideas, he says; it’s created a space where people who want to hear extremist views can easily find them and hear them to the exclusion of all other points of view. “Speech is fundamentally uncontested online,” he says. “What Facebook does is create a space in which there is no competition for ideas.”
That problem is much bigger than the question of Russian interference, and there are no obvious answers. Facebook’s very design makes it easy for people to create platforms of radicalisation, populated with real users, that fragment us into ever more extreme ideological echo chambers. “We’re dealing with a problem that is even bigger than the platforms,” says Krasodomski-Jones. “It’s a problem with communication in the 21st century.”
This article was originally published by WIRED UK