Facebook could block far-right hate speech, so why isn't it?

Facebook, Google and Twitter have successfully pushed Islamic State off their platforms, which makes hand-wringing over tackling the far-right hard to understand

At its peak in 2015, Islamic State held vast swathes of western Iraq and eastern Syria, territory that contained as many as eight million people. But its influence went far beyond the towns it had captured. Online, Isis proselytised with impunity, sharing propaganda on social media at an astonishing rate. In an average week, its media wing could churn out around two hundred different pieces of content, from videos to written reports of recent battles.

Now, Isis is a shadow of its former self online. This is partly due to the terrorist group’s decline offline, but it’s also due to a concerted effort from the world’s biggest tech companies to force the group off their platforms. In the first half of 2016, Twitter suspended 235,000 accounts that promoted terrorism online. At the end of November 2017, Facebook said it was successfully using artificial intelligence to automatically detect terrorism-related posts. Once a piece of terrorist content was flagged, the company removed 83 per cent of the material within an hour of it being posted. Facebook says that 99 per cent of the material it removes is detected by its own systems rather than reported by its users.

But this week, representatives from Twitter, Facebook and Google squirmed before the Home Affairs Select Committee as MPs accused them of not doing enough to remove far-right hatred and threats. “The police have said very clearly they are extremely worried about online radicalisation and grooming. Isn't the real truth that your algorithms and the way you want to attract people to look at other linked and connected things [...] are doing that grooming and radicalising,” said Committee chair Yvette Cooper.

Cooper read out one anti-semitic tweet that had been put to Twitter the last time the social media companies appeared before the Committee, in March this year. At the time, Twitter’s UK director of public policy, Nick Pickles, agreed that the tweet was a clear violation of Twitter’s rules and should not be on the platform. That tweet is still online. “What is it that we have to do to get you to take it down?” Cooper asked Twitter’s vice president of public policy and communications, Sinead McSweeney.

Normally, tweets that violate Twitter’s rules are taken down within a day or so, McSweeney said, adding that the company was now taking action against ten times as many accounts as it had in the past. Facebook’s representative told the Committee that it now had 7,500 people reviewing content. Yesterday, Twitter suspended the accounts of Jayda Fransen and Paul Golding, the leaders of racist hate group Britain First, as well as the group’s official account. YouTube has placed restrictions on some Britain First videos, warning users that their content “has been identified by the YouTube community as inappropriate or offensive to some audiences”, and has pulled advertising from others.

Despite these efforts, it is manifestly clear that these companies are not doing enough to keep hatred and threats off their platforms. For too long the onus has been on individual users to report violent and hateful content – and as the tweet Cooper mentioned makes clear, those reports are often ignored.

Time and time again, representatives from these companies tell us that their failures are because of the technical challenges that come along with rooting out hate at such a vast scale. But their success at combating Isis on their platforms suggests that they could do much more to take on hate if they only wanted to. So what gives?

When pressed on why Facebook hadn’t followed Twitter’s lead and banned Britain First from its site, the company’s UK policy director Simon Milner told the Committee that it had to be “very cautious” because Britain First was a registered political party. Milner is mistaken: Britain First was deregistered as a political party by the Electoral Commission earlier this month. But Facebook’s unwillingness to take on Britain First – an organisation that has shared videos from Isis – speaks volumes about its attitude to its responsibilities.

By refusing to remove pages that deliberately share fake content in order to stir up hatred against Islam, Facebook is only fuelling this disturbing brand of extremism, while its algorithms actively seek out people who might be receptive to those views. Facebook’s ad service allows people to target posts directly at people who have expressed an interest in far-right politics. If you aim an ad at the two million people who like Britain First on Facebook, its algorithms suggest tens of millions of other people who have expressed an interest in similar topics.

As much as Facebook might outwardly dislike the kind of content shared by Britain First, the social media giant is well aware that clicks and shares are its lifeblood. Look under any Britain First post and you’ll see hundreds of comments and shares from people who go to Facebook to see exactly the kind of content that Britain First delivers so readily. While Twitter and YouTube have taken a tentative step towards cutting off extremists from the audiences they crave, Facebook – for now – refuses to do the same.

This article was originally published by WIRED UK