Passing laws to force Facebook to fix fake news is asking for trouble

Regulating internet platforms is necessary. It is also a can of worms

It’s been quite a ride. When the Digital, Culture, Media and Sport (DCMS) Select Committee’s investigation into disinformation and “fake news” began eighteen months ago, most of Britain was still blissfully unfamiliar with entities such as Cambridge Analytica and AIQ, people like Alexander Nix and Chris Wylie, and concepts like microtargeting and lookalike audiences.

But almost everyone in the UK was well acquainted with what would be branded the real villain of the whole scandal: Facebook. If anyone comes out of the report looking squarely bad – worse than the scheming toffs at Cambridge Analytica, worse than the hard-nosed politicos, worse, even, than Russian trolls – it is Mark Zuckerberg and his company. Nearly every page of the Committee’s report is devoted to expounding on Facebook’s nefariousness, obfuscation, and unreliability. Labelled a “digital gangster”, Facebook is accused of failing to tackle the spread of Russian disinformation, of stonewalling the Committee’s inquiry, and of handling its users’ data with cavalier disregard and outright avarice, among other things.

What the report makes abundantly clear is that the era of self-regulating technology companies, a notion that was tacitly accepted in more techno-optimistic days, must end. States have to step in, approve a Code of Ethics for internet companies, and hit violators – chief among them Facebook – with massive fines.

It makes sense. Whatever you think of Facebook – whether you go along with the Committee’s scathing assessment, or regard it as a flawed but not diabolical corporation grappling with intractable issues – the report, and recent history, paint the picture of a company that reacts to the problems marring its platform only when it is too late. The one thing that does seem to prompt Facebook to pre-empt rather than react is the threat of being hit in the wallet.

The Commons’ report cites Germany and France as success stories in this respect. In 2017, the German Bundestag passed the NetzDG, a law that imposes fines of up to €50 million on social media companies that fail to remove “manifestly unlawful” posts – including hate speech, incitement to crime, and defamatory content (which in some cases coincides with “fake news”) – within 24 hours. Facebook reacted by hiring German-speaking content moderators en masse: according to the Commons’ report, “one in six of Facebook’s moderators now works in Germany, which is practical evidence that legislation can work.”

It did work – to the extent that it forced Facebook to act. But it also entrusted Facebook’s army of moderators with making snap judgements on what is and is not “manifestly unlawful”; unsurprisingly, the risk of stonking fines led Facebook’s moderators to err heavily on the side of caution, removing dozens of posts that were disagreeable but not illegal.

France’s answer to the NetzDG, passed in 2018 amid a hail of criticism, is more narrowly targeted at the “fake news” phenomenon. In the three months before an election, it allows French judges to order the immediate removal of online disinformation, and to impose fines of up to €75,000 for violations.

It could be argued that the French model is superior to the German one, as it vests the ultimate power to decide what should be removed in judges, rather than in Facebook employees. The truth is that both models are tricky – and, in fact, any attempt to solve disinformation by law is guaranteed to be a mess.

The German law, for instance, was put to the test in May 2018, when a judge ordered Facebook to make unlawful content invisible to anyone connecting from within Germany – including people using VPNs to skirt Facebook’s geolocation-based block, which hides offending posts from German IP addresses but not from users in other countries. The only way to comply with the judge’s order, therefore, was to delete the posts altogether, which Facebook did.

Fair enough, if one thinks that most posts removed under the German law would have fallen foul of Facebook's community standards anyway. But it is also worrying that one country might have the power to decide what content should and should not be accessible to users in every corner of the planet.

It might sound all right when the judge making the call on what constitutes “unlawful content” or “disinformation” is German, French or, if the DCMS Committee’s recommendations go anywhere, British. It might be a tad more unsavoury when the request to wipe a post – potentially at a global level – comes from Russia, Venezuela, or the Philippines, all of which have taken steps towards implementing NetzDG-style legal frameworks to counter disinformation. A global struggle over the very meaning of information and disinformation is not inconceivable.

Facebook’s inertia in the face of disinformation has been bad for democracy, and has deservedly earned it a bad reputation. Regulating Facebook – and other internet platforms – is a necessity. But let’s not fool ourselves that things will be easy after that.

This article was originally published by WIRED UK