The UK’s war on online harms seems destined to fudge and fail

Despite the bluster, the draconian regulation might end up being little more than grandstanding
WIRED

If all goes to plan, the Online Harms White Paper will turn out to have been an elaborate fudge. That’s the only reasonable reaction to the UK government’s ostensibly landmark plan for cracking down on the spread of illegal content on the internet.

The White Paper was launched with a lot of fanfare in April 2019. At the time, it was largely perceived to be a leadership vehicle for one of the cabinet ministers sponsoring it – then-Home Secretary (and later Chancellor) Sajid Javid.

The 98-page paper sketched out a doctrine so over-the-top it was almost comical. Its promise was ridding the internet of a series of “harms” – the list, as per a handy table, included: child exploitation; terrorist content; organised immigration crime; modern slavery; extreme pornography; revenge pornography; harassment and cyberstalking; hate crime; encouragement to suicide; violence incitement; sales of weapons or drugs; prisoners’ use of the internet; and sexting between minors. Overseeing all of this, we have now learned, will be poor old Ofcom.

It’s an unenviable task. Online platforms hosting user-generated content – essentially every website allowing users to interact and share material – would be burdened with a new “duty of care”. That would require that they protect their users, especially children, from viewing posts featuring those kinds of illegal content. Failing to do so would result in fines – which is pretty par for the course – but the paper also suggested that the executives of offending companies might face civil and criminal prosecution, and that in extreme cases the government might resort to ISP blocking, i.e. taking the website offline in the UK.

It’s an astonishingly broad remit for a regulator that, until now, has overseen television and radio services, making sure among other things that they don’t broadcast offensive, illegal or otherwise harmful material.

That Ofcom would be tasked with the new duty was widely expected, so the announcement came as no great surprise to anyone following the story closely. But some details in the consultation papers in which the government made the announcement were more compelling. They seem to point in one direction: the government might be softening its stance – or maybe it was bluster all along.

First – the scope of the regulation. When the White Paper was announced, essentially everyone who runs a website featuring reviews or comment sections started to fret. Would they have to shut that down? Would they have to adopt automated content-filters to avoid being in breach? Would they be arrested? The answer, it turns out, is “probably not”. Executives at MailOnline can breathe a sigh of relief.

“The legislation will only apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or user interactions, for example through comments, forums or video sharing,” the consultation paper says. It adds that, even when smaller businesses are affected, the government will try “minimising the regulatory burden” on them. In other words, this is probably going to be about social networks, even if it could conceivably apply to other kinds of businesses such as newspapers’ comment sections, or, more to the point, adult websites like PornHub.

Second: its new powers are still undefined, but Ofcom itself will not be elevated to the rank of internet super-police. The organisation will be in charge of issuing codes of practice, overseeing companies’ compliance with those codes, and making sure that they deal effectively with users’ complaints. Ofcom’s auditing process is not clear yet, but each company will be required to file annual transparency reports on their “reporting processes and moderation practices”.

What is certain is that Ofcom will not deal with individual cases, but just make sure that companies meet certain criteria overall. That is understandable – having Ofcom officials review and adjudicate on every single incident taking place on Facebook or Twitter would be plainly unfeasible. But those decrying online platforms’ self-regulation could take exception to a model of self-regulation minus-minus-minus, where much of the oversight’s effectiveness will hinge on internet companies’ self-assessment and good faith.

The difference from self-regulation, of course, lies in the penalties – the fines, the jail terms, and the website blocking. The consultation process is still ongoing, so it might well be that all of those things find their way into the final legislation – expected by the summer. But if the views reported in the consultation papers are anything to go by, everyone in the industry is strongly opposed to any measure going further than pecuniary fines. It is fair to wonder whether the government will really decide to antagonise the totality of the tech sector just as Brexit makes its backing more vital than ever. An even more interesting question is how many vitriolic headlines the government is ready to swallow: given that online newspapers’ comment sections might theoretically fall under the scope of the new regulation, editors might not take kindly to being threatened with criminal prosecution.

Right now, the government has not plumped for any given position. “We are considering the responses to the consultation on senior management liability and business disruption measures and will set out our final policy position in the Spring,” the document reads. But don’t be surprised if the final legislation is a heavily watered-down version of what was originally presented last year. After all, this was mostly about grandstanding – and with the chief grandstander now out of a job, it’ll be a miracle if his legacy survives at all.

Gian Volpicelli is WIRED's politics editor. He tweets from @Gmvolpi

This article was originally published by WIRED UK