All that's wrong with the UK's crusade against online harms

From private messaging to implementation, problems abound in the government's white paper

How do you solve a problem like the internet? That’s the gist of the predicament at the core of the UK government’s crusade against so-called online harms. Its white paper on the issue, launched on Monday at the British Library, is a collaboration between Culture Secretary Jeremy Wright and Home Secretary Sajid Javid. First announced in late 2018, the 98-page document maps out Britain’s plan to put an end to the era of self-regulating internet platforms.

And it’s quite the plan.

The white paper recommends making internet companies – social networks, search engines, forums, messaging services and any website allowing “users to share or discover user-generated content or interact with each other” – responsible for illegal, harmful, or otherwise disreputable content appearing on their platforms.

“Online harms” is a broad category: Javid, eager to put his seal on the policy, talked mostly about terrorism and child sexual abuse. “I warned the web giants. I told them that keeping our children safe is my number one priority as Home Secretary,” he said at the launch of the white paper. He also cited the Facebook live-stream of the Christchurch shooting and Daesh bride Shamima Begum’s online radicalisation as examples of internet-generated disasters.

But the term covers a disparate array of ills, including revenge porn, hate crime, harassment, promotion of self-harm, content uploaded by prisoners, disinformation, trolling, and the sale of illegal goods.

A new independent regulator will create “codes of practice” detailing how best to deal with each of those harms; platforms that do not comply will be fined (in proportion to their revenues), and the paper suggests that they might be taken offline in the UK, and that their executives might be held liable in civil or criminal proceedings.

In addition, the regulator will be in charge of promoting digital literacy, encouraging platforms to make their data accessible to external researchers, and overseeing how each company is countering online harms – on the basis of public reports filed annually by internet platforms. (Quite a thick brief for a body that does not even exist yet.)

The white paper also suggests measures to ensure that users who have experienced online harm can obtain swift and effective redress from the companies. This could be achieved either by requiring that companies review complaints according to more stringent internal criteria – set by the regulator – or by creating a new complaints process entirely independent of the company itself.

In an article in Metro, Prime Minister Theresa May extolled the white paper as the epitome of an ambitious, trailblazing effort. “I want this country to be the safest place to be online — especially for children and vulnerable people. This means internet firms must take responsibility for their content and platforms,” she wrote. “We are leading the way on this internationally.”

The document is a first step towards holding powerful internet companies to account for negligence in confronting illegal or unsavoury content on their platforms. Until recently, tech companies’ usual defence was pettifogging about the distinction between platforms (which bear no responsibility for user-generated content) and publishers (which are responsible, as the ultimate decision-makers); the white paper simply waves that distinction away. A bold move, if arguably justified by recent events – from the Cambridge Analytica scandal to Christchurch.

Still, many questions about the document remain.

Is it draconian, or grandstanding?

The white paper’s main innovation is the introduction of an independent regulator, tasked with establishing good practices for internet platforms, ensuring compliance, and enforcing penalties in case of violations. The go-to penalty will be a fine, but the white paper toys with the idea of imposing stricter sanctions, given “the global nature of many online services and the weak economic incentives for companies to change their behaviour”.

These sanctions include “disruption of business activities” – that is, asking third-party companies to stop providing services or facilitating access to the non-compliant platform. That means that a platform found in serious breach of the code of practice would be erased from “search results, app stores, or links on social media posts.”

The nuclear option, though, is internet service provider (ISP) blocking, which would simply prevent UK users from accessing an offending website. It is hard not to call that state-sanctioned internet censorship.

Granted, ISP blocking would only be considered in cases of repeated and egregious failure to address illegal harms – which means that practices labelled as harmful but not explicitly defined and forbidden by law, such as disinformation or trolling, would be exempt.

But dangling such a devastating penalty might push some companies to be overly cautious about the content they allow on their platforms. Javid – while insisting that he takes technological innovation and freedom of speech seriously – even said that platforms should use AI-powered filters to stave off the uploading of terrorist content.

Of course, the white paper will have to pass through several stages before becoming law, and it might well be watered down. And since Javid is warming up for a Tory leadership race, this could just be a dog and pony show he engineered to win over party members.

Will this apply to private conversations?

Theoretically, the measures laid out in the white paper apply to “social media platforms, file hosting sites, public discussion forums, messaging services, and search engines”. Does that mean that our private WhatsApp or Facebook Messenger conversations will also fall within the purview of the new regulator – and possibly be monitored, disrupted, and censored? Unclear.

The white paper hedges: “Reflecting the importance of privacy, the framework will also ensure a differentiated approach for private communication, meaning any requirements to scan or monitor content for tightly defined categories of illegal content will not apply to private channels.”

Consultations on how to apply the white paper’s rules to private messaging platforms are ongoing.

Does it apply to news websites?

One of the online harms outlined in the white paper is “online abuse of public figures” – the intimidation, insults, and threats levelled at politicians, celebrities, and journalists on Twitter and elsewhere. The section cites research from The Guardian showing that most of the abusive comments left on its website were directed at female or black journalists.

The citation raises the question of whether The Guardian – and any other news website featuring a comment section – might be regarded as a platform for user-generated content, required to follow the new code of practice, and penalised for not policing harmful comments.

At the British Library, answering a question from a Guardian journalist, Wright assured the audience that this would not be the case. “[What we are interested in] is user-generated content where there is no other control of that behaviour. The activities of newspapers and broadcasters are already regulated.” The letter of the white paper, nonetheless, leaves ample room for ambiguity.

How long will it take?

According to Wright, speaking at the British Library launch, it will take about two years before this white paper is converted into concrete law – and probably even longer before everything is set up for implementation. Think about what the internet looked like two years ago: the Cambridge Analytica story had not yet broken, Facebook’s reputation was still relatively untarnished, and the horrific potential of live-streamed violence was only beginning to emerge. Will the white paper still be relevant for regulating the internet in 2021?

This article was originally published by WIRED UK