Theresa May: tech firms may be fined if they don't remove terror content

The UK will work with France to help create a 'legal liability' for tackling terrorism-related posts

Prime Minister Theresa May has confirmed the UK will look to create a new 'legal liability' that could see Facebook, Google, Twitter and other leading tech firms fined if they don't remove "unacceptable content" from their websites.


In a brief statement issued by the Prime Minister's office, May says the UK has formed a partnership with France to "tackle online radicalisation". The statement says the two countries aim to take "much stronger action against tech companies that fail to remove unacceptable content".

"The UK and France will work together to encourage corporations to do more and abide by their social responsibility to step up their efforts to remove harmful content from their networks, including exploring the possibility of creating a new legal liability for tech companies if they fail to remove unacceptable content," May says.

The legal liability would include the potential for firms to be fined if they don't remove questionable content. May adds that the UK government will keep working with technology companies and wants to help them develop "tools" that can "identify and remove harmful material automatically".

The announcement follows multiple suggestions from May and Conservative Party colleagues that "cyberspace" should be regulated. In the build-up to the June snap general election and following both the Manchester and London Bridge terror attacks, May said there should be no "safe space" for those planning terror attacks to talk online.


However, many of the regulation suggestions have been criticised for being heavy-handed. "I think it's mostly interesting in what it doesn't say," Eerke Boiten, a professor of cybersecurity at De Montfort University, tells WIRED. "Sanctions if tech companies 'fail to take action' - but when? After they have been notified?"

Boiten adds that the approach from the UK may be "inspired" by online extremism measures that are being developed in Europe. In April, government ministers in Germany approved plans to fine social media companies up to €50 million (£42.7m) if they fail to remove hate speech and fake news quickly.


The plans in Germany would give companies 24 hours to block content once it has been reported to them; if they fail to do so, authorities would be able to fine them. At the time, a Facebook spokesperson told the BBC the law would "force private companies rather than the courts" to judge what is illegal.

Following May's suggestions in June, after the London Bridge terror attack, that social media companies should be doing more to stop hate speech online, the companies reacted quickly. Facebook, Twitter, and Google all said they were already working with the government on countering online radicalisation and had removed thousands of accounts posting illegal content.

Boiten adds that removing illegal content "automatically" from social networks will prove a problem when attempted in the real world. "It would also falsely imply the existence of tools which can check content for 'extremism' automatically – whereas, in reality, automatic techniques either have to let through a lot of dubious material or overreach to the extent that it should raise censorship worries".

This article was originally published by WIRED UK