Fix Section 230 and hold tech companies to account

Making social media sites act on content such as nonconsensual porn and violent conspiracies would make the internet more equal and free

In the internet’s earliest days, all we saw was its promise. The nascent technology was celebrated for its ability to connect strangers and to ease the purchase of staples. Internet evangelizers promised a transformation in public dialogue and commerce – if law got out of the way, just as it had when the early railways and steam-run factories got their start. American lawmakers sided with the new inventors, young men (yup, all men) who made assurances that they could be trusted with our safety and privacy. In 1996, the US Congress passed Section 230 of the Communications Decency Act, which secured a legal shield for online service providers that under- or over-filtered third-party content (so long as aggressive filtering was done in good faith). It meant that tech companies were immune from lawsuits whether they removed, or declined to remove, something a third party posted on their platforms.

But, thanks to overbroad court rulings, Section 230 ended up creating a law-free zone. The US has the ignominious distinction of being a safe haven for firms hosting illegality. This isn’t just an American pathology: because the dominant social media companies are global, the illegality they host harms people worldwide. Indeed, safety ministers in South Korea and Australia tell me that they can help their citizens only so much, since the abuse is often hosted on American platforms. Section 230 is to social media companies what the Cayman Islands has long been to the banking industry.

Tech companies amplify damaging lies, violent conspiracies and privacy invasions because they generate copious ad revenue from the likes, clicks and shares. For them, the only risk is bad PR, which can be swiftly dispatched with removals, bans and apologies. For individuals and society, the costs are steep. Lies about mask wearing during the Covid-19 pandemic led to a public health disaster and death. Plans hatched on social media led to an assault on the US Capitol. Online abuse, which disproportionately targets women and minorities, silences victims and upends careers and lives.

Social media companies generally have speech policies, but content moderation is often a shell game. Companies don’t explain in detail what their content policies mean, and there is little accountability for their decisions. Safety and privacy aren’t profitable: taking down content and removing individuals deprives them of monetizable eyes and ears (and their data). Yes, that federal law gave us social media, but it came with a heavy price.

The time for having stars in our eyes about online connectivity is long over. Tech companies no longer need a subsidy to ensure future technological progress. If anything, that subsidy has impaired technological developments that are good for companies and society. All that companies have had to think about is optimizing ad revenue. They haven’t had to bear legal responsibility for the harm that their rush for ad revenue has wrought on democracy, public health and individuals denied an equal chance to work, speak and live with dignity.

We should keep Section 230 – it provides an incentive for companies to engage in monitoring – but condition it on reasonable content moderation practices that address illegality causing harm. Companies would design their services and practices knowing that they might have to defend against lawsuits unless they could show that they earned the federal legal shield. For the worst of the worst actors (such as sites devoted to nonconsensual porn or illegal gun sales), escaping liability would be tough. It’s hard to show that you have engaged in reasonable content moderation practices if hosting illegality is your business model. For the dominant platforms, this would mean ensuring that they had reasonable policies and procedures to deal with illegality causing serious harm. Over time, courts would rule on cases to show what reasonableness means, just as courts do in other areas of the law, from tort and data security to criminal procedure.

Companies couldn’t hide behind policies – they would have to show that they adopted and maintained reasonable procedures to deal with illegality such as nonconsensual porn, lies causing public health emergencies, violent conspiracies and illegal gun deals. They would have to adapt those procedures to respond to changing threats – today’s nonconsensual porn is tomorrow’s sexual assault in virtual reality. And, crucially, they would have to update their policies and procedures to account for better ways to address illegality.

In the near future, we would see social media companies adopt speech policies and practices that sideline, deemphasize or remove illegality rather than optimize its spread. Firms would no longer say that they have a policy and enforce it only when it generates bad press – that cycle, exemplified by Twitter’s and Facebook’s belated responses to President Trump’s destructive lies and harassment of individuals, would end. There wouldn’t be thousands of sites devoted to nonconsensual porn, deepfake sex videos and illegal gun sales. That world would be far safer and freer for women and minorities.

This article was originally published by WIRED UK