The strange story of Section 230, the obscure law that created our flawed, broken internet

In 1995, it looked as though we might be heading towards an internet where censorship ruled supreme and moderation invited ruinous lawsuits. Then along came Section 230

In the immediate aftermath of the Christchurch attack, as Facebook and YouTube struggled to cope with the horrific footage flooding their platforms, it appeared that the internet was fundamentally broken. At YouTube, moderators scrambled to remove videos uploaded at a rate of one every second while Facebook blocked or removed 1.5 million videos in the 24 hours after the attack.

But the internet wasn’t broken. It was working exactly as designed.

For most of the web’s history there have been two golden rules: you can put whatever you like online, but the company hosting your speech can also take down whatever it doesn’t like. These rules have enabled the internet we have today – a place where almost all of the content is provided by users. None of the internet giants – Google, Twitter, Facebook, Amazon – would exist without them.

None of this was inevitable. By 1995 it looked certain that we were heading towards an internet where obscenity and indecency were censored and moderating online comments would leave companies open to multi-million dollar lawsuits.

But then an unlikely series of events changed the direction of the internet forever. It is a story that involves the earliest trolls, America’s deadliest domestic terror attack and a pair of politicians from across the aisle desperate to bridge what they thought of as the hyper-partisan politics of the mid-1990s. And at the heart of it all are 26 words that gave us the internet we have today.

Representative Chris Cox was 36,000 feet above America when he flipped open the Wall Street Journal and happened across an article that would end up shaping the modern internet. It was the spring of 1995 and Cox – a Republican member of the US House of Representatives – was flying from Washington D.C. back to his home state, California.

Cox had landed on an article about a ruling by the New York Supreme Court. The case involved an online message board run by Prodigy – a now-defunct firm that at the time ran one of America’s largest online services. At the heart of the case was a post on a message board in which an unknown user claimed that Stratton Oakmont – the New York brokerage firm later immortalised in The Wolf of Wall Street – and its president had committed fraud.

Stratton Oakmont sued Prodigy for defamation, and the case boiled down to a simple question: was Prodigy legally responsible for something one of its users posted? The problem went right into the weeds of defamation law. If Prodigy was considered a “publisher” of the comments, then it was responsible. If it was a mere “distributor” it was off the hook.

In May 1995 the court decided that Prodigy was liable for the defamatory statements – arguing that because the firm had content guidelines and used software to remove offensive language, it was legally responsible for the content of those posts. This reasoning rankled Cox. Why should companies be penalised for trying to moderate online content?

A similar case in 1991 had gone the other way because the defendant – CompuServe – had made no effort to review the vast amount of content on its forums. This left online firms in an odd position. If they moderated online content, they were liable for anything their users said. If they wanted to dodge that risk, all they had to do was avoid moderating anything at all.

To Cox, this reasoning seemed perfectly backwards. He wanted a cleaner web, where it was harder for people to stumble across obscene or offensive content. If companies that tried to moderate the web got slapped with multi-million dollar lawsuits, who would want to clean it up? “It struck me that if that rule was going to take hold then the internet would become the Wild West and nobody would have any incentive to keep the internet civil,” he says.

But the internet of 1995 was different from today’s in almost every way imaginable. Back then, there were only 16 million people online in the entire world. Mark Zuckerberg had only just started middle school and Google wouldn’t exist for another three years. People who were talking online mostly gathered in niche special-interest boards to chat about movies, games or personal finance.

Then the user-generated internet arrived. Instead of niche interest groups, we had people like Zuckerberg talking about creating the “town square” of the internet. Now Facebook has 2.2 billion monthly users. Jeff Bezos built a trillion-dollar company based on the realisation that instead of selling books, he could make more money connecting buyers with third-party sellers. Google sprang up to direct people to user-generated content all over the web. And all of it is governed by a set of rules devised for an earlier, simpler internet.

Cox, who in 1995 had recently been elected to the Republican leadership of the House, had been on the hunt for a piece of legislation that both parties could agree on. He realised that this Wall Street Journal article could be it. Together with a Democratic Representative from Oregon, Ron Wyden, he wrote a small addition to the Telecommunications Act – a major overhaul of US law that attempted to address the question of internet regulation for the first time.

The result was Section 230. In 26 short words, the legislation sketched out the future of the internet. The heart of the legislation was a provision that made it clear that online platforms were not responsible for material that their users posted online. “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” it reads.

But equally important is a ‘Good Samaritan’ clause that gives the platforms the ultimate say over what they do, or do not, allow online. Platforms would be free to remove any content they considered to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

Together, these two parts of Section 230 set the boundaries of the internet we have today. Zuckerberg’s dream of a global town square is right there in embryonic form in Section 230. It gave internet startups and their investors the confidence that they could fill their platforms with content from ordinary users, without attracting any legal liability for anything those users might write. And it allowed Zuckerberg to set the terms of his town square. If he didn’t want a particular user on his site – or a certain kind of speech – he could remove it, no questions asked.

“Its role in enabling a certain kind of technical innovation is unambiguous,” says Daphne Keller at Stanford Law School’s Center for Internet and Society. “It made it possible for investors to get behind companies who were in the business of transmitting so much speech and information that they couldn't possibly assess it all and figure what was legal or illegal.”

It is hard to overstate how foundational Section 230 has been for enabling all kinds of online innovations. It’s why Amazon can exist, even when third-party sellers flog Nazi memorabilia and dangerous medical misinformation. It’s why YouTube can exist, even when paedophiles flood the comment sections of videos. And it’s why Facebook can exist even when a terrorist uses the platform to stream the massacre of innocent people. It allows platforms to remove all of these bad things without being held legally responsible for them.

These might all sound like terrible things – and if you think so, you’d have a point. But a world without Section 230 could well make all of them worse. “If we were to adopt a policy that sites were liable for the third-party content they publish they'd do one of two things,” says Eric Goldman, a scholar of Section 230 and a professor at Santa Clara University School of Law. “They'd either not publish at all or they'd look for ways to turn over responsibility to other people.”

In other words, if the law didn’t let platforms moderate to their heart’s content, we’d end up with an internet with almost no moderation at all. It’d be as if all websites with user-generated content were modelled on 8chan, Gab, or the worst parts of Reddit. Without Section 230, Goldman says, you’d have an endless online free-for-all. “If you turn over control to the crazies, the crazies will win.”

Representative Chris Cox was a driving force behind Section 230 when it was conceived in the mid-1990s (Paul J. Richards/AFP/Getty Images)

In February 1996, a stroke of Bill Clinton’s pen turned Cox and Wyden’s dream of taming the Wild West of the internet into law. But it still wasn’t quite clear how Section 230 would work in practice. Would courts take a broad interpretation of Section 230 – finding that companies should only be held liable for content in very specific circumstances – or would they argue that its scope didn’t cover certain kinds of content?

Twenty months after Clinton signed the Act, a case came along that would start to provide these answers. It involved a terrorist bombing, a canonical example of online trolling, and one of the greatest unsolved mysteries on the internet. Who framed Kenneth Zeran?

On April 19, 1995, two domestic terrorists bombed a federal government building in downtown Oklahoma City. In total, 168 people were killed and more than 500 injured, making it the deadliest terrorist atrocity on American soil until September 11, 2001. Six days later, a message appeared on an AOL bulletin board selling t-shirts with slogans mocking the bombing, such as “Visit Oklahoma… It’s a BLAST!!!” The post invited readers to call “Ken” and included the real phone number of Kenneth Zeran, who had no idea about the advert until the calls started flooding in. He didn’t even have an AOL account.

By the end of the month, Zeran was receiving abusive calls at a rate of one every two minutes – something that was made worse by a local radio host who encouraged listeners to ring Ken and give him a piece of their mind. “Problem was, it wasn’t posted by Ken. It was posted by an anonymous poster. We don't know who, they've never been found,” says Goldman.

A year later, in April 1996, Zeran sued AOL, alleging that the firm had been negligent in refusing to remove the malicious information about him. After a lower court decided in AOL’s favour, the case eventually made its way to the Fourth Circuit appeals court in Richmond, Virginia. This was a decisive moment for Section 230. The Fourth Circuit sits just below the US Supreme Court in terms of importance, and its decisions set influential precedents for how later cases will be handled. To Zeran’s disappointment, the court’s ruling was unequivocally in favour of AOL. Section 230 had won again.

“Whatever decision [AOL] made – to leave the post up or take it down – was equally protected by Section 230,” Goldman says. This was the first time such a senior court had interpreted the new law, and it set a legal precedent for how Section 230 would be viewed in subsequent cases. The Fourth Circuit’s reading of Section 230 was generous, saying that even if AOL was aware of the malicious content and did nothing about it, Section 230 still absolved the company of any legal liability. It hinted that when it came to online free speech, Section 230 would be on the side of online platforms.

Zeran, who maintains that he was the victim of a random attack, has become an accidental keystone in the world of internet speech law. In 2011, Goldman invited him to speak at a conference he hosted to celebrate the fifteenth anniversary of Section 230. “Unfortunately he used [his speech] not to tell the details of his story but to rail against section 230 and propose a legislative agenda – it wasn't really the right audience for that.”

Much to Zeran’s chagrin, his legal battle laid the groundwork for countless later defences built on Section 230. Big tech platforms have invoked Section 230 dozens of times, Goldman says, and the law is now so well known that most lawyers will not even bother bringing such cases against big platforms. Section 230 was later used to uphold eBay’s immunity when a user sold forged memorabilia on the platform, and Myspace’s immunity when a 13-year-old girl was sexually assaulted by someone she met on the platform after she lied about her age online.

For Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union, this legal protection gave internet startups the room they needed to flourish online. “There is no way you can have a YouTube where somebody needs to watch every video. There is no way you can have a Facebook if somebody needs to watch every post. There would be no Google if someone had to check every search result,” she says. “It sounds very simple, but the reason the internet is what it is today is because of that. Without question.”

Facebook, which has 2.2 billion monthly users, likely wouldn't exist were it not for the protections provided by Section 230 (Getty Images)

But 24 years after its conception, the law that made the internet is starting to creak under the strain of platforms of unprecedented size and influence. As social networks have attracted increasing flak for their roles in enabling electoral interference, misinformation and online hatred, Section 230 has itself come under fire. Once a proud example of bipartisan, pro-innovation legislation, Section 230 is now regularly disparaged on both sides of the aisle, Goldman says. “Love for Section 230 is at the lowest point ever and I don’t see how it's going to turn around and trend upward again.”

In April 2018, a pair of controversial bills aimed at curbing sex trafficking online became law in the US. FOSTA-SESTA carves out an exception to Section 230 that means that online services are exposed to legal liability if third parties are found to be posting adverts for prostitution – including consensual sex work – on their platforms. When it came to the Senate vote on FOSTA-SESTA, only Ron Wyden and Rand Paul voted against the bill.

Critics of FOSTA-SESTA argue that the law stifles political, sexual and artistic expression online without doing anything to reduce sex trafficking. Daphne Keller is one of a number of lawyers challenging FOSTA-SESTA as unconstitutional. “If you wanted to change Section 230 and give platforms more takedown obligations there are smart ways and dumb ways to do it, and FOSTA was a dumb way to do it. It's really badly drafted and really hard to tell what platforms are supposed to do,” she says.

In response to the law, the online classified adverts site Craigslist removed its personals section over concerns that it could fall foul of FOSTA-SESTA. “They just categorically gave up rather than try and navigate the liability scheme that they were facing,” Goldman says. And in December 2018, Tumblr started blocking all porn on its website in an effort to comply with the law.

While legislation like FOSTA-SESTA chips away at the edges of Section 230, in the halls of Congress some lawmakers seem to be sharpening their knives against free speech online even more overtly. In March 2019, Representative Devin Nunes filed a lawsuit against Twitter and two parody accounts, alleging defamation and political bias against conservatives. Of course, as scholars of Section 230 know, Twitter can remove anything it likes, or leave it online, free from legal liability, but Goldman says that Nunes’ suit points to a dangerous disregard for these online protections. Nunes doesn’t need to sue Twitter to hurt it, he points out – he could introduce legislation to attack Section 230.

And further afield, other countries are starting to weigh in on how speech should be regulated online, shifting the balance of power away from the US and towards Europe for the first time in the internet’s history. In May 2016 Twitter, Facebook, YouTube and Microsoft agreed to update their terms of service to fall into line with the European Commission’s code of conduct against hate speech. This meant that each of these companies essentially agreed to apply European rules to every one of their users globally.

In the UK, the government is considering legislation that could force social media firms to do more to tackle a range of issues, from cyberbullying to child sexual exploitation. In Germany, a law that came into effect in 2018 requires social media websites to remove hate speech within 24 hours of becoming aware of it.

Eager to comply with regulation in Europe, tech firms may well find themselves voluntarily rolling out similar rules globally – as was the case with the European Commission’s hate speech code of conduct. Keller says that this has shifted the balance of power over speech online in favour of Europe. “The net exporter of speech rules now is Europe,” she says.

But there’s no guarantee that tighter moderation laws will solve any of the problems the internet is facing. The dream of regulating the internet into harmony may rest on the assumption that the internet is to blame for what are really broader societal ills. “Many of the things that people have complained about is when the internet is acting as a mirror, not an accelerator, so the internet takes the fall in their minds but fixing the internet doesn't fix the problem at all. It just takes away the mirror,” Goldman says.

We might like the sound of a global public square, but when there are two billion people on a platform, that just leaves a lot of people to disagree with. “There's not a good solution for the global media networks that are trying to create spaces that we all cohabitate. I think that's a losing game – I don't see how we can let that happen without someone getting really really upset,” he says.

For Keller, the answer to the internet’s problems might not be tighter speech laws, but more effective competition law. “If there were 10 different Facebooks and they all had different rules about speech – what gets taken down and what stays up, and what gets amplified with the algorithm – then none of them would be as important,” she says. Rather than feeling like we lived under the control of a handful of companies with a rigid set of rules, we could find our own spaces online with speech rules that reflect our personal viewpoints more closely.

For Goldman, the answer isn’t in re-jigging the law but in re-thinking how our social networks work. “If you’re going to have an environment where both pro- and anti-vaxxer content is shared and sits next to each other, that community is not going to succeed,” he says. “I don't think it's possible for them to talk to each other.”

But is it possible to put the social network genie back into the bottle? “The web as it exists today is a different animal,” Cox says. His attempt to make the web a more wholesome place for his children helped enable the rise of huge platforms and reveal the unprecedented scale of human intolerance and hatred. Perhaps we got the internet we deserved after all.

This article was originally published by WIRED UK