The way tech companies deal with online harassment and abuse is broken. YouTube allows anti-Semitism to stay live. Twitter waffles as targeted harassment runs rampant. Facebook takes down an iconic photo that shouldn’t be banned. Now one German politician is tired of letting platforms make excuses.
Heiko Maas, Germany’s minister of justice and consumer protection, said this week that he will propose a law that would fine social media companies up to €50 million ($53 million) for failing to respond quickly enough to reports of illegal content or hate speech. The law would require social media platforms to give users easy ways to report hateful content. Companies would have 24 hours to respond to "obviously criminal content" and a week for more ambiguous cases.
It’s an intriguing idea, and on one level very satisfying. These platforms have failed to uphold effective standards, and now the authorities will force them to act. But in practice, untangling the rights and responsibilities of platforms, governments, and users isn't so simple. Tech companies should work to tamp down hate speech. But no one has a good answer to how they should do it or how far they should go. Government force is a blunt instrument, one that could hamper the more complex work democracies face in sorting out who bears which responsibilities for platforms that have come to define so much of 21st-century social existence.
It's time to imagine a new system, says Tarleton Gillespie, a social media researcher at Microsoft Research: "Platforms should be, theoretically, as open as possible to user contributions, but they also have a greater responsibility for the whole that is the sum of all those contributions."
In the US, citizens have the freedom to say what they want without fear of government censorship. That same right gives internet companies the freedom not to act as a hate-speech venue if they choose. Still, companies have regularly refused to go that far. "Google, Facebook, and Twitter are US companies," says Stefan Heumann, co-director of Stiftung Neue Verantwortung, a Berlin-based think tank focused on technology and public policy. "The rules they set regarding speech reflect US constitutional and cultural values—freedom of speech is treated as nearly an absolute right."
Not so in Europe, Heumann says, especially in Germany, where the memory of Nazism led to bans on hate speech and on access to extremist propaganda. "There is freedom of speech," says Volker Berghahn, a historian of German and modern European history at Columbia University, "but within limits of laws and court decisions that were promulgated against the background of bitter historical experiences." Unsurprisingly, that legacy has found its way online. In May, the European Commission set forth an anti-hate-speech code of conduct, enlisting US tech giants from Facebook and YouTube to Twitter and Microsoft to take part in the fight.
But placing too much responsibility on the shoulders of private-sector companies carries its own danger. When those companies bear the primary responsibility for enforcing standards, they become empowered to put their own interests first. In a sense, legal measures delegating anti-hate-speech enforcement to companies put, say, a social network's terms of service above the law, argues Kate Coyer, a fellow at Harvard University’s Berkman Center for Internet and Society. “The high fines for not taking down illegal content quickly enough give companies an incentive to rather remove content than not—since there are no fines if they wrongly determine to remove content,” Heumann adds. In other words, such laws may wind up encouraging a kind of default corporate censorship.
Relying solely on government action brings problems of its own. One month after the Charlie Hebdo attacks in France, the French government claimed the power to block websites without a court order if it determined they promoted terrorism. Due process was denied, and the police became content regulators, Coyer says. "These laws also have a domino effect of allowing regimes like Russia to justify their own troubling laws."
So how do you balance shared public values with individual rights online? The EU's code of conduct offers one rudimentary example. But critics contend the process lacked transparency and gave US tech companies too much sway.
If the rise of the internet has taught the world anything, it's that users, just like citizens, want to have a say. These platforms are still maturing, along with the public’s notions of what obligations they should have. Gillespie argues that platforms ultimately need to adopt a more democratic understanding of themselves. "We have to begin thinking about their responsibility to more substantively engage with users on what platforms are and how they should be governed, to involve users more in setting the terms for how these public venues work," he says. The history of television offers a model, or in fact several, from the FCC to the BBC.
In the end, Coyer says, internet companies actually have a financial interest in giving users more options. "The alternative is more regulation," she says. Companies may or may not warm to the task of getting more aggressive about hate speech. But you can always count on them to not want the government to tell them what to do.