Germany threatens Facebook with new laws forcing it to tackle hate speech and fake news

The ruling party may legislate to force internet giants to take down hate speech within 24 hours, or face fines of up to €500,000
The Reichstag: Chancellor Angela Merkel's ruling party last week announced it may write new laws forcing Facebook, Twitter and others to remove hate speech within 24 hours of it being flagged by a victim, or face fines of up to €500,000. Photo: Florian Gaertner / Photothek / Getty Images

Germany may force social networks to pay hefty fines if they fail to swiftly tackle hate speech online.

The proposed measures, announced last week by Chancellor Angela Merkel’s ruling party, include forcing Facebook, Twitter and others to remove hate speech within 24 hours of it being flagged by a victim, to publish figures on the number of complaints they receive, and to employ legally trained staff to preside over decisions to delete content. The internet giants could face fines of up to €500,000 for failing to remove an item quickly enough.

“After years of discussions, the social media must now, unfortunately, be forced to take responsibility,” the Wall Street Journal reports parliamentary floor leader Volker Kauder as saying. “Only in this way we can counter further brutalisation and willful manipulation of political debate in the net.” He went on to argue that victims should be able to find out who created offensive posts, while a member of the Social Democrats, the coalition partner, told Der Spiegel that corrections should be published and given the same ‘air time’ as the original post.

The ruling Christian Democrats party plans to begin drawing up legislation in 2017.

The sudden urgency to act has been instigated by a number of factors. The furore over the proliferation of fake news leading up to the US presidential election, and the suggestion that this could have impacted the outcome, is at the forefront of the party’s mind ahead of Germany’s federal elections next year. Following the CIA’s announcement that it believes Russia meddled to some extent in propaganda circulating pre-election, there are fears the same may be true of upcoming elections in France and Germany.

Add to this the fact that Germany was effectively already given the green light to begin preparing new measures, after EU Justice Commissioner Vera Jourova warned earlier this month that similar laws may soon be written to tackle the failings of US tech giants in this area. Her words followed a report that revealed Facebook, Microsoft, Twitter and YouTube were failing to live up to the terms of a voluntary ‘Code of Conduct on Countering Illegal Hate Speech Online’, itself created in response to an increase in racist content proliferating across the web. That code of conduct avoided the need for new legislation, and was designed to compel the companies to review most “valid notifications for removal of illegal hate speech” in under 24 hours, removing or disabling said content if necessary.

According to Jourova, who commissioned a report on compliance with the code of conduct, this is not happening. “They only reviewed 40 per cent of the recorded cases in less than 24 hours,” an official commenting on the report told Reuters in early December. “After 48 hours, the figure is more than 80 per cent. This shows the target can realistically be achieved, but this will need much stronger efforts by the IT companies."

The internet giants were warned they would have to “act quickly and make a strong effort in the coming months" to avoid new legislation, and Germany appears to be preparing to be let down.

For its part, Facebook has taken a series of new measures to help counter fake news and hate speech in the past few weeks. It is hiring a new head of news and has released new tools designed to tackle the problem. The latter help users make complaints more easily and bring in third-party fact-checkers. The tool will be shared with fact-checkers at ABC News, FactCheck.org, the Associated Press, Snopes and PolitiFact. "We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we’re approaching this problem carefully," said Adam Mosseri, Facebook’s vice president of the news feed, in a blog post. "We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organisations."

This article was originally published by WIRED UK