Facebook, Twitter, Microsoft, YouTube launch shared terrorist media database

Terrorist propaganda content will be assigned a digital fingerprint, which is then used to identify it across all four services
An Islamic State flag and banner near the city of Kirkuk, Iraq. Credit: Marwan Ibrahim/AFP/Getty Images

Facebook, Microsoft, Twitter and YouTube have launched a shared database of terrorist material uncovered on their networks, in a bid to speed up the removal of harmful content.

The joint statement comes one day after all four companies were publicly criticised by EU Justice Commissioner Vera Jourova for failing to adhere to a voluntary code of conduct signed in May, which stipulated they review and disable most “valid” notifications of hate speech within 24 hours.

The new statement appears to go one step further than the code of conduct, and is a public pledge for the companies to take down what it calls “the most extreme and egregious terrorist images and videos” more efficiently. The statement does, however, maintain that the companies take “swift action” only when “alerted” to content, so although the database itself is a proactive step, the largely passive approach the tech giants use to tackle the problem has not changed.

How will the database work?

From today, the four tech giants will each contribute to a shared database populated with media that has been assigned a hash, something akin to a digital fingerprint. This media will include “violent terrorist imagery or terrorist recruitment videos and images we have removed from our services”. The technique is already widely used by Google and others to categorise child abuse content online. Once a piece of media has been assigned this unique marker, that marker can be used to trawl through a network and seek out matching copies of the content wherever it has been replicated. A global database for child abuse content has already been set up, and Project Vic coordinates how that material is shared to help law enforcement across the globe.
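The companies have not published the technical details of the hashing scheme; databases of this kind typically rely on robust perceptual hashes (Microsoft’s PhotoDNA is the best-known example) that survive re-encoding and cropping. Still, the basic share-and-match flow can be sketched with a plain cryptographic hash. The function names and the in-memory shared set below are purely illustrative, not part of the actual system.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Return a hex digest acting as the media file's 'digital fingerprint'.

    Real systems would use a perceptual hash so slightly altered copies
    still match; SHA-256 is used here only to illustrate the lookup flow.
    """
    return hashlib.sha256(media_bytes).hexdigest()

# Hypothetical shared database: fingerprints contributed by participating companies.
shared_hashes: set[str] = set()

def contribute(media_bytes: bytes) -> None:
    """One company adds the fingerprint of content it has removed to the shared set."""
    shared_hashes.add(fingerprint(media_bytes))

def matches_known_content(media_bytes: bytes) -> bool:
    """Another service checks newly uploaded media against the shared fingerprints.

    A match only flags the item for review under that company's own policies;
    per the joint statement, nothing is removed automatically.
    """
    return fingerprint(media_bytes) in shared_hashes

if __name__ == "__main__":
    removed_video = b"...bytes of a video removed by one service..."
    contribute(removed_video)
    print(matches_known_content(removed_video))              # True: same file re-uploaded elsewhere
    print(matches_known_content(b"unrelated holiday clip"))  # False: no match in the shared set
```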


The tech giants’ statement makes it clear, however, that none of the companies intends either to blindly delete content once it is flagged or to immediately hand it over to governments. Each company will continue to use its own systems for identifying and categorising content, referring to its own terms of service. It will then choose what content to share in the database: “content most likely to violate all of our respective companies’ content policies”.

“No personally identifiable information will be shared, and matching content will not be automatically removed. Each company will continue to apply its own policies and definitions of terrorist content when deciding whether to remove content when a match to a shared hash is found. And each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances.”

Essentially, the only thing that will change is that the most serious terrorist content, which is unquestionably destined for removal (and probably for referral to the authorities), will be more easily identified across all four services once any one of them has spotted it. It’s likely, though, that this type of material would be the most easily identifiable anyway, as it will be the most obvious choice for users to flag if they come across it.

Does the database go far enough?

Focusing only on “content most likely to violate all of our respective companies’ content policies” is the smart, least problematic way to go. But the companies have differed in the past on what they consider acceptable material, and have refused to remove content that governments argued falls under the definition of illegal terrorist content.

Facebook famously allowed videos of beheadings to be shared on its service, but not breastfeeding photos. Earlier this year, it was revealed in court proceedings that both Twitter and YouTube refused requests from the British authorities to remove Anjem Choudary’s online posts after he was arrested for supporting ISIS.

All four companies tread a fine line when filtering content, and seem committed to following the letter of the law while allowing content that, although unpleasant, may not be illegal. Back in 2008, when Google refused a US senator’s plea to remove terrorist videos, Eric Schmidt explained it thus: “While we respect and understand his views, YouTube encourages free speech and defends everyone's right to express unpopular points of view. We believe YouTube is a richer and more relevant platform for users precisely because it hosts a diverse range of views, and rather than stifle debate, we allow our users to view all acceptable content and make up their own minds.”

The latest joint statement from Facebook, YouTube, Twitter and Microsoft was careful to emphasise that the companies will still prioritise user privacy and transparency around their own work. There are also plans to expand the database to “involve additional companies in the future”.

It’s well known that social networks have been used to share terrorist propaganda, hate speech and even terrorist recruiting materials. Most recently, a Commons home affairs select committee claimed social networks were "consciously failing" to prevent their services being used as recruitment tools. The latest statement seems to be a reaction to this mounting pressure, particularly from within Europe.

The report concluded: "These companies are hiding behind their supranational legal status to pass the parcel of responsibility and refusing to act responsibly in case they damage their brands. If they continue to fail to tackle this issue and allow their platforms to become the 'Wild West' of the internet, then it will erode their reputation as responsible operators."

This article was originally published by WIRED UK