The UK has been subject to three high-profile terror attacks in 2017. Across the Westminster, Manchester, and London Bridge attacks, 35 people lost their lives. In the wake of each incident, UK government ministers have accused social networks of not doing enough to combat extremist material.
Read more: Encryption explained: how apps and sites keep your private data safe (and why that's important)
Prime minister Theresa May has said tech companies that fail to deal with terrorism-related material may be fined under a new law she is planning. In response, Facebook, Google, and Twitter have hit back at suggestions they have not been doing enough.
"In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online," a blog post on Facebook by Monika Bickert, the director of global policy management and Brian Fishman, counterterrorism policy manager, starts.
Within the 1,700-word post, the pair reveal how the firm is using artificial intelligence in its attempts to curb extremist material posted to its groups, pages, and messages. AI is being used in five different areas, Bickert and Fishman write.
These include image matching. "When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video," the Facebook employees write. When a known terrorism image is matched, it is blocked from being uploaded. A similar matching system is used for child abuse images.
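Facebook hasn't published how this matching works, but the principle is to compare a fingerprint of each upload against a database of fingerprints of known terrorist images. The sketch below is only an illustration of that idea: the blocklist entry is invented, and it uses an exact SHA-256 digest, whereas production systems rely on robust "perceptual" hashes so that resized or re-encoded copies still match.

```python
import hashlib

# Hypothetical blocklist of digests of known terrorism images.
# In practice this would be a large, shared database of perceptual
# hashes; exact digest matching is shown here only for simplicity.
KNOWN_TERROR_IMAGE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest used as the image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def allow_upload(image_bytes: bytes) -> bool:
    """Block the upload if the image matches a known terrorism image."""
    return fingerprint(image_bytes) not in KNOWN_TERROR_IMAGE_HASHES

if __name__ == "__main__":
    # An unknown image passes; a blocklisted one would be rejected.
    print(allow_upload(b"holiday photo bytes"))  # True: no match, upload allowed
```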
It also claims it is using AI to analyse text posts and determine whether they express support for ISIS, Al Qaeda, and other related groups. Machine learning is being used to detect posts similar to those Facebook has already determined to be in favour of the groups' activities.
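The blog post doesn't detail the models involved, but detection of this kind is typically framed as supervised text classification: train on posts already judged supportive, then score new ones for human review. The minimal sketch below follows that pattern; the library choice (scikit-learn), the toy training examples, and the scoring step are assumptions for illustration, not Facebook's actual system.

```python
# Toy text classifier in the spirit of the approach Facebook describes:
# learn from posts already labelled supportive (1) or benign (0),
# then score new posts. The training data is a tiny stand-in; a real
# system would use far larger labelled corpora and richer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "join the fight for the caliphate",    # hypothetical supportive example
    "martyrdom operations are glorious",   # hypothetical supportive example
    "great match last night",              # benign
    "looking forward to the weekend",      # benign
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

# Posts scoring highly would be queued for human review rather than
# removed automatically.
new_posts = ["support the caliphate", "weekend football plans"]
scores = model.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    print(f"{score:.2f}  {post}")
```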
"We also use algorithms to “fan out” to try to identify related material that may also support terrorism," Bickert and Fishman say. "We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account". Facebook is also using AI techniques to identify accounts being created by people who have already had their's disabled.
Despite Mark Zuckerberg's company revealing how it is using AI to work against terrorism-related content, the firm hasn't said how many accounts it has suspended. This is something Twitter publicises: a Twitter spokesperson says that between July and December 2016 it banned 376,890 accounts for the promotion of terrorism.
Read more: Dear Amber Rudd, don't use the London terror attacks to break encryption
Zuckerberg's firm has also not detailed what it considers to be extreme content. It has previously had to make U-turns on decisions to ban pictures and videos it deemed controversial.
Facebook says these automated approaches are combined with 'human expertise', with staff reviewing reported content and working with law enforcement agencies around the world when an incident happens. The company also states it works with other tech companies and governments.
The company also says it is working on "systems" that will allow it to monitor content across its other apps (the company owns WhatsApp and Instagram). WhatsApp, in particular, has come under attack from UK politicians because its end-to-end encryption means it isn't possible for Facebook, or anyone other than the intended recipients, to read the content of sent messages.
"There should be no place for terrorists to hide," Home Secretary Amber Rudd said following the Westminster Bridge attack. "We need to make sure that organisations like WhatsApp, and there are plenty of others like that, don’t provide a secret place for terrorists to communicate with each other".
This article was originally published by WIRED UK