After facing multiple accusations of allowing terrorists a "safe space" online, the world's biggest tech companies are set to work together to fight extremism online.
Facebook, Microsoft, Twitter and YouTube have joined forces to create a new group intended to make their platforms "hostile" to terrorists and extremists. Dubbed the Global Internet Forum to Counter Terrorism, the organisation has three main aims: to develop 'tech solutions' to counter terrorism online, to research relevant issues, and to share information as widely as possible.
Outlined in a blog post, the Forum's work is intended to "formalise and structure" how the companies cooperate. The group will also work with smaller tech companies.
The announcement follows four terror attacks in the UK and calls for companies to do more when they find extremists sharing material on their networks.
Prime Minister Theresa May has called for the digital world to be regulated and for fines to be issued to companies that fail to deal with extremist material online. Germany has already started working on legislation to fine companies up to $55 million (£43m) if they do not quickly remove hate speech once it has been reported to them.
In its blog post, the Forum says it will tackle the technical problem of terror-related content being shared online. This will be done through a shared hash database, first announced in December 2016. Under the scheme, images and videos of known terrorist content are given a unique identifier (a hash) that can be automatically searched for and recognised when material is uploaded. By sharing hashes, the companies can easily spot material that has already been identified as terrorist in nature. A similar system is used by the Internet Watch Foundation for child abuse images.
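To make the mechanics concrete, here is a minimal sketch of how such hash matching could work, assuming a simple exact-match scheme. The function names are hypothetical, and SHA-256 is used purely for illustration; the consortium's database is built on hashes designed to survive re-encoding and cropping, closer to perceptual hashing.

```python
import hashlib

# Hashes of known terrorist content contributed by member companies.
shared_hash_db: set[str] = set()

def content_hash(data: bytes) -> str:
    """Give a piece of content a unique identifier (a hash)."""
    return hashlib.sha256(data).hexdigest()

def register_known_content(data: bytes) -> None:
    """Add identified material's hash to the shared database."""
    shared_hash_db.add(content_hash(data))

def flag_on_upload(upload: bytes) -> bool:
    """At upload time, check new material against the shared database."""
    return content_hash(upload) in shared_hash_db
```

The value of sharing the database is that material identified once, by any member, can then be recognised automatically by every platform at upload time.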
The tech giants will also share best practices for using machine learning to identify images, and will create a standard for reporting how many pieces of terrorist material they remove from their services. "We will commission research to inform our counter-speech efforts and guide future technical and policy decisions around the removal of terrorist content," the group says. It will work closely with governments, non-governmental organisations, and groups looking at online extremism.
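The Forum has not published what that reporting standard will look like. As a purely hypothetical illustration, a shared format might capture fields like these; every field and figure below is an assumption.

```python
from dataclasses import dataclass

@dataclass
class RemovalReport:
    """Hypothetical shared format for reporting removals of terrorist
    material. The Forum has not published a schema, so these fields are
    assumptions about what such a standard might capture."""
    platform: str
    period: str          # reporting window, e.g. "2017-Q2"
    items_removed: int   # pieces of terrorist material taken down
    machine_flagged: int # removals first surfaced by automated tools
    user_reported: int   # removals first surfaced by user reports

# Example usage with made-up figures:
report = RemovalReport(platform="ExampleNet", period="2017-Q2",
                       items_removed=1200, machine_flagged=900,
                       user_reported=300)
```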
Such collaboration between the companies is rare but not unprecedented. Tech's biggest firms have previously teamed up to research the dangers of artificial intelligence, chip technology and more.
The Global Internet Forum does, however, add to the work each of the technology firms is doing on its own. On June 23, Facebook launched its Online Civil Courage Initiative in the UK. The partnership with the Institute for Strategic Dialogue was created to offer "financial and marketing support" to groups in the country working against online extremism. It will include the ability to launch 'counter-speech' campaigns, and participating groups will get free advertising space on Facebook.
The Initiative follows Facebook revealing how its artificial intelligence and machine learning are being used to spot terrorist material and accounts. Mark Zuckerberg's social network has said it uses an image-matching system and analyses text posts to find extremist content.
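Facebook has not detailed its text-analysis models, but the general shape of the approach can be sketched: score a post for extremist signals and queue high-scoring posts for human review. The term list, scoring rule and threshold below are all illustrative assumptions; a production system would use trained classifiers over far richer features.

```python
import re

# Hypothetical signal terms standing in for a trained text classifier.
SIGNAL_TERMS = {"attack", "martyr", "recruit"}

def extremism_score(post: str) -> float:
    """Fraction of a post's words that match known signal terms."""
    tokens = re.findall(r"[a-z']+", post.lower())
    if not tokens:
        return 0.0
    return sum(token in SIGNAL_TERMS for token in tokens) / len(tokens)

def queue_for_review(post: str, threshold: float = 0.05) -> bool:
    """Flag a post for human review when its score crosses a threshold."""
    return extremism_score(post) >= threshold
```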
YouTube, through its parent company Google, has also been talking publicly about how it combines machine learning with human staff to spot and remove extremist videos. "We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months," Kent Walker, Google's general counsel, explained.
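Google has not published its models or thresholds, but the division of labour it describes (models find and assess, people decide) can be sketched as a simple triage step. The score and cut-off below are assumptions for illustration.

```python
from typing import NamedTuple

class Video(NamedTuple):
    video_id: str
    model_score: float  # confidence from a video analysis model, 0.0 to 1.0

# Assumed cut-off: videos the model flags go to human reviewers, who make
# the final removal decision; the rest are left alone.
REVIEW_THRESHOLD = 0.6

def triage(video: Video) -> str:
    """Route a video to human review when the model flags it."""
    return "human_review" if video.model_score >= REVIEW_THRESHOLD else "keep"

# Example: a video the model scores highly is routed to reviewers.
print(triage(Video("vid123", 0.87)))  # -> human_review
```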
This article was originally published by WIRED UK