Facebook, Twitter, Microsoft and Google-owned YouTube have agreed to a new European Commission code of conduct stating they must stamp out hate speech.
Under the guidelines, the sites must remove hate speech within 24 hours of it being posted, or face the wrath of Brussels.
Far-right organisations and terrorist groups such as ISIS have increasingly used social media to spread propaganda and recruit followers. While some platforms have been slow to react, most are now taking steps to stamp out extremist hate speech.
Twitter already claims to have shut down tens of thousands of accounts linked to ISIS since 2015.
By signing up to the code of conduct, the tech firms have committed to put procedures in place where content flagged as illegal hate speech is reviewed and, if necessary, removed within a one-day period.
"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred," said Věra Jourová, EU commissioner for justice, consumers and gender equality.
"This agreement is an important step forward to ensure the internet remains a place of free and democratic expression, where European values and laws are respected. I welcome the commitment of worldwide IT companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary."
Critics have argued that policing the content of social media sites amounts to little more than censorship, especially as the platforms are known for their liberal attitudes towards freedom of speech.
"We value civility and free expression," said John Frank, vice president of EU government affairs at Microsoft.
It is unclear how the new guidelines will affect the tech companies' platforms in the UK if the 'Leave' vote is successful in the EU referendum on June 23.
This article was originally published by WIRED UK