Tech Giants Quietly Automating Process To Remove Extremist Content From Sites


Technology originally designed to scrub copyrighted videos from the Internet could be put to a new use, according to a report from Reuters: fighting extremism.


Two sources familiar with the process at some of the largest technology companies, including YouTube and Facebook, said automation was quietly being used to remove extremist content from their sites, including Islamic State videos. New videos could be checked against a database of previously banned videos to catch reposted content.

It wasn’t clear how much of this process would be automated, but the sources claim that the same technology that finds copyrighted content could be used to find videos of decapitations and other violent scenes. YouTube, for example, uses Content ID to check new uploads against a list of claims submitted by copyright holders.


This report comes on the heels of an announcement last week by the Counter Extremism Project (CEP) that could take this identification a step further. Hashing software, which recognizes unique digital fingerprints and has been used to counter child pornography, has been extended to cover video and audio files uploaded by extremist groups. The organization promises that with this technology, the identification process could become more automated and rely less on user reports. It wrote:

“Tech companies try to take down heinous content that violates their terms of service, but the process is manual and reactive, which hampers speed and effectiveness. CEP’s new technology will streamline and accelerate the process.”
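The matching CEP describes can be pictured as a database lookup: fingerprint each new upload and check it against fingerprints of known banned content. The sketch below is a deliberately simplified illustration using an exact cryptographic hash; real systems rely on robust perceptual hashing that survives re-encoding and cropping, and the `BANNED_FINGERPRINTS` database here is hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of previously banned videos.
# (This entry is the SHA-256 digest of the bytes b"test".)
BANNED_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the file's fingerprint.

    A plain cryptographic hash only catches byte-identical reposts;
    production systems use perceptual hashes to match altered copies.
    """
    return hashlib.sha256(data).hexdigest()

def is_banned(upload: bytes) -> bool:
    """Check a new upload against the banned-content database."""
    return fingerprint(upload) in BANNED_FINGERPRINTS

print(is_banned(b"test"))        # True: fingerprint is in the database
print(is_banned(b"new video"))   # False: unknown content passes through
```

The appeal of this design is speed: a set lookup is constant-time, so reposts of known material can be blocked at upload rather than waiting for user reports.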

It’s a start in what has been a long battle by software companies against extremist content. The Islamic State has used social media such as Twitter to spread its message and recruit new members, and automated matching could work in conjunction with user-flagged posts.

Governments have taken notice. A meeting announced in January brought together top officials and Silicon Valley executives to discuss the threat. Reuters added that a private call was held in April to discuss options.


The two sources couldn’t confirm details on how much automation is going into the current process. Neither company would confirm the information to Reuters.

This is just one of many ways that tech giants are experimenting with combating terrorist propaganda online. A recent study proposed an algorithm that could predict attacks, and traffic spikes on a certain YouTube video (since removed) could also be used to predict when attacks would occur. Less promising strategies that have been discussed include using “likes” to counter the Islamic State.



Weekend editor and night person at Gizmodo. More space core than human.



Professor Dog

Because we all know the automated copyright removals are 100% accurate, don’t we....

I don’t buy this is going to be used for good. I do buy this will be used for the suppression of free speech though.