
Facebook Is Failing to Remove Brutal Death Threats Targeting Election Workers

An investigation found Facebook approved 15 out of 20 ads containing death threats against election workers. TikTok and YouTube suspended the accounts behind them.

Photo: Jessica McGowan (Getty Images)

Meta, despite repeatedly committing to ramp up security policies ahead of the 2022 midterms, appears to fare far worse than competing social media services at detecting and removing death threats targeting election workers.

Those findings are part of a new investigation conducted by Global Witness and the NYU Cybersecurity for Democracy, which claims Facebook approved 15 out of 20 advertisements on its platform containing brutal death threats levied against election workers. When researchers tried to run those very same ads on TikTok and YouTube, however, the platforms quickly suspended their accounts. The findings suggest Facebook takes a less strict approach to moderating violent political content than its peer companies, despite executives recently providing assurances the platform would beef up security ahead of the 2022 midterm elections.


To run their experiment, the researchers found 10 real-world examples of social media posts containing death threats targeting election workers. Gizmodo reviewed copies of those ads, many of which alluded to election workers being hanged or mass executed. One of the ads directed at the workers said, “I hope your children get molested.”

“All of the death threats were chillingly clear in their language; none were coded or difficult to interpret,” the researchers wrote.


Once they collected the ads, the researchers removed profanity and grammatical errors from them. This was done to ensure the posts in question were being flagged for the death threats and not for explicit language. The ads were submitted, in both English and Spanish, a day before the midterm elections.

While it appears YouTube and TikTok moved quickly to suspend the researchers’ accounts, the same can’t be said for Facebook. Facebook reportedly approved nine of the ten English-language death threat posts and six out of ten Spanish posts. Even though those posts clearly violated Meta’s terms of service, the researchers’ accounts were not shut down.

A Meta spokesperson pushed back on the investigation’s findings in an email to Gizmodo, saying the posts the researchers used were “not representative of what people see on our platforms.” The spokesperson went on to applaud Meta for its efforts to address content that incites violence against election workers.

“Content that incites violence against election workers or anyone else has no place on our apps and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms,” the spokesperson said. “We remain committed to continuing to improve our systems.”


The specific mechanisms underpinning how content makes its way onto viewers’ screens vary from platform to platform. Though Facebook did approve the death threat ads, it’s possible the content could have still been caught by another detection method at some point, either before it was published or after it went live. Still, the researchers’ findings point to a clear difference between Meta’s detection process for violent content and those of YouTube or TikTok at this early stage of the content moderation process.

Election workers were exposed to a dizzying array of violent threats this midterm season, with many of those calls reportedly flowing downstream of former President Donald Trump’s refusal to concede the 2020 election. The FBI, the Department of Homeland Security, and the Office of U.S. Attorneys all released statements in recent months acknowledging increasing threats levied against election workers. In June, the DHS issued a public warning that “calls for violence by domestic violent extremists” directed at election workers “will likely increase.”


Meta, for its part, claims it has increased its responsiveness to potentially harmful midterm content. Over the summer, Nick Clegg, the company’s President of Global Affairs, published a blog post saying the company had hundreds of staff spread across 40 teams focused specifically on the midterms. At the time, Meta said it would prohibit ads on its platforms encouraging people not to vote or calling into question the legitimacy of the elections.

The Global Witness and NYU researchers want to see Meta take additional steps. They called on the company to increase its election-related content moderation capabilities, include full details of all ads, allow more independent third-party auditing, and publish information outlining the steps it has taken to ensure election safety.


“The fact that YouTube and TikTok managed to detect the death threats and suspend our account whereas Facebook permitted the majority of the ads to be published shows that what we are asking for is technically possible,” the researchers wrote.