
Facebook Supreme Court Says Posts About Abortion Are Not Death Threats

Meta's Oversight Board overturned the removal of three heated posts about abortion laws that the company had flagged as "death threats."

Photo: Stephen Brashear (AP)

In March 2022, Meta removed a pair of posts on Facebook and Instagram criticizing a proposed South Carolina bill that would have horrifyingly applied homicide penalties to abortion seekers. One of the users, a supporter of abortion access, voiced their frustration on Instagram, describing the lawmakers in question as “so pro-life we’ll kill you dead if you get an abortion.” That post, and a similar one on Facebook, were removed by Meta for violating its policies prohibiting death threats.

Around that same time, Meta removed another abortion-related post coming from a starkly different perspective. In that case, a Facebook user uploaded a photo of a pair of outstretched hands captioned “Pro-Abortion Logic,” then went on to mock abortion advocates. “We don’t want you to be poor, starved, or unwanted,” the post read. “So we’ll just kill you instead.” A caption reading “Psychopaths…” followed below.


All three of those posts gained the attention of the Oversight Board, Meta’s Supreme Court-like entity responsible for weighing in on the company’s thorniest content moderation issues. In an 18-page ruling issued today and shared with Gizmodo, the Oversight Board overturned Meta’s original decision to remove all three posts. Going a step further, however, the Board called on Meta to publish the data it used to evaluate the enforcement accuracy of its Violence and Incitement policies. The Board says it wants that data to determine whether these mistakenly removed posts were outliers, as Meta argues, or evidence of a larger, more persistent pattern of over-enforcement against political speech on the company’s social networks. For now, the Board seems unconvinced by Meta’s argument.

“Meta has not provided the Board with sufficient assurance that the errors in these cases are outliers, rather than being representative of a systematic pattern of inaccuracies,” the Board said.


In its ruling, the Oversight Board said debates about abortion, particularly following the reversal of Roe v. Wade last summer, have become more charged and can involve threats that are clearly prohibited under Meta’s policies. Those high stakes make clarity around what counts as a violation all the more important. Repeated mistakes and biases in Meta’s automated enforcement systems, the Board said, can lead to “cyclical patterns of censorship.” Mistakenly removing abortion-related content that doesn’t actually violate Meta’s policies, the Board added, threatens to disrupt political debate by silencing voices.

“These cases raise concerns about the accuracy of how Meta is enforcing its Violence and Incitement policy and whether this is disproportionately impacting abortion debates and political expression,” an Oversight Board spokesperson told Gizmodo. “Meta has to ensure its systems can reliably distinguish between threats and rhetorical use of violent language.”

Board members said Meta told them that distinguishing between literal and non-literal use of violent language is challenging because “it requires consideration of multiple factors like the user’s intent, market-specific language nuances, sarcasm, and humor.” In a statement posted to its Transparency Center following the ruling, Meta said it “welcomed” the Board’s decision and would implement it but did not comment on whether or not it would share the amount of data requested in the Board’s recommendation.

The Board’s ruling and request for more data are clearly meant to have implications beyond these three posts. The Oversight Board says it picked the posts because they highlight the difficult content moderation challenge of gauging violent rhetoric when used as a figure of speech. That heated language is particularly pronounced when it comes to fights over abortion but could easily extend out to other high-stakes, politically divisive speech as well.


Why were the abortion posts removed?

Each of the posts under review was initially taken down by Meta’s automated screening systems, which scan for signs of “hostile speech.” The posts triggered one of the system’s automated hostile speech classifiers and were then sent to human moderators for review. After reviewing the posts, the moderators confirmed the automated system’s decision and said they did, in fact, violate the company’s violence and incitement policies, specifically those prohibiting death threats. Meta eventually reversed those decisions, but only after the Board announced it was considering the users’ appeals.


Users appealed Meta’s decision in all three cases, citing a variety of reasons justifying their aggressive language. In their appeal, the anti-abortion Facebook user argued they weren’t making a threat, but rather highlighting the “flawed logic” of groups supporting abortion access. The abortion rights supporter on Facebook, by contrast, argued it was common for Meta to miss crucial context on abortion-related posts. Some users, they said, opted to use words like “de-life” or “unalive” to try and skirt past the company’s automated detection systems. Meta did not immediately respond to Gizmodo’s request for comment.

A second pair of human moderators upheld the decision in the Facebook group and Instagram posts but disputed the ruling in the Facebook news link case. Meta called in a third human reviewer in that case who, once again, said the post did violate Meta’s rules. Six out of the seven human moderators involved in this process, the Board notes, ultimately got the decision wrong. None of the moderators were located in the United States. Meta told the Board it could not provide any details about why the six moderators decided the way they did because it does not require its reviewers to document reasons for their decisions.


How to moderate ‘Kill’ when it’s a figure of speech

The abortion posts touch on a common theme present in several other high-profile Oversight Board cases where supposedly violent words are used in a rhetorical way that doesn’t necessarily incite violence. The clearest example is the Board’s recent decision to overturn Meta’s removal of a 2022 Facebook post with text that translated to “death to Khamenei,” in reference to Iran’s Supreme Leader. In that case, the Board said the decision to remove the content relied on a literal reading of the word “death” and failed to recognize that the exact same slogan is often deployed as a form of political expression rather than a call for violence. The Oversight Board similarly overturned Meta’s decision to remove a poem comparing the Russian military to Nazis, which included a line reading “kill the fascist.”


“The Board is concerned that the rhetorical use of violent words may be linked to disproportionately high rates of errors by human moderators,” the Board said.

Update, 8:39 a.m. EST: Added statements from Meta and the Oversight Board.