After a rough week of criticism over Facebook CEO Mark Zuckerberg’s shoddy explanation for why he won’t ban conspiracy site Infowars—including a very awkward tangent into apparently believing Holocaust deniers are not “intentionally getting it wrong”—the social media giant has announced it will begin removing misinformation that provokes real-world violence.
Per the New York Times, the new policy is “largely a response to episodes in Sri Lanka, Myanmar and India” where rumors spread rapidly on Facebook, leading to targeted attacks on ethnic minorities. The paper writes that Facebook staff admit they bear “responsibility” to curb that kind of content from circulating on the site:
“We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline,” said Tessa Lyons, a Facebook product manager. “We have a broader responsibility to not just reduce that type of content but remove it.”
In another statement to CNBC, a Facebook spokesperson characterized the policy as a crackdown on a specific type of content they have deemed worthy of removal, while defending their laissez-faire approach to other dubious posts:
“Reducing the distribution of misinformation—rather than removing it outright—strikes the right balance between free expression and a safe and authentic community,” a Facebook spokesperson said in a statement to CNBC. “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down. We will begin implementing the policy during the coming months.”
According to CNBC, Facebook says the new policy will involve partnering with local civil-society groups to identify text and image content created with the purpose of “contributing to or exacerbating violence or physical harm,” flagging it for removal. The CNBC report also notes that the process will involve Facebook’s internal image recognition technologies, presumably a system similar to the one it uses to automatically purge revenge porn from the site.
That’s an improvement on the current situation. For example, in Sri Lanka and Myanmar, the company has faced harsh criticism from local NGOs for alleged inaction as incitement and propaganda circulated widely on the site. In both countries, despite having large userbases, reports indicate Facebook largely failed to hire enough moderation staff. Partnering with local organizations could help the site become less of an absentee landlord.
However, this is likely far from a slam dunk. For one, Facebook’s standards for what qualifies as inappropriate content are habitually lax, and there will be a lot of said content to sift through. The company often relies on automated methods that are easily worked around, or others that simply end up backfiring (as was the case with the “disputed” flags it put on dubious articles). In this case, it’s easy to imagine an unending game of Whac-A-Mole in which Facebook only commits the resources to stare at one hole.
As the NYT wrote, there are two other solutions the site is moving forward with: downranking posts flagged as false by its third-party fact checkers and adding “information boxes under demonstrably false news stories, suggesting other sources of information for people to read.” While either method will likely have some impact, Facebook’s fact checkers have repeatedly expressed concerns that the site’s system is too constrained to be effective.
Additionally, the NYT reported Facebook has no plans to roll out the new rules to its subsidiary, the encrypted chat service WhatsApp, which has been linked to several deadly hoaxes—though Instagram is included:
The new rules apply to one of Facebook’s other big social media properties, Instagram, but not to WhatsApp, where false news has also circulated. In India, for example, false rumors spread through WhatsApp about child kidnappers have led to mob violence.
Policing WhatsApp may be somewhat more difficult or outright impossible, as Facebook ostensibly cannot see the content of the messages without watering down its encryption, so it’s between a rock and a hard place there. (As Indian daily Economic Times wrote last year, authorities there still consider WhatsApp group administrators liable for the content of chats.)
Then there’s the matter of Facebook’s stated commitment to free speech, which is nice in theory but vague enough in practice that it seems to function primarily as a shield against criticism. Related to this is the site’s habitual wariness, stemming in part from a 2016 Gizmodo post alleging bias in its now-defunct trending news section, of offending conservative and far-right groups eager to cry censorship. Take, for example, Infowars, which spread conspiracy theories about a DC-area pizza restaurant until a gunman showed up. As the Washington Post noted, it is hard to reconcile how Facebook’s “beefed-up approach” to misinformation can coexist with some of its main purveyors being allowed to remain on the site.
These problems are innumerable. As former Gawker editor in chief Max Read recently wrote, they are also perhaps unsolvable short of a radical restructuring of the company, given Facebook’s scale is now so big it approaches a form of “sovereign power without accountability” (or indeed, a coherent vision of what it is supposed to be).
“There is not always a really clear line,” Facebook product manager Tessa Lyons told the NYT. “All of this is challenging—that is why we are iterating. That is why we are taking serious feedback.”
Correction: A prior version of this article cited the New York Times as reporting that Instagram was not included in the new rules. According to a Facebook spokesperson, these new policies will extend to Instagram, and implementation for WhatsApp is being examined. The Times has also since issued a correction, and we’ve swapped out their original passage for the updated one. We regret the error.