There are mere days left in 2018, but Facebook’s eternal year of reckoning continues.
A new report from the New York Times has pulled back the curtain on part of Facebook’s internal struggle to get a handle on the complex problems housed on its platform, not the least of which include disinformation and hate. The report comes as part of a monthslong investigation by the Times’ Max Fisher, who obtained a massive stock of documents intended to guide the platform’s thousands of moderators, whose job it is to manage potentially problematic content. According to the Times, Facebook’s so-called rulebooks contain “numerous gaps, biases and outright errors.”
The Times was reportedly provided the documents—some of which were previously reported by Motherboard—by “an employee who said he feared that the company was exercising too much power, with too little oversight—and making too many mistakes.” The report paints a portrait of haphazardly assembled rulebooks comprising loose spreadsheets and PowerPoints of rules and stipulations by which moderators are tasked with policing content. The documents, the Times says, can be confusing when taken as a whole:
One document sets out several rules just to determine when a word like “martyr” or “jihad” indicates pro-terrorism speech. Another describes when discussion of a barred group should be forbidden. Words like “brother” or “comrade” probably cross the line. So do any of a dozen emojis.
The guidelines for identifying hate speech, a problem that has bedeviled Facebook, run to 200 jargon-filled, head-spinning pages. Moderators must sort a post into one of three “tiers” of severity. They must bear in mind lists like the six “designated dehumanizing comparisons,” among them comparing Jews to rats.
The Times reported that while the rulebooks’ architects consult with outside groups, they “are largely free to set policy however they wish.” The teams responsible for assembling the rulebooks are “mostly young engineers and lawyers” who attempt “to distill highly complex issues into simple yes-or-no rules,” the Times said. That undertaking reportedly proves difficult for moderators, some of whom the Times says rely on Google Translate and have “mere seconds to recall countless rules” while combing through up to a thousand posts daily.
A spokesperson for Facebook referred to the rules in a statement as “training materials” that “aren’t meant to serve as a proxy for Facebook policy; they are meant to train our reviewers and give them specifics, including uncommon or unusual cases that they may encounter when reviewing content on Facebook.”
“[W]e have about 15,000 content reviewers located around the world,” a spokesperson for Facebook told Gizmodo by email. “We prioritize accuracy when it comes to content review, which is why we hire native language speakers and have built out our teams so that we can review content in more than 50 languages. For the same reason, we don’t have quotas for the amount of content reviewers have to get through in a day, or the amount of time it may take to make a decision about a piece of content.”
But more troubling than Facebook’s arbitrary collection of rules intended to police its billion-plus users—posts by whom can run the gamut from tasteless memes to calculating and potentially dangerous political propaganda—is the significant political power it wields. As the report illustrates, deciding who is allowed a platform on Facebook’s site can be incredibly tricky.
One example cited by the Times was a deeply racist ad from the Trump campaign essentially designed to incite fear about a migrant caravan of Central American asylum seekers. Facebook banned that ad just last month. Facebook also came under fire after its platform was used as a political tool by President of the Philippines Rodrigo Duterte. In Myanmar, Facebook was used to fuel violence against Muslims for years, which the Times said occurred in part because of a “paperwork error” in its rulebooks that instructed moderators to allow posts that should have in fact been removed.
“We’re constantly iterating on our policies to make sure that they work for people around the world who use Facebook, and we publish these updates publicly every month,” a Facebook spokesperson said by email. “There’s a number of different reasons we might revisit an existing policy or draft a new one—issues in enforcement accuracy, new trends raised by reviewers, internal discussion, expert critique, external engagements, and evolving local circumstances, to name a few. In arriving at a policy recommendation, we get input from external experts because we know that our policies are strongest when they’re informed by broad outreach to affected people and communities.”
Much of the Times report fills in the blanks about procedures at Facebook that have long failed to manage the problems on its platform. But it also illustrates the extent to which Facebook is struggling to handle the issues that continue to arise as it attempts to comply with the demands of respective governments.
Try as it may to manage its own product, Facebook has a Facebook-sized problem that likely isn’t going away anytime soon.
Updated 12/28/18 2:15 p.m. ET: Updated to reflect statements from Facebook.