Facebook's Reported 'Rulebooks' Show It's Got a Serious Content Moderation Problem

Photo: Alexander Koerner (Getty)

There are mere days left in 2018, but Facebook’s eternal year of reckoning continues.


A new report from the New York Times pulls back the curtain on part of Facebook’s internal struggle to get a handle on the complex problems housed on its platform, not least of which are disinformation and hate. The report is the product of a monthslong investigation by the Times’ Max Fisher, who obtained a massive trove of documents intended to guide the thousands of moderators whose job it is to manage potentially problematic content on the platform. According to the Times, Facebook’s so-called rulebooks contain “numerous gaps, biases and outright errors.”

The Times was reportedly provided the documents—some of which were previously reported by Motherboard—by “an employee who said he feared that the company was exercising too much power, with too little oversight—and making too many mistakes.” The report paints a portrait of haphazardly assembled rulebooks comprising loose spreadsheets and PowerPoints of rules and stipulations by which moderators are tasked with policing content. The documents, the Times says, can be confusing when taken as a whole:

One document sets out several rules just to determine when a word like “martyr” or “jihad” indicates pro-terrorism speech. Another describes when discussion of a barred group should be forbidden. Words like “brother” or “comrade” probably cross the line. So do any of a dozen emojis.

The guidelines for identifying hate speech, a problem that has bedeviled Facebook, run to 200 jargon-filled, head-spinning pages. Moderators must sort a post into one of three “tiers” of severity. They must bear in mind lists like the six “designated dehumanizing comparisons,” among them comparing Jews to rats.


The Times reported that while the rulebooks’ architects consult with outside groups, they “are largely free to set policy however they wish.” The teams responsible for assembling the rulebooks are “mostly young engineers and lawyers” who attempt “to distill highly complex issues into simple yes-or-no rules,” the Times said. That undertaking reportedly proves difficult for moderators, some of whom the Times says rely on Google Translate and have “mere seconds to recall countless rules” while combing through up to a thousand posts daily.

A spokesperson for Facebook referred to the rules in a statement as “training materials” that “aren’t meant to serve as a proxy for Facebook policy; they are meant to train our reviewers and give them specifics, including uncommon or unusual cases that they may encounter when reviewing content on Facebook.”

“[W]e have about 15,000 content reviewers located around the world,” a spokesperson for Facebook told Gizmodo by email. “We prioritize accuracy when it comes to content review, which is why we hire native language speakers and have built out our teams so that we can review content in more than 50 languages. For the same reason, we don’t have quotas for the amount of content reviewers have to get through in a day, or the amount of time it may take to make a decision about a piece of content.”

But more troubling than Facebook’s arbitrary collection of rules intended to police its billion-plus users, whose posts run the gamut from tasteless memes to calculated and potentially dangerous political propaganda, is the significant political power the company wields. As the report illustrates, deciding who is allowed a platform on Facebook’s site can be incredibly tricky.


One example cited by the Times was a deeply racist ad from the Trump campaign essentially designed to incite fear about a migrant caravan of Central American asylum seekers; Facebook banned that ad just last month. Facebook also came under fire after its platform was used as a political tool by Philippine President Rodrigo Duterte. In Myanmar, Facebook was used for years to fuel violence against Muslims, which the Times said occurred in part because of a “paperwork error” in its rulebooks that instructed moderators to allow posts that should in fact have been removed.

“We’re constantly iterating on our policies to make sure that they work for people around the world who use Facebook, and we publish these updates publicly every month,” a Facebook spokesperson said by email. “There’s a number of different reasons we might revisit an existing policy or draft a new one—issues in enforcement accuracy, new trends raised by reviewers, internal discussion, expert critique, external engagements, and evolving local circumstances, to name a few. In arriving at a policy recommendation, we get input from external experts because we know that our policies are strongest when they’re informed by broad outreach to affected people and communities.”


Much of the Times report fills in the blanks about procedures at Facebook that have long failed to manage the problems on its platform. But it also illustrates the extent to which Facebook is struggling to handle the issues that continue to arise as it attempts to comply with the demands of the governments of the countries in which it operates.

Try as it may to manage its own product, Facebook has a Facebook-sized problem that likely isn’t going away anytime soon.


Updated 12/27/18 9:15 p.m. ET: Updated to reflect that some documents reported by the New York Times on Thursday were previously reported by Motherboard.

Updated 12/28/18 2:15 p.m. ET: Updated to reflect statements from Facebook.

[New York Times]



DISCUSSION

Your New Friend

I’m surprised, but I read the whole NYT article and found it poorly executed. They’re conflating three problems as if they were one:

-Poor implementation of poorly-written guidelines

-Facebook’s adherence to the laws of the countries it exists in, even if they are undemocratic

-Bad algorithms (thrown in all the way at the end)

The second one is the most interesting, and the most irresponsibly woven into the thread of the article. Based on the headline and stinger, it’s not what I thought I’d be reading about, and it’s not to do with any “secret rulebook.” The article talks about Facebook allowing extremist groups and behaviors in some countries but not others as if it’s Facebook’s decision, even as it acknowledges Facebook bans/allows those groups and behaviors in those countries because they’re following the laws and climates in those countries. It’s Facebook’s decision to *exist* in any given country and we can debate the morality of taking part in an organization that prioritizes profit over democratic practices (capitalism?), but it’s pretty naive to pretend Facebook would be Facebook if it applied liberal American ethics globally. It’s like complaining about an orange for not being beef - you’re not wrong? They can’t even manage it in America. They’re a very, very successful company and big beyond anything, but are we being dense by expecting them to act as anything else but a profit-machine? If they changed the way they behave in undemocratic environments, wouldn’t they cease to exist there and a platform that allows those in power to keep being in power would sprout up in its place? I recognize how cynical I’m being, but the authors of both articles seem to expect them to act as arbiters of justice and truth, which would be great, but where did we get the idea that they or any similar product/platform would do that?

Edited to add: after having a conversation with someone, they summed up what I’m trying to say really concisely: “can you say it’s Facebook’s fault for shitty people being shitty?”