
Meta’s human moderators work to keep Facebook and Instagram palatable

Photo: Dan Kitwood (Getty Images)

For years Meta, which owns Facebook and Instagram, has faced waves of criticism from all sides of the political aisle over its content moderation decisions. Meta often responds publicly by telling critics it relies on seemingly objective and politically neutral AI systems to detect and remove harmful content. That description, while likely to become more accurate in the coming years, downplays the company’s continued reliance on an army of contracted human content moderators spread around the world. Those workers are regularly exposed to the darkest corners of humanity, viewing videos and images depicting brutal killings, self-harm, and mutilation, all for a fraction of what full-time Facebook engineers earn.

Previous reports documented Facebook moderators in Arizona who felt compelled to turn to sex and drugs to cope with the stress of the content they were viewing. In another case, some content moderators reportedly began to believe some of the outlandish conspiracy theories they were hired to suss out. Meta paid traumatized workers $52 million as part of a settlement in 2020 and promised workplace improvements following the reports, but workers say little has changed on the ground.