'That Was Torture': OpenAI Reportedly Relied on Low-Paid Kenyan Laborers to Sift Through Horrific Content to Make ChatGPT Palatable

The laborers reportedly looked through graphic accounts of child sexual abuse, murder, torture, suicide, and incest.

Image: Ascannio (Shutterstock)

The seemingly simple, shiny, clean world of tech is almost always propped up by something darker hidden just below the surface. From mentally injured content moderators sifting through torrents of vile Facebook posts to overworked child laborers mining the cobalt necessary for luxury electric vehicles, frictionless efficiency comes at a human cost. A new report shows that's equally true for generative AI standout OpenAI.

A new investigation from Time claims OpenAI, the upstart darling behind the powerful new generative AI chatbot ChatGPT, relied on outsourced Kenyan laborers, many paid under $2 per hour, to sift through some of the internet’s darkest corners in order to create an additional AI filter system that would be embedded in ChatGPT to scan it for signs of humanity’s worst horrors. That detector would essentially make ChatGPT, which has so far gained over 1 million users, palatable for mass audiences. Additionally, the detector would reportedly help remove toxic entries from the large datasets used to train ChatGPT.

While end users received a polished, sanitary product, the Kenyan workers essentially acted as a type of AI custodian, scanning through snippets of text reportedly depicting vivid accounts of child sexual abuse, murder, torture, suicide, and incest, all in graphic detail.

OpenAI reportedly worked with a U.S. company called Sama, which is better known for employing workers in Kenya, Uganda, and India to perform data labeling tasks on behalf of Silicon Valley giants like Google and Meta. Sama was actually Meta’s largest content moderator in Africa until earlier this month, when Sama announced the two companies had ceased working together due to the “current economic climate.” Sama and Meta are currently the subject of a lawsuit by a former content moderator who alleges the companies violated the Kenyan constitution.

In OpenAI’s case, the Kenyan workers reportedly earned between $1.32 and $2 per hour for a company that, recent reports suggest, could receive a cash injection from Microsoft of around $10 billion. If that happens, Semafor notes, OpenAI will be valued at $29 billion.

OpenAI did not immediately respond to Gizmodo’s request for comment.

Like some content moderators for other Silicon Valley giants, the Sama workers said their work often stayed with them after they logged off. One of those workers told Time he suffered from recurring visions after reading the description of a man having sex with a dog. “That was torture,” the worker said.

Overall, the teams of workers were reportedly tasked with reading and labeling around 150-250 passages of text in a nine-hour shift. Though the workers were granted the ability to see wellness counselors, they nonetheless told Time they felt mentally scarred by the work. Sama disputed those figures, telling Time the workers were only expected to label 70 passages per shift.

Sama did not immediately respond to Gizmodo’s request for comment.

“Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content,” OpenAI said in a statement sent to Time. “Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”

Sama, which had reportedly signed three contracts with OpenAI worth about $200,000, has recently decided to exit the harmful data labeling space entirely, at least for now. Earlier this month, the company reportedly announced it would cancel the remainder of its work involving sensitive content, both for OpenAI and others, to focus instead on “computer vision data annotation solutions.”

The report reveals, in explicit detail, the toilsome human hardship underpinning supposedly “artificial” technology. Though the new, seemingly frictionless technologies created by the world’s top tech companies often advertise their ability to solve big problems with lean overheads, OpenAI’s reliance on Kenyan workers, like social media companies’ large armies of international content moderators, sheds light on the human labor forces often inseparable from an end product.