After becoming the new hotness for fans of surreal insanity, the never-ending AI-generated stream inspired by the 90s sitcom Seinfeld called Nothing, Forever has been temporarily kicked off the air. Just like some other famed comedians, the series' main character "Larry Feinberg" was slapped down hard after making an ill-fitting transphobic and homophobic joke.
Each "episode" of Nothing, Forever contains a section where Larry performs a comedy set akin to what Jerry Seinfeld does at the start of the real-life show. As first reported by Vice, Twitch issued a 14-day ban on Nothing, Forever Sunday night after video showed Larry diving into Dave Chappelle levels of anti-self-reflection.
“There’s like 50 people here and no one is laughing. Anyone have any suggestions?” Larry starts, sounding like a comedian who’s already failed to read the room. It gets much worse from there.
“I’m thinking about doing a bit about how being transgender is actually a mental illness. Or how all liberals are secretly gay and want to impose their will on everyone. Or something about how transgender people are ruining the fabric of society. But no one is laughing, so I’m going to stop.”
Twitch has rules against homophobic or transphobic language, though normally, Larry’s jokes are blindingly absurd rather than obviously hurtful. He randomly switches from puns to long stories that seem like a failed parody of an actual standup set. If this transphobic joke seems off-base, it’s because some AI systems are subject to intense bias.
On the Nothing, Forever Discord, Skyler Hartle, one of the two main creators of the project, wrote that on Sunday they started seeing issues with OpenAI's GPT-3 Davinci model. It's the latest version of the company's language generation model, but it began glitching, which caused some scenes to cycle through. The developers then switched to Davinci's predecessor, called Curie. Curie's training data only runs through 2019, compared to Davinci's, which extends to 2021.
“[We] will not be using Curie as a fallback in the future,” Hartle wrote.
In an update posted later Monday morning, the developers said they “mistakenly” thought Nothing, Forever was using OpenAI’s content moderation system, but that had never been put in place. The group wrote they are implementing the content moderation API before they go live again. They also said they were investigating secondary content moderation systems as an additional fallback.
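Based on the developers' description, a moderation gate like the one they say they're adding would sit between the text generator and the stream, checking each generated line before it airs. The sketch below is purely illustrative: the function names, the keyword stub, and the placeholder term list are assumptions, not the project's actual code. In practice the check would call OpenAI's moderation endpoint (e.g. `client.moderations.create(input=line)` in the openai Python SDK) rather than a local blocklist; a stub stands in here so the sketch runs offline.

```python
# Minimal sketch of a pre-broadcast moderation gate (hypothetical; not the
# Nothing, Forever codebase). A real deployment would replace is_flagged()
# with a call to OpenAI's moderation endpoint.

# Placeholder blocklist purely for demonstration, not a real moderation list.
FLAGGED_TERMS = {"flagged_term_a", "flagged_term_b"}

def is_flagged(line: str) -> bool:
    """Stand-in for a moderation API call: flag lines containing blocked terms."""
    lowered = line.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

def gate_dialogue(lines: list[str]) -> list[str]:
    """Return only the lines that pass moderation; flagged lines never air."""
    return [line for line in lines if not is_flagged(line)]
```

The point of the design is that generation and broadcast are decoupled: a glitchy fallback model can still produce bad output, but a gate at this seam keeps it off the stream, which is why the developers also mention a secondary moderation system as an additional fallback.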
Modern AI models are based on training data scraped from the internet at large, which comes with a whole host of awful content dredged up from its worst corners. Without further moderation, AI systems like image generators can display an inherent bias that comes directly from attitudes and comments expressed online. Notably, OpenAI contracted hundreds of low-paid workers in Kenya to sift through millions of examples of training data for its now-famous ChatGPT, removing examples of child sexual abuse content, murder, rape, and other obscene content.