After OpenAI’s ChatGPT burst onto the scene in late 2022, it wasn’t long before mainstream America started hearing the warnings. Executives at the top AI companies told us they were building a radical new technology that posed imminent risks to society. And it wasn’t just about digital security. AI, they said, had the power to destroy the entire world.
From the jump, it was clear that these warnings were as much a sales tactic as an earnest prediction of how AI would behave and what ripple effects it would create. AI execs even testified before Congress about how scary it all was, practically begging for regulation, all while hawking their wares to the government. Now, those same execs are the ones telling everyone to calm down.
Chris Lehane, OpenAI’s global policy chief, sat down for an interview with the San Francisco Standard this week in the wake of at least one attack on CEO Sam Altman’s home.
“Some of the conversation out there is not necessarily responsible,” Lehane told the Standard. “And when you put some of those thoughts and ideas out there, they do have consequences.”
Lehane was referring to the person who allegedly threw a Molotov cocktail at Altman’s house a week ago. Twenty-year-old Daniel Moreno-Gama of Texas was charged with throwing an incendiary device at the residence before heading to OpenAI’s headquarters, where he hit the glass doors with a chair.
Moreno-Gama was carrying an anti-AI “document,” according to police, suggesting his motivations were tied to concerns about artificial intelligence and existential threats. The Wall Street Journal reports that he had called for “Luigi’ing some tech CEOs,” a reference to Luigi Mangione, the man charged with murdering UnitedHealthcare CEO Brian Thompson.
A second incident just two days later, in which two people allegedly fired a gun near Altman’s home, is still under investigation, though the initial suspects have been released from jail.
Lehane divides the world into two groups: those who think AI is the greatest thing ever and will inevitably lead to a world of abundance and leisure, and those he calls doomers, who “have a very, very negative and dark view of humanity.”
The so-called AI doomers simply aren’t being sold properly on the benefits of this new tech, Lehane argues. “Our job at OpenAI and in the AI space — and we need to do a much better job — is to explain to people why … this is going to be really good for them, for their families and for society writ large,” Lehane told the Standard.
But it’s hard to take that argument seriously after everything guys like Altman have been saying. The warnings didn’t start in 2022, either. Back in 2015, Altman said, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”
How do you hear something like that from a powerful person and just accept it? You have two options: You can dismiss Altman as unserious and conclude that humanity should do nothing. Or you can take the tech CEOs at their word that the technology they’re building could end the world, which leaves you with the question of what you can do about that.
No fate but what we make
We know what happens in dystopian fiction. In Terminator 2: Judgment Day, Sarah Connor decides she needs to kill the researcher most responsible for creating Skynet and triggering the rise of the machines. She can’t bring herself to do it, but once she explains what the future holds, the researcher helps the Connors gain access to his research so it can be destroyed.
Altman has warned that AI could be used to “design novel biological pathogens” and signed onto a letter warning of the “risk of extinction” if AI isn’t tamed. But he has also claimed that the U.S. needs to be the one developing these potentially catastrophic technologies, because leaving that to geopolitical adversaries carries risks of its own.
“A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too,” Altman wrote in 2023.
I turned to Altman’s product, ChatGPT, to ask about his comments on existential threats to humanity. Specifically, I asked if Altman had talked about rogue AI or the end of the world on the Joe Rogan podcast. Hilariously, ChatGPT said he hadn’t appeared on Rogan. Altman did, in fact, appear on Episode 2044 of the Joe Rogan Experience, first released on October 6, 2023.
I corrected ChatGPT, and it gave me the now-clichéd “you’re right” mea culpa. The quotes it offered:
- “There are risks… if this technology goes wrong, it can go quite wrong.”
- “The thing that I worry about is we lose control of the systems…”
- “This could go really, really wrong… like lights-out wrong.”
That last quote isn’t accurate, as far as I can tell. It’s not in YouTube’s transcript of the episode. But Altman did say something very close in an interview with the StrictlyVC podcast. “The bad case—and I think this is important to say—is, like, lights-out for all of us,” Altman explained to a room full of people. Close, but not exact, which perhaps demonstrates how these AI systems can fail people in practice.
Anthropic CEO Dario Amodei has made similar statements, telling Axios earlier this year that “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.” Amodei claims that “AI-enabled authoritarianism terrifies me.”
Amodei has also warned that anyone with a STEM degree could make a bioweapon with the help of AI models, and he has called for guardrails. Some of those guardrails have gotten Anthropic into trouble: the Pentagon has blacklisted the company and is in the process of purging Claude from its systems after Amodei refused to drop protections that prohibited the use of Claude for mass domestic surveillance and autonomous weapons systems.
If someone testifies that they’ve made a tool that could potentially end the world, you’d expect that person to be immediately marched out in handcuffs. That’s an idea that was floated to me third-hand a couple of years ago, and I wish I knew who originally said it. But it’s spot-on.
Think about it in any other context. Someone says that they’ve built a weapon that could go rogue and literally end life on planet Earth. Does the federal government just act like the only fix is light regulations that tinker around the edges? Or do the executives at that company get rounded up and tossed in jail for making terrorist threats?
Threatening to eliminate livelihoods altogether is a threat to human life
Aside from the rise of Skynet, there’s the more immediate matter of job displacement. Many companies have cited AI as a reason for layoffs in the past year, even if they sometimes have an incentive to use it as a convenient excuse. But there’s no denying that AI is now good enough at writing and other white-collar work to cause real disruption in the labor market.
The AI CEOs are keen to tell everyone that these disruptions are coming, insisting that the government should do something about it while lobbying that same government to stay out of their hair. Perhaps no one exemplifies this attitude better than Elon Musk, whose company xAI makes the Grok AI chatbot.
“Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI,” Musk wrote on Friday. “AI/robotics will produce goods & services far in excess of the increase in the money supply, so there will not be inflation.”
I’ve argued before that it’s ridiculous for Musk to insist we’ll get a utopian world of government-provided abundance. During his time as President Trump’s henchman last year, the billionaire helped dismantle USAID, cut funding for vital programs, and railed against people he claimed were milking the system.
His so-called Department of Government Efficiency (DOGE) helped purge roughly 300,000 federal employees, and he made it his mission to insist that people were getting government handouts they didn’t deserve. Now this is the guy telling you not to worry about AI because the government is going to hand out free money? Absurd.
Why would anyone try to sell the public a product on the idea that it’s going to take their job? Because the pitch isn’t for you. It’s for investors, the government, and the people who purchase enterprise software for companies. You’re just supposed to focus on making your avatar look like a Studio Ghibli movie.
An unelected ruling class making decisions for all
The AI elites are all selling their products as inevitable. Part of their sales pitch is that there’s nothing you can do to stop any of it. And the public just needs to accept it while finding ways to work within a system where AI causes job losses. These oligarchs—and they are very much oligarchs, vying to be the favored members of the ruling class—were not elected. But they will nonetheless dictate what your life looks like in the next year, five years, or 20 years, if you’re lucky enough to survive the robot uprising.
Altman himself wrote a blog post a week ago, after the attack on his home, sharing a photo of his husband and child “in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me.” It seems Altman is doing his best to humanize himself in an effort to ward off future attacks.
Whatever happens, it feels like the AI executives have painted themselves into a corner. They’ve told everyone their product has the potential to destroy everything. They were the doomers, if we want to call it that, at least when it was convenient. Now we seem to be entering a different era, in which the same people who warned us about the dangers of AI want us to look exclusively at what they claim are enormous benefits for society, with little to show for it so far.
It’s unclear how you put that doomer genie back in the bottle.