AI More Dangerous Than Nukes

Musk has been instrumental in developing generative AI tech, first as a major investor in OpenAI, and later as a major competitor to the creator of ChatGPT. Musk has long warned of the dangers of AI while simultaneously trying to build it, creating a strange kind of cognitive dissonance.
“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” Musk tweeted in August 2014.
Many people pointed out at the time that this was a rather absurd claim, even allowing that our collective imagination has been shaped by dystopian sci-fi. And given the sensationalistic media coverage, it's easy to see why people still worry about this stuff.
Large language models have opened new doors in the world of chatbots, and Musk has clearly been impressed, launching his own version with Grok on X and starting his own AI company headquartered in Nevada. Tools like ChatGPT, Grok, and Gemini allow people to ask questions and receive responses that sound remarkably human. But the dirty little secret of this AI hype cycle is that even though these tools sound like they're applying logic and reason to a question, that's not what's happening under the hood.
LLMs are essentially predictive text machines. They spit out words at such a rapid rate that they can trick us into thinking real "thinking" is going on. But ask a chatbot an illogical question it has never encountered before, and it becomes clear that no deep reasoning is being applied.
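To make that concrete, here is a minimal sketch of next-token prediction, the basic mechanism behind these models. The toy bigram model below is a deliberately simplified stand-in for a real LLM (which learns billions of parameters rather than keeping raw word counts); the corpus, function names, and sampling loop are illustrative assumptions, not any company's actual implementation. The point it demonstrates is that text is generated one token at a time based on statistical likelihood, with no reasoning step anywhere in the loop.

```python
# Toy next-token predictor: counts which word follows which, then samples.
# Real LLMs use learned neural-network weights instead of raw counts, but the
# generation loop is the same idea: pick a likely next token, append, repeat.
import random
from collections import defaultdict, Counter

# Tiny illustrative corpus (an assumption for the demo, not real training data).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally how often each word follows each other word. The corpus is treated as
# cyclic so every word has at least one recorded follower.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    bigram_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    followers = bigram_counts[word]
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Generate text one token at a time. Nothing here checks whether the output
# is true or logical; it is only statistically plausible given the counts.
token = "the"
output = [token]
for _ in range(8):
    token = predict_next(token)
    output.append(token)
print(" ".join(output))
```

The output can read like fluent English, yet the program never models the meaning of any word, which is why such systems can produce confident nonsense when pushed outside familiar patterns.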
It's entirely possible that AI could be very dangerous in the future, but the idea that Skynet is just over the horizon simply isn't grounded in reality. The real danger of AI is people believing it can do far more than it actually can.