Noted killer robot-fearer Elon Musk has a plan to save humanity from the looming robopocalypse: developing advanced artificial intelligence systems. You know, the exact technologies that could lead to the robopocalypse.

Let’s unpack that one a little bit.

Yesterday, Tesla’s boss, along with a band of prominent tech executives including LinkedIn co-founder Reid Hoffman and PayPal co-founder Peter Thiel, announced the creation of OpenAI, a nonprofit devoted to “[advancing] digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

The founders are already backing the initiative with $1 billion in research funding over the coming years. Musk will co-chair OpenAI with venture capitalist Sam Altman.

As Altman explained in an interview, the premise of OpenAI is essentially that advanced artificial intelligence is coming, and its development should be shared among everyone, not just Google’s shareholders. This sounds fine—great, even.

The weird part is the justification: essentially, Musk and Altman seem to think kickstarting an open-AI revolution is the only way to save us from SkyNet. Here’s Altman’s response to a question about whether accelerating AI technology might empower people seeking to gain power or oppress others:

“Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else,” Altman said. “If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.”

So, when the killer robots come—and make no mistake, they ARE COMING—Musk, Altman and their band of Avengers will all be able to fight back... with their own killer robots? If this sounds eerily reminiscent of the “a good guy with a gun would’ve stopped that bad guy with a gun” argument, that’s because it’s exactly the same logic. Except applied to a world where guns don’t even exist yet.

Another idea? We could stop trying to build superintelligent AI. That would probably be the safest course of action if we really, truly thought the machines were going to try to wipe us out.

[BackChannel h/t The Guardian]

Follow the author @themadstone

Top image: Francois Mori/AP