The meeting in full. Max Tegmark’s talk begins at 1:55, and Bostrom’s at 2:14.

The meeting featured two prominent experts on the matter: Max Tegmark, a physicist at MIT, and Nick Bostrom, founder of Oxford's Future of Humanity Institute and author of the book Superintelligence: Paths, Dangers, Strategies. Both agreed that AI has the potential to transform human society in profoundly positive ways, but they also raised questions about how the technology could quickly get out of control and turn against us.

Last year, Tegmark, along with physicist Stephen Hawking, computer science professor Stuart Russell, and physicist Frank Wilczek, warned about the current culture of complacency regarding superintelligent machines.

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” the authors wrote. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Nick Bostrom (Credit: UN Web TV)

Indeed, as Bostrom explained to those in attendance, superintelligence raises unique technical and foundational challenges, and the “control problem” is the most critical.

“There are plausible scenarios in which superintelligent systems become very powerful,” he told the meeting. “And there are these superficially plausible ways of solving the control problem—ideas that immediately spring to people’s minds that, on closer examination, turn out to fail. So there is this currently open, unsolved problem of how to develop better control mechanisms.”

That will prove difficult, said Bostrom, because we’ll need to have these control mechanisms in place before we build these intelligent systems.

Bostrom closed his portion of the meeting by recommending that a field of inquiry be established to advance foundational and technical work on the control problem, while working to attract top math and computer science experts into this field.

He called for strong research collaboration between the AI safety community and the AI development community, and for all stakeholders involved to embed the Common Good Principle in all long-range AI projects. This is a unique technology, he said, one that should be developed for the common good of humanity, and not just individuals or private corporations.

As Bostrom explained to the UN delegates, superintelligence represents an existential risk to humanity, which he defined as “a risk that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.” Human activity, warned Bostrom, poses a far bigger threat to humanity’s future over the next 100 years than natural disasters.

“All the really big existential risks are in the anthropogenic category,” he said. “Humans have survived earthquakes, plagues, asteroid strikes, but in this century we will introduce entirely new phenomena and factors into the world. Most of the plausible threats have to do with anticipated future technologies.”

It may be decades before we see the kinds of superintelligence described at this UN meeting, but given that we’re talking about a potential existential risk, it’s never too early to start. Kudos to all those involved.


Email the author at george@gizmodo.com and follow him at @dvorsky. Top image by agsandrew/Shutterstock