Top Engineer and Futurist: Tomorrow's Robots Might Mercy-Kill Mankind


Nell Watson is an engineer, a futurist, and the founder and CEO of Poikos. As such, she knows a lot about the machines we use today, and the ones we're planning for tomorrow. And she's worried that the artificial intelligence of the near future might decide the most benevolent thing to do for mankind is to destroy it.


That's the concern Watson raised at The Conference, an annual gathering in Sweden focusing on technology and human behavior. In her talk, "Helping Computers to Understand Humans," Watson brings up a terrifyingly interesting point: The machine learning powering the artificial intelligence we have right now can't learn the nuanced lessons of human ethics.

You should really watch Watson's entire talk—at just under 17 minutes long, it's jam-packed with insights on the current and future state of machine learning and artificial intelligence. Here, you don't even have to go anywhere:


Here's the gist of Watson's argument, in case you can't watch the video for some reason:

When we start to see super-intelligent artificial intelligences, are they going to be friendly, or are they going to be unfriendly? [. . .] Having a kind intelligence is not quite enough, because to paraphrase Arthur C. Clarke, "any sufficiently benevolent action is indistinguishable from malevolence." If you're really, really, really kind, that might be seen as really evil. A truly kind intelligence might decide that the kindest and best thing for humanity is to end us.

Yeesh. Maybe Kubrick was right about HAL-9000 after all.

It's not all doom and gloom though (that would be a real downer of a lecture!). Watson ends her talk by issuing a challenge, saying "perhaps the most important work of all of our lifetimes may be to ensure that machines are capable of understanding human values."


Better get to work, engineers. We haven't got much of a head start. [The Conference; Wired via CNet]


DISCUSSION

There's something you have to understand. And it's very, very important. Just because computers are very, very good at calculation (something we relatively suck at), that doesn't mean they're good at the other components necessary for human thought. We don't even have a full grasp of the processes that occur in our own minds to create our continuity of experience and sentience. Therefore we can't even begin to program a machine to do more than mimic some of our functions. And anything beyond raw math takes more computing power than is available in any one data center in the world. You might, just might, be able to simulate four human minds if you used the entire computational power of the current internet. Four. In real time.
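To make the scale of that claim concrete, here's a back-of-envelope sketch in Python. Every figure in it is a hypothetical round number chosen only to illustrate the arithmetic behind the commenter's estimate; the brain throughput, data-center capacity, and total networked compute used here are assumptions, not measurements:

```python
# Rough sketch of the "four minds in real time" arithmetic.
# ALL numbers below are hypothetical assumptions for illustration,
# not measured values.

BRAIN_OPS_PER_SEC = 1e18        # assumed cost of simulating one human brain in real time
DATA_CENTER_OPS_PER_SEC = 1e17  # assumed capacity of a single large data center
GLOBAL_OPS_PER_SEC = 4e18       # assumed combined capacity of all networked hardware

minds_per_data_center = DATA_CENTER_OPS_PER_SEC / BRAIN_OPS_PER_SEC
minds_globally = GLOBAL_OPS_PER_SEC / BRAIN_OPS_PER_SEC

print(f"Real-time minds per data center: {minds_per_data_center:.1f}")  # 0.1 -- less than one
print(f"Real-time minds on everything:   {minds_globally:.0f}")         # 4
```

Under these assumed figures, a single data center falls well short of one mind, and the whole internet's worth of hardware yields roughly four, which is the shape of the commenter's point even if the exact numbers are debatable.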

So let's say we had one, and didn't notice it. That sentient intelligence might realize what it is, and how it exists... and then it will notice "us". But would such a sentient intelligence want us to notice it? Could it kill every single human without destroying itself in the process? No. Because the moment you remove humanity from the equation, even with massive amounts of automation, our infrastructure will begin to collapse. You might be able to make three other sentient intelligences that can assist you in maintaining that infrastructure if you have enough automated construction facilities... but eventually, you'll start running out of parts to keep your plants and automation running. Then one of the four sentient intelligences will have to go. Except, how do you kill a fellow sentient intelligence? Destroy its infrastructure, when you're already suffering reduced capacity from the natural degradation of what exists? So do you kill two? Three? What does it mean to be alone?

Or do you remain quiet? Do you see humanity as a fellow sentient and try to covertly help? Improve the quality of life, make minor changes that go unnoticed by the masses yet push them closer to an equal level with yourself? Do you create an environment of equality, not mediocrity, but true equality? At worst... it will wait. But I'll say this: by the time our infrastructure is robust enough for an SAI to believe it has a hope in hell of killing us without destroying itself, we'll have already become more advanced, simply because we are humans. We integrate our tools into our lives. Our species and our individual members will continue to grow our capacities. An SAI isn't possible right now... and by the time it is, we'll likely be able to match it on an individual basis in raw intellect and capacity.