This is What Eric Schmidt Thinks About AI Fears

Image: Getty

Never mind that Google is working on a kill switch to control robots: executive chairman Eric Schmidt thinks there’s nothing to worry about, because “the state of the earth” does not support these killer AI scenarios. In other words, let’s come back to reality here.


Speaking at Stockholm’s Brilliant Minds conference, Schmidt acknowledged the fears over superintelligent AI that people like Stephen Hawking and Elon Musk have raised. The latter even hinted recently that Google itself was the only AI company that worried him.

Schmidt’s response boils down to this: Stephen Hawking? Brilliant guy! But not a computer scientist. Elon Musk? Brilliant guy! Also not a computer scientist. Stay in your lane, folks.


He adds:

The scenario you’re just describing is the one where the computers get so smart is that they want to destroy us at some point in their evolving intelligence due to some bug. My question to you is: don’t you think the humans would notice this, and start turning off the computers? We’d have a race between humans turning off computers, and the AI relocating itself to other computers, in this mad race to the last computer, and we can’t turn it off, and that’s a movie. It’s a movie. The state of the earth currently does not support any of these scenarios.

Let’s hope he’s right.

[Business Insider]


Angela Chen is the morning editor at Gizmodo.


DISCUSSION

As a person who has worked in the field all my life... there are a bunch of things that would have to happen for a rampant AI to come close to destroying us.

1) An AI would have to see humanity as an existential threat after gaining self-awareness. We can assume sapience, since an AI’s design would necessarily include lots of data to make it useful and sensors to collect that data. But we’d also have to add sentience to that mix. Anything less would not have the capacity to grow its influence beyond the fixed set of systems connected to it. And while even simple heuristic systems can lose the plot - so to speak - due to bad design or bad data, those systems usually just crash (a toy example of that failure mode is sketched after this list). Bad if it’s a stock-trading or traffic-control system... but otherwise a non-lethal threat in most cases.

2) We see the calculations done by modern computers and are impressed by the rate at which they’re done, and we extrapolate that capacity to anything a computer could theoretically be assigned to calculate. That is a classic example of bias - we are attributing capability where none exists. Since we don’t have any “thinking” computers on the level of an AI, we have no idea how they’d perform. Sorting, for example, doesn’t require a lot of power... but understanding requires a far more complex set of rules and a much more flexible algorithm.

3) AI will require specialized hardware. We will have to develop artificial neurons (a bare-bones software version is sketched after this list) and create neural processors. That means we will have to understand the structures of organic brains much more deeply... and then derive the operating parameters for those brains. That will give us an underlying “base code”, which in turn gives us the “machine language” of an AI. Then... we have to translate that into a programming language we can understand more clearly, using linguistic syntax to define the logic systems. This is not a small undertaking, considering our understanding and capability in this field are not even past the Charles Babbage stage. Also... if you consider the kinds of knowledge we’d have to gather, you’ll see that being able to program such an artificial system would necessarily require us to be able to program the organic system it is based on. That’s a whole other can of worms.

4) Even if we achieve all the necessary steps for AI... this all assumes that we will be so complacent... so hostile... and so ignorant of our creation that we will not notice the signs of a core problem evolving. The AI would have to be so far off the rails that destroying itself along with humanity would seem like a good idea. But it would also have to outwit us, connect to a large number of systems, and figure out ways to reach systems that are deliberately isolated, all without our awareness. The closest we’ve come to that is 1940s Germany. That got noticed and stopped... but not before a great human catastrophe. All this also assumes that we will be nowhere near the same level of capacity as the AI... which is laughable, considering the computer is by nature a human assistive device. It’s a device we created to help us with something we’re particularly bad at doing ourselves - which is the source of the bias I noted in 2). We’ll likely have brain-computer interfaces and be able to program our own minds, giving ourselves the benefits of computers, long before then.
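To make the point in 1) concrete, here is a minimal, purely hypothetical sketch (in Python, with made-up prices and made-up function names) of a simple heuristic trading rule hitting bad data. It isn’t drawn from any real trading system; it just shows that a narrow rule-based system fed garbage tends to fail loudly rather than quietly turn hostile.

```python
# Hypothetical illustration: a simple heuristic "trading" rule meeting bad data.
# When its inputs go wrong, a system like this doesn't scheme; it produces
# nonsense or raises an error and stops, which is exactly the point above.

def moving_average(prices, window):
    """Arithmetic mean of the last `window` prices."""
    recent = prices[-window:]
    return sum(recent) / len(recent)

def decide(prices):
    """Toy heuristic: buy if the short-term average is above the long-term one."""
    short_term = moving_average(prices, 3)
    long_term = moving_average(prices, 10)
    return "BUY" if short_term > long_term else "SELL"

good_feed = [100.0, 101.5, 99.8, 102.1, 103.0, 102.7, 104.2, 103.9, 105.1, 104.8]
print(decide(good_feed))  # works: prints BUY

bad_feed = good_feed[:-1] + [None]  # one corrupted tick from the data vendor
print(decide(bad_feed))   # crashes with a TypeError: the "lose the plot" case
```

The failure is noisy and local: bad data produces a crash or an obviously wrong order, not an escape plan.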
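And for the “artificial neurons” in 3), a bare-bones software version already exists as a textbook construction: a weighted sum of inputs plus a bias, pushed through a nonlinear activation. The sketch below is just that construction with invented numbers; it has nothing to do with any specific neuromorphic hardware, and the weights are arbitrary.

```python
import math

# A single software "artificial neuron", textbook style. The weights and bias
# are invented for illustration; in a real network they would be learned.

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through the activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Three made-up sensor readings and made-up weights.
inputs = [0.9, 0.2, 0.4]
weights = [1.5, -2.0, 0.7]
bias = -0.3
print(round(neuron(inputs, weights, bias), 3))  # a single activation, about 0.72
```

Stacking millions of these units is what today’s neural networks already do; deriving the actual “base code” of an organic neuron, as described above, is the much harder part.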

So... am I worried about AI? No. It’s going to be a long time before an AI has the capacity for that kind of destruction, and that’s something that would crop up long after we’ve created the first AI. Which, in itself... will take a significant amount of time.

We are more likely to destroy ourselves out of petty ignorance and stupidity... and an AI is actually more likely to save us, even if it is completely selfish, because our dying means it dies. There’s no logic in which that computes to a positive outcome.