The Future Is Here
We may earn a commission from links on this page.

The Unexpected Places Where Artificial Intelligence Will Emerge


We often imagine that artificial intelligence will be created by mad engineers who build a big box that starts thinking. But is that really how AI would develop in real life? Probably not. AI could emerge from computer systems that are part of our everyday lives right now. Here are some possibilities.

Photo: Google/Connie Zhou

Autonomous War Machines

Companies like iRobot and labs like Boston Dynamics (the one that created the infamous Big Dog robot) are working on military robots that are as lifelike as possible. What if it turns out that human-like intelligence isn't something that starts in our brains alone, but is a function of our brains' relationship with our bodies? That's what MIT roboticist Cynthia Breazeal believes, and it means AI is most likely to emerge from robots, which make decisions and learn from their environments with their bodies.


The autonomous robots we build for war, from humanoid or dogoid ones to UAVs, could be the place where AI is born. How would it happen? Would it be instantaneous, like when Cyberdyne boots up Skynet in Terminator? Or would it be something that happens gradually, as these soldier robots fight in battle after battle, slowly becoming self-aware? That's the premise of Ken MacLeod's profoundly humane novel The Night Sessions, where battle robots gain what MacLeod calls "kinetic intelligence" during traumatic war experiences.


Google Search

Maybe AI won't need a body to evolve at all. Maybe it just needs billions of examples of social interactions to figure out how to think like a human. That's why Google founder Larry Page is so excited about the idea that his company could be the place where AI is born. (This is also the premise of Robert Sawyer's Wake series, which he researched at Google.)

What does an AI need in order to become self-aware? Perhaps it needs the ability to recognize social cues and objects, just like humans do, before making a decision. Google's massive server farms are packed with information about what humans find relevant, what kinds of things they group together, and how they react to them. Perhaps an AI will arise out of this tangle of human questions and categorizations, a kind of hive mind that is also a single mind. In a sense, this is what William Gibson suggests will happen in Neuromancer, when the first AI is born out of the substance of cyberspace itself.


Netflix and Amazon Recommendation Engines

How does a human infant evolve from a blob of incoherent needs into a person with thoughts and plans and a sense of self? In part, they do it by figuring out what they want — and, more importantly, what the people around them want. Discovering other people's wants is fundamental to communication, language, and social bonds. Perhaps that means AI will grow out of a brilliantly designed recommendation engine. First Netflix will start making genuinely good recommendations, and then it will become a self-aware entity that has its own taste in movies. If Amazon starts suggesting things that are actually relevant to your interests, you may be watching the birth of a new kind of intelligence.
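The "figuring out what other people want" trick these engines rely on is, at its simplest, collaborative filtering: find the users whose past choices look like yours, and suggest what they liked. Here's a toy sketch of the idea (the movie ratings are entirely made up; real systems like Netflix's are vastly more sophisticated):

```python
# Toy user-based collaborative filtering: recommend items liked by the
# users whose past ratings most resemble yours. All ratings are hypothetical.
from math import sqrt

ratings = {
    "alice": {"Alien": 5, "Blade Runner": 4, "Her": 2},
    "bob":   {"Alien": 4, "Blade Runner": 5, "Moon": 4},
    "carol": {"Her": 5, "Moon": 3},
}

def similarity(a, b):
    """Cosine similarity over the movies both users rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[m] * b[m] for m in shared)
    norm_a = sqrt(sum(a[m] ** 2 for m in shared))
    norm_b = sqrt(sum(b[m] ** 2 for m in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Score movies the user hasn't seen by similarity-weighted ratings."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], their_ratings)
        for movie, r in their_ratings.items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # the one movie alice hasn't rated: ["Moon"]
```

The interesting part is that nothing here "understands" movies at all; the taste emerges purely from patterns in other people's wants.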


Spam Filters

Like recommendation engines, spam filters are programs that are designed to learn human behavior and make decisions based on it. They have to be able to read, and to figure out when words are being tweaked in tricky ways to circumvent previous generations of spam filter. In Charles Stross' brilliant novel Rule 34, the author imagines that an artificial intelligence will emerge from the web of complex decision-making required to run a truly robust spam filter that accounts for all the human methods of scamming, railroading, and generally abusing each other. Also, like an AI that emerges from a recommendation engine, this AI would emerge in part by anticipating human desires and needs — which is about as human as it gets.
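The classic way a filter "reads" in this sense is naive Bayes: count which words show up in spam versus legitimate mail, then score new messages by those odds. A toy sketch of the idea follows (the training messages are invented, and real filters layer many more signals on top of this):

```python
# Toy naive Bayes spam scoring: learn word frequencies from labeled
# examples, then score new mail. Training data is entirely made up.
from collections import Counter
from math import log

spam_msgs = ["win free money now", "free pills win big"]
ham_msgs  = ["meeting notes attached", "lunch plans for friday"]

spam_counts = Counter(w for m in spam_msgs for w in m.split())
ham_counts  = Counter(w for m in ham_msgs for w in m.split())
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    """Log-likelihood ratio with add-one smoothing; positive means spammy."""
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham  = (ham_counts[w] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += log(p_spam / p_ham)
    return score

print(spam_score("win free money"))     # positive: looks like spam
print(spam_score("lunch meeting notes"))  # negative: looks legitimate
```

This is also why spammers tweak words like "V!@gra": an unseen spelling falls back to the smoothed prior, so each generation of filter has to learn the new variants.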


The NSA Surveillance System

The NSA surveillance system is like a spam filter for human behavior. It looks for transactions and behaviors that could be construed as abusive and flags them. The difference is that instead of trying to catch V!@gra emails over SMTP, it's trying to predict potentially criminal behavior by correlating data from social networks, financial transactions, surveillance cameras, and voice communications too. Because this requires an incredibly sophisticated analytical capability, as well as near-omniscient access to every human activity, the TV series Person of Interest has suggested that it could unexpectedly give rise to an AI.


Robotic Space Explorers

The semi-autonomous explorers that we've sent to Mars have to make decisions about how to navigate an unknown terrain without human input. There is a roughly 20-minute lag between Mission Control at JPL and the Mars Rover Curiosity. So if she's rolling along and encounters a rough patch, it's up to her alone to figure out how to remain balanced on her six wheels as she traverses it.


Future robot explorers on Jupiter's moon Europa will be dealing with a much longer lag — possibly an hour or more. That means we'll have to build them to make decisions about where to go and what to explore based on assessing their environments and making real-time decisions without us. This led science journalist Richard Rhodes, author of The Making of the Atomic Bomb, to comment at a recent SETI conference that AI might emerge from these space-faring robots. That would mean that the first AI would be a scientist explorer, its intelligence emerging as it explores a new environment with its body — much the way battlefield robots do, but with a very different motivation.
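Those lags fall straight out of light-travel time: radio signals cross the distance at the speed of light, and the distances vary as the planets move. A quick back-of-the-envelope calculation, using rough published distance ranges:

```python
# One-way signal delay = distance / speed of light.
# Distances are approximate and vary with orbital positions.
C_KM_S = 299_792.458  # speed of light in km/s

distances_km = {
    "Mars (closest approach)": 54.6e6,
    "Mars (farthest)": 401e6,
    "Jupiter/Europa (closest)": 588e6,
    "Jupiter/Europa (farthest)": 968e6,
}

for body, d in distances_km.items():
    minutes = d / C_KM_S / 60
    print(f"{body}: ~{minutes:.0f} min one way")
```

That works out to roughly 3 to 22 minutes one way for Mars and well over half an hour for Europa, so a round-trip "ask Mission Control" for a Europa probe really can exceed an hour. Autonomy isn't a luxury there; it's physics.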


Financial Trade Systems

One of the big questions about all these emergent intelligences is what they will think of humans. Will they have feelings for us? Will they be able to outsmart us, or destroy us in ways we never thought possible? Jonathan Nolan, creator of Person of Interest, believes that an AI designed to stop criminal activity will be essentially benevolent. Charles Stross, author of Rule 34, imagines a sentient spam filter that is a kind of assassin.


In Ken MacLeod's novel The Night Sessions, where soldier robots become kinetically intelligent, the result is quite unexpected indeed. The battle-scarred robots become religious fanatics — some because they are desperate to heal from the traumas of war, but others because they loyally want to carry on the religious conflicts that spawned them.

Lurking beneath all these scenarios is the idea that these artificial intelligences will be smarter and more competent than us. Sure, robots might be stronger than us as well — but what they'll really have over us is a superior ability to strategize about how to use that strength. After all, many of these AIs will be able to anticipate our behavior and our wants. So the question is, what will happen when we create beings who can rightly view us the way we view a particularly smart dog?


Nowhere does that question seem more urgent than in the realm of financial trading systems, where algorithms that buy and sell can conduct transactions in a fraction of a second. That leads to bizarre, emergent forms of behavior like the 2010 "flash crash," when massive numbers of microsecond-scale transactions caused an unexpected, destructive dip in stock prices. What if the next unexpected behavior to explode out of financial trading systems were a form of superintelligence? We still don't quite understand how the flash crash happened, so what hope do we have of understanding the sentience behind it?


These are questions we can't answer now, just as we can't be sure where AI will first manifest itself. But just because we can't understand a new form of intelligence doesn't mean it will try to destroy or control us. It doesn't even mean that these intelligences will necessarily understand us. Yes, these are minds that will likely emerge out of systems that anticipate human needs and desires. But humans are not the sum of their financial and email transactions. We are emergent intelligences ourselves. Perhaps, when we finally meet our artificial counterparts, we will at last come to understand our own intelligence.

Annalee Newitz is the author of Scatter, Adapt and Remember: How Humans Will Survive a Mass Extinction. Follow her on Twitter.
