While the comparison between the computer and the human brain is one that has been made for over half a century, the way each one processes information could not be more different. Now, IBM researchers have designed a revolutionary chip that, for the first time, actually mimics the functioning of a human brain.
Are we finally on the verge of true artificial intelligence?
Earlier this year, IBM's Watson computer made history by trouncing Jeopardy champs Ken Jennings and Brad Rutter in an intimidating display of computer overlord-dom. But to compare Watson's computing power to the complexity of the human brain would still constitute a pretty epic oversimplification of what it means to "think" like a human.
When the human brain formulates a thought, learns a new skill, or digs deep in its archives to recover a memory, it does so in a uniquely dynamic way. There are billions upon billions of neurons in that head of yours, and the strength and number of each one's connections with other neurons are constantly in flux. The plastic nature of these neural networks allows computation and memory to become closely intertwined, the result being a fantastically efficient and powerful "processor."
Computers, by comparison, must trudge through information one bit at a time, channeling each bit back and forth between connected, but discrete, processor and memory units. The more complicated the task, the more bits of information the computer needs to shift back and forth between its distinct components.
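To make that back-and-forth concrete, here's a deliberately toy Python sketch. It's purely illustrative (nothing like a real CPU, and certainly not IBM's hardware): a lone "processor" that can only touch data by ferrying it across a bus to and from a separate memory.

```python
# Toy illustration of the shuttle between discrete processor and memory.
# Every operand has to cross the bus before anything can be computed.

memory = {"a": 2, "b": 3, "sum": 0}   # the discrete memory unit
traffic = 0                            # count trips across the memory bus

def load(addr):
    global traffic
    traffic += 1                       # one trip: memory -> processor
    return memory[addr]

def store(addr, value):
    global traffic
    traffic += 1                       # one trip: processor -> memory
    memory[addr] = value

# Even a trivial computation means shuttling data back and forth:
store("sum", load("a") + load("b"))
print(memory["sum"], "computed with", traffic, "bus trips")  # 5 ... 3 trips
```

Every operation, however small, costs round trips across that processor/memory divide, and the more complicated the task, the more trips pile up. The brain has no such divide.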
Some people may object to the use of the word "trudge" to describe the way a computer goes about making sense of information, but compared to the efficiency of the brain there's just no other way to describe it. Sure, modern computers churn through impressive amounts of information at impressive speeds, but that throughput comes in no small part from the enormous quantities of electrical power the process consumes.
Consider, for example, that Watson needed 16 terabytes of memory, 90 powerful servers, a total of 2880 processor cores, and mind-boggling quantities of electrical power just to wrap its big computery head around the concept of wordplay. The idea of fitting all that hardware inside a space as small as your head (no offense) and making it run on 10 watts of power has long been the stuff of fantasy.
But all that could soon change in a big way, thanks to developments in the field of cognitive computing. Today, a team of scientists led by IBM researcher Dharmendra Modha has announced the creation of two demonstration chips that not only store and process information in close parallel, the way a human brain does, but actually possess "neurons" and "synapses" (the artificial neurons and synapses numbering in the hundreds and thousands, respectively) that will soon be capable of forming, strengthening, and breaking connections on the fly. What's more, they do it all with about 1000 times less power than your conventional computer.
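IBM hasn't published the internals of these chips, so take the following as a generic flavor of the idea rather than the company's actual design: a toy leaky integrate-and-fire "neuron" in Python, wired to a "synapse" whose weight strengthens when it helps the neuron fire. This is a textbook-style model, not the neurosynaptic core itself.

```python
# Toy spiking neuron with a plastic synapse -- a generic
# leaky-integrate-and-fire model with a crude Hebbian update,
# NOT IBM's actual neurosynaptic-core design.

class Synapse:
    def __init__(self, weight=0.5):
        self.weight = weight  # the synapse itself is the memory

    def strengthen(self, rate=0.05):
        # "Fire together, wire together": reward a synapse whose
        # input helped push the neuron over threshold.
        self.weight = min(1.0, self.weight + rate)

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, synapse, input_spike):
        self.potential *= self.leak            # charge leaks away over time
        if input_spike:
            self.potential += synapse.weight   # weighted input accumulates
        if self.potential >= self.threshold:   # threshold crossed: fire
            self.potential = 0.0
            synapse.strengthen()               # learning happens in place
            return True
        return False

syn, neuron = Synapse(), Neuron()
for t in range(10):
    fired = neuron.step(syn, input_spike=True)
    print(f"t={t} weight={syn.weight:.2f} fired={fired}")
```

Run it and you'll see the neuron fire more readily as its synapse strengthens. The point of the sketch: processing (the membrane potential) and memory (the synaptic weight) live in the same structure and update together, which is exactly what "storing and processing information in close parallel" is getting at.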
The architecture behind these microchips breaks sharply with today's step-by-step, sequential methods of computing. The researchers call the design a "neurosynaptic core."
The public probably won't see neurosynaptic cores in consumer technology for at least another ten years. (DARPA, on the other hand, which has funneled over $40 million into the cognitive computing project, may be an entirely different story.)
According to Modha, the team's eventual goal is "a human-scale cognitive-computing system." What does that mean? It means that IBM believes these revolutionary chips represent the beginnings of something huge. Like, a chip with 10 billion neurons and 100 trillion synapses huge; as in a computer-the-size-of-a-shoe-box-that's-about-half-as-complex-as-a-human-brain huge.
In other words: if you've ever wondered what the singularity smells like, take a good whiff; this is probably about the closest we've ever been.
Via Technology Review and The New York Times
Top image via Takito/Shutterstock