Reverse-Engineering of Human Brain Likely by 2020


Reverse-engineering the human brain so we can simulate it using computers may be only a decade away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity is Near.


It would be the first step toward creating machines that are more powerful than the human brain. These supercomputers could be networked into a cloud computing architecture to amplify their processing capabilities. Meanwhile, algorithms that power them could get more intelligent. Together these could create the ultimate machine that can help us handle the challenges of the future, says Kurzweil.

This point where machines surpass human intelligence has been called the "singularity." It's a term that Kurzweil helped popularize through his book.


"The singular criticism of the singularity is that brain is too complicated, too magical and there's something about its properties we can't emulate," Kurzweil told attendees at the Singularity Summit over the weekend. "But the exponential growth in technology is being applied to reverse-engineer the brain, arguably the most important project in history."

For nearly a decade, neuroscientists, computer engineers and psychologists have been working to simulate the human brain so they can ultimately create a computing architecture based on how the mind works.

Reverse-engineering some aspects of hearing and speech has helped stimulate the development of artificial hearing and speech recognition, says Kurzweil. Being able to do that for the human brain could change our world significantly, he says.

The key to reverse-engineering the human brain lies in decoding and simulating the cerebral cortex - the seat of cognition. The human cortex has about 22 billion neurons and 220 trillion synapses.


A supercomputer capable of running a software simulation of the human brain doesn't exist yet. Researchers would require a machine with a computational capacity of at least 36.8 petaflops and a memory capacity of 3.2 petabytes - a scale that supercomputer technology isn't expected to hit for at least three years, according to IBM researcher Dharmendra Modha. Modha leads the cognitive computing project at IBM's Almaden Research Center.
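The cited figures can be cross-checked with some quick arithmetic (a sketch; the bytes-per-synapse number is derived here and does not appear in the article):

```python
# Back-of-envelope check of the cited memory requirement, assuming the
# simulation stores a fixed amount of state per synapse.
NEURONS = 22e9      # cortical neurons cited in the article
SYNAPSES = 220e12   # cortical synapses cited in the article
MEMORY_PB = 3.2     # Modha's cited memory requirement, in petabytes

# Implied storage budget per synapse
bytes_per_synapse = (MEMORY_PB * 1e15) / SYNAPSES
print(f"~{bytes_per_synapse:.1f} bytes of state per synapse")  # ~14.5
```

In other words, Modha's 3.2-petabyte figure works out to roughly 15 bytes of simulated state per synapse.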

By next year, IBM's "Sequoia" supercomputer should offer a peak performance of 20 petaflops, and an even more powerful machine is likely in two to three years.


"Reverse-engineering the brain is being pursued in different ways," says Kurzweil. "The objective is not necessarily to build a grand simulation - the real objective is to understand the principle of operation of the brain."

Reverse engineering the human brain is within reach, agrees Terry Sejnowski, head of the computational neurobiology lab at the Salk Institute for Biological Studies.


Sejnowski says he agrees with Kurzweil's assessment that about a million lines of code may be enough to simulate the human brain.

Here's how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs, or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying lossless compression, that information can be compressed into about 50 million bytes, according to Kurzweil.


About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
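Kurzweil's chain of arithmetic can be written out directly (a sketch; the ~25-bytes-per-line conversion is an assumption used here to reproduce his final figure, not something stated in the article):

```python
# Kurzweil's back-of-envelope chain, as described in the article.
BASE_PAIRS = 3e9
bits = BASE_PAIRS * 2                # 2 bits per base pair (4 possible letters)
bytes_uncompressed = bits / 8        # 750 MB; the article rounds to ~800 MB
bytes_compressed = 50e6              # after lossless compression (his claim)
brain_bytes = bytes_compressed / 2   # "about half of that is the brain"
lines_of_code = brain_bytes / 25     # assumes ~25 bytes per line of code
print(f"~{lines_of_code:,.0f} lines of code")
```

Note that the only measured quantity in the chain is the genome size; the compression ratio, the brain's share, and the bytes-per-line conversion are all estimates.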

But even a perfect simulation of the human brain or cortex won't do anything unless it is infused with knowledge and trained, says Kurzweil.


"Our work on the brain and understanding the mind is at the cutting edge of the singularity," he says.


Photo: A graphic overlay shows neural connections on a scan of IBM researcher Dharmendra Modha's brain/IBM





DISCUSSION

flops != neurons

lots of flops != cognition

There is no reason to assume excelling in flops would relate to excelling in machine intelligence.

The actual singular criticism of the "singularity" isn't that "the brain is too complicated"; it's that there is no reason to make the wild and baseless assumption that technological progress measured on an arbitrary scale can say anything meaningful about what we do not yet understand, or about the date by which we will understand it. That is bad Bad BAD science.

Cochlear implants are a totally different ball game from stuff actually inside the brain. The neurons you need to experiment with for that research are accessible, easy to interface with, and pretty basic from a signaling standpoint compared to anything inside the head. You can't even get an electrode to stay put in the brain without getting extremely exotic about coatings and structure to keep the body from responding to it in a way that screws up your experiment, and even then it still does. The impedance changes as a totally nonlinear function of time, the electrode moves, and the environment around it changes due to its presence. It's damn hard to make the same measurement twice under stable conditions, let alone the millions of times you'd need before you could really start to understand anything about how cognition might work.

Speech processing, as he's referring to it, is kind of misleading. Once you decide to go about it by identifying phonemes and mapping their sequences to words (you have a lookup table for that), you need a way of doing the classification part: taking the 1-D audio signal and finding which snips of time are which phoneme, without prior knowledge. You can use neural networks for that. "Neural networks" has the word "neural" in it, but you are not using them in any way that has to have anything to do with how the brain processes sound and speech. You could also use matched filtering, SVMs, Bayesian classifiers, bacterial foraging optimization, or whatever works best, to do the classification task. The process does not mimic what we know about the brain at all. It is derived from what we know about our own technology and math.
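To make that point concrete, here is a minimal sketch of a generic classifier applied to made-up "phoneme" feature vectors (the features and labels are invented for illustration; real systems use spectral features such as MFCCs). Nothing in it is specific to neural networks or to the brain, and a different classifier could be swapped in without changing the pipeline:

```python
# Toy nearest-centroid classifier: phoneme labeling treated as a generic
# pattern-classification task, with the classifier fully interchangeable.
import math

# Hypothetical 2-D feature vectors for two phoneme classes
training = {
    "/a/": [(1.0, 0.9), (1.1, 1.0), (0.9, 1.1)],
    "/s/": [(5.0, 4.8), (5.2, 5.1), (4.9, 5.0)],
}

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(x):
    # Nearest-centroid rule; an SVM or neural net would slot in here unchanged.
    return min(centroids, key=lambda lb: math.dist(x, centroids[lb]))

print(classify((1.05, 0.95)))  # -> /a/
```

The "neural" choice is just one of several interchangeable classifiers for the same decision rule; none of them commits you to any claim about how cortex does it.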

So welcome to the "trans-science" dimension, one and all! The place where hard science (or at least its vocabulary) and cultism, people's age-old fear of death, and new-age spirituality all mingle.

Someone's mapping all the synapses in the brain. How do they know what parameters to measure, since we still have no idea how it functions? That's like being told to "measure your grandmother, we want to map her." What information about her do we want to retain? Height and weight? Response time? Mood? Smell? So, back to the brain: how does data get stored? How does data get recalled and manipulated? If we don't know these details, how can we know what to measure to build our map, and how can we build a computational model of it? A model needs to capture the underlying process without being infinitely complex. How do we make design decisions (what can be thrown out) without understanding the process?

Really the assumption which could blow all of this out of the water is that the human mind is computational in nature (and therefore can be simulated by computational means). I'm not saying there's some magic fairy dust making us tick, but there are lots of physical systems (and lots more biological systems) that aren't computational. Just because the most complex systems humans can create synthetically are computational doesn't mean that every system displaying complex behavior must be, or is, computational in nature.

Other more personal problems:

1) Ray is not an AI expert, at all. He writes AI-themed pop-sci books full of conjecture and predictions based on a superficial understanding of the state of the art. Real cutting-edge AI work is boring and unsexy unless you have a math fetish.

2) Back-of-the-envelope calculations relating Moore's law (which is not a law, more like the industry's self-fulfilling prophecy) to the capacity to create machine intelligence are completely unfounded when you think about it. It is as inane as saying that if you simply have a set of oil paints you can recreate everything in the Louvre shortly after you get them, without knowing how to paint first. Can you imagine how insulting statements like that would be to dedicated artists? It's the same for computer scientists when Ray and his cult buddies mouth off.

3) AI research funding has ebbed and flowed, suffering several funding "winters". This makes it hard as hell to get anything done. Guys like Ray are partly to blame. They are renowned for something science/engineering related which has nothing to do with AI. They extrapolate wildly, with no background in this stuff other than science-new-age-philosophy nonsense, and people listen to them because of their seemingly related tech credentials. People develop false expectations, and then researchers who want funding extrapolate wildly from what they know is real to try to land some, adding to the wrong perception. The research is completed and it is not what the sponsors imagined, because what they imagined wasn't real. Then no one funds AI research that shares the same buzzwords/vocabulary as the last failed attempt for about 10 years, or until program managers circulate enough to forget about the last big screwup.

4) The problem with everything these pop-sci guys say is that they ASSUME that everything (technology, algorithms, models) is infinitely extensible. It turns out this is not the case with nearly anything in AI, or anything at all. You can't turn on a neural network and pump data into it for 20 years and have it grow into a young adult. Learning algorithms learn within the confines of what they are programmed to learn, and they have hard limits. There comes a point when the assumptions you used to define your model no longer hold, or your problem size grows to a point that renders the method you were using invalid.
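A miniature illustration of those hard limits, using nothing beyond a textbook single-layer perceptron: it learns AND (linearly separable) easily, but no amount of training lets it learn XOR, because XOR lies outside its hypothesis space:

```python
# A single-layer perceptron learns within the confines of linear separability.
def train_perceptron(samples, epochs=100, lr=0.1):
    """Classic perceptron update rule; returns the learned decision function."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0

data = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [((x1, x2), x1 & x2) for x1, x2 in data]
XOR = [((x1, x2), x1 ^ x2) for x1, x2 in data]

and_fn = train_perceptron(AND)
xor_fn = train_perceptron(XOR)
print("AND learned:", all(and_fn(x1, x2) == (x1 & x2) for x1, x2 in data))
print("XOR learned:", all(xor_fn(x1, x2) == (x1 ^ x2) for x1, x2 in data))
# -> AND learned: True
# -> XOR learned: False
```

More epochs, more data, or a smaller learning rate change nothing here; extending the model's reach requires changing the model itself (e.g. adding a hidden layer), which is exactly the kind of step "just keep training it" glosses over.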

5) The problem I have with these guys is that they sit around making hugely broad generalizations about what we'll have in 20 years and think they are contributing something. As if broad, baseless generalizations were HARD to think up. As if they were some kind of visionary for doing the same thing sci-fi writers have always done, except their stories have boring plots and fallaciously ended up in the non-fiction section of Barnes & Noble. At least sci-fi writers are honest: while what they do is often extrapolated from real science and technology, they call it fiction. In that way it is inspiring rather than misleading. It's like the difference between a magician and a psychic.

ughh

sorry for rant, (that's ok, big blocks of words are usually skipped on the inter-nets anyway)

-D