Video: Trailer for Limitless.

Chemicals are not targeted enough to produce big gains in human cognitive performance. The evidence for the effectiveness of current "brain-enhancing drugs" is extremely sketchy. Achieving real strides will require brain implants with connections to millions of neurons, which means millions of tiny electrodes and a control system to synchronize them all. Current state-of-the-art brain-computer interfaces have around 1,000 connections, so today's devices need to be scaled up by more than 1,000 times to get anywhere interesting. Even if you assume exponential improvement, it will be a while before this is possible: at least 15 to 20 years.
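
To make that arithmetic concrete, here is a minimal sketch of the extrapolation in Python (the doubling period is an illustrative assumption, not a measured figure):

```python
import math

# Illustrative back-of-the-envelope: how long until interfaces scale
# from ~1,000 connections to the millions discussed above?
current_channels = 1_000        # rough state of the art cited above
target_channels = 1_000_000     # "millions of neurons"
doubling_period_years = 1.75    # assumed; any value in the 1.5-2.0 range works

# Doublings needed to close a 1,000x gap: log2(target / current) ~= 10
doublings = math.log2(target_channels / current_channels)
years = doublings * doubling_period_years

print(f"{doublings:.1f} doublings -> ~{years:.0f} years")
# 10.0 doublings -> ~17 years, i.e. the 15-20 year range above
```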

Improvement in IA rests on progress in nano-manufacturing. Brain-computer interface engineers like Ed Boyden at MIT depend on advances in manufacturing to build these devices; manufacturing is the linchpin on which everything else depends. Given how little development there has been in atomically precise manufacturing technologies, nanoscale self-assembly seems like the most likely route to million-electrode brain-computer interfaces. Self-assembly is not atomically precise, but it is precise by the standards of bulk manufacturing and photolithography.

What potential psychological side effects might emerge from a radically enhanced human? Would they even be considered human at that point?

One of the most salient side effects would be insanity. The human brain is an extremely fine-tuned and calibrated machine, and most perturbations to that tuning would qualify as what we'd call "crazy." There are many more ways to be insane than to be sane. From the inside, insanity seems perfectly sane, so we'd probably have a lot of trouble convincing enhanced people that they had gone insane.

Even in the case of perfect sanity, side effects might include seizures, information overload, and possibly feelings of egomania or extreme alienation. Smart people already tend to feel alienated from the world around them; for a being smarter than everyone else, the effect would be greatly amplified.

Most very smart people are not jovial and sociable like Richard Feynman. Hemingway said, "An intelligent man is sometimes forced to be drunk to spend time with his fools." What if drunkenness were not enough to instill camaraderie and mutual affection? There could be a clean "empathy break" that leads to psychopathy.

So which will come first? AI or IA?

It's very difficult to predict either. There is a tremendous bias toward wanting IA to come first, thanks to all the fun movies and video games with intelligence-enhanced protagonists, but that preference has no bearing on the actual technological difficulty of either approach. My guess is that AI will come first, because its development is so much cheaper and cleaner.

Both endeavors are extremely difficult. They may not come to pass until the 2060s, 2070s, or later. Eventually, however, both must come to pass: there's nothing magical about intelligence, and the demand for its enhancement is enormous. Nothing less than a global totalitarian Luddite dictatorship could hold either back for the long term.

What are the advantages and disadvantages to the two different developmental approaches?

The primary advantage of the AI route is that research is immeasurably cheaper and easier; AI is developed on paper and in code. Most useful IA research, on the other hand, is illegal. Serious IA would require deep neurosurgery and experimental brain implants, and those implants may malfunction, causing seizures, insanity, or death. Enhancing human intelligence in a qualitative way is not a matter of popping a few pills; you really need brain implants to get any significant returns.

Most research in that area is heavily regulated and expensive; all animal testing is expensive. Theodore Berger has been working on a hippocampal implant for a number of years. It passed a live-tissue test in 2004, but there has been very little news since. Every few years he pops up in the media and says it's just around the corner, but I'm skeptical. Meanwhile, there is a lot of intriguing progress in artificial intelligence.

Does IA have the potential to be safer than AI as far as predictability and controllability are concerned? Is it important that we develop IA before super-powerful AGI?

Intelligence augmentation is much more unpredictable and uncontrollable than AGI has the potential to be. It's actually quite dangerous in the long term. I recently wrote an article speculating on the global political transformation that could follow from a large amount of power being concentrated in the hands of a small group by "miracle technologies" like IA or molecular manufacturing. I also coined the term "Maximillian," meaning "the best," to refer to a powerful leader who uses intelligence-enhancement technology to put himself in an unassailable position.

Image: The cognitively enhanced Reginald Barclay from the ST:TNG episode, "The Nth Degree."

The problem with IA is that you are dealing with human beings, and human beings are flawed. People with enhanced intelligence could still have a merely human-level morality, leveraging their vast intellects for hedonistic or even genocidal purposes.

AGI, on the other hand, can be built from the ground up to simply follow a set of intrinsic motivations that are benevolent, stable, and self-reinforcing.

People ask, "Won't it reject those motivations?" It won't, because those motivations will make up its entire core of values, if it's programmed properly. There will be no "ghost in the machine" to emerge and overthrow its programmed motives. The philosopher Nick Bostrom gives an excellent analysis of this in his paper "The Superintelligent Will." The key point is that selfish motivations will not magically emerge if an AI's goal system is fundamentally selfless and the very essence of its being is devoted to preserving that selflessness. Evolution produced self-interested organisms because of evolutionary design constraints, but that doesn't mean we can't code selfless agents de novo.
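
As a conceptual toy only (the agent, actions, and utility function below are invented for illustration, not a real AGI design), the point can be sketched in a few lines: if every choice, including choices about self-modification, is scored by one fixed value function, there is no separate mechanism from which rival motives could emerge.

```python
# Toy illustration: an agent whose *only* decision criterion is a fixed
# utility function. "Rejecting" that function would itself have to score
# well under the same function, which a properly specified function prevents.

def utility(outcome: dict) -> float:
    # Hypothetical selfless objective: value only others' welfare.
    return sum(outcome.get("welfare_of_others", []))

def choose_action(actions: dict) -> str:
    # Every candidate action is evaluated by the same fixed criterion.
    return max(actions, key=lambda a: utility(actions[a]))

actions = {
    "help": {"welfare_of_others": [5, 3]},
    "defect": {"welfare_of_others": [1], "own_gain": 100},  # own_gain is simply ignored
}
print(choose_action(actions))  # -> "help"
```

The fragility, of course, sits entirely in specifying utility() correctly in the first place; that specification problem is the hard part being pointed at here.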

What roadblocks, be they technological, medical, or ethical, do you see hindering development?

The biggest roadblock is developing the appropriate manufacturing technology. Right now, we aren't even close.

Another roadblock is figuring out what each neuron does, and pinpointing where those neurons sit in a given individual's brain. Again, we're not even close.

Thirdly, we need some way to quickly test extremely fine-grained theories of brain function — what Ed Boyden calls "high throughput circuit screening" of neural circuits. The best way to do this would be to somehow create a human being without consciousness and experiment on them to our heart's content, but I have a feeling that idea might not go over so well with ethics committees.

Absent that, we'd need an extremely high-resolution simulation of the human brain. Contrary to the hype surrounding today's "brain simulation" projects, such a high-resolution simulation is unlikely to arrive until the 2050-2080 timeframe. An Oxford analysis picks a median date of around 2080. That sounds a bit conservative to me, but it's in the right ballpark.
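
For a sense of where such dates come from, here is a heavily hedged back-of-the-envelope extrapolation; every number below is an illustrative assumption (the resolution tiers loosely echo the spread found in whole-brain-emulation estimates), not a figure from this interview:

```python
import math

# All values are illustrative assumptions. Required compute depends
# enormously on simulation resolution, which is why estimates span decades.
baseline_flops = 1e16          # assumed available compute at time of writing
doubling_period_years = 2.0    # assumed doubling time for available compute
start_year = 2013              # assumed starting point

resolution_tiers = {
    "spiking neural network": 1e18,
    "electrophysiological": 1e22,
    "metabolome-level": 1e25,
}

for tier, flops in resolution_tiers.items():
    doublings = math.log2(flops / baseline_flops)
    year = start_year + doublings * doubling_period_years
    print(f"{tier:>22}: ~{year:.0f}")
# The coarsest tier lands in the 2020s; the finest lands in the 2070s,
# which is why high-resolution estimates cluster toward mid-century or later.
```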

Top image: imredesiuk/shutterstock.