Would it be evil to build a functional brain inside a computer?

There’s been a lot of talk recently about using supercomputers to simulate the human brain. But as scientists get progressively closer to achieving this goal, they’re going to have to consider the ethics involved. By making minds that live inside machines, we run the risk of inflicting serious harm.

Brain mapping is all the rage right now. Europeans have their $1.6 billion Human Brain Project, and Obama recently okayed the US’s $100 million brain mapping initiative. There’s also Sebastian Seung’s effort to map the brain’s connectome, and the OpenWorm project — a plan to simulate the C. elegans nematode worm in a computer. And recently, a team of artificial intelligence theorists, roboticists, and consciousness experts announced their intention to develop a robot with the intelligence of a three-year-old child.

The breakthroughs are starting to come in. Just last week, European scientists produced the first ultra-high resolution 3D scan of the entire human brain, capturing its physical detail at an astonishingly fine resolution of 20 microns.

Given all this, it’s only a matter of time before scientists take this newfound insight and start building brains inside of computers. At first, these emulations will be simple. But eventually, they’ll exhibit capacities that are akin to the real thing — including subjective awareness.

In other words, consciousness.

Or sentience. Or qualia. Or whatever else you want to call it. But whichever words we choose to use, we’ll need to be aware of one incredibly important thing: These minds will live and have experiences inside of computers. And that’s no small thing — because if we’re going to be making minds, we sure as hell need to do it responsibly.

'We want to be good'

This was the topic of Anders Sandberg’s talk at the recently concluded GF2045 congress held in New York City. Sandberg, a neuroscientist working at the University of Oxford’s Future of Humanity Institute, is concerned about the harm that could be inflicted on software capable of experiencing thoughts, emotions, and sensations.

“We don’t want to build a future built on bad methods,” he told the audience. “Ethics matter because we want to be good.”

But as his presentation suggested, it’s not going to be easy. In discussing the potential for virtual lab animals, Sandberg noted a catch-22: we can’t replace animal testing with simulations until we have accurate simulations — and developing those will likely require testing on real lab animals. We’re already having a hard time wrapping our heads around real animals having moral worth, let alone the idea of emulations carrying moral weight.

Sandberg quoted Jeremy Bentham, who famously said, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” And indeed, scientists will need to be very sensitive to this point.

Sandberg also pointed to the work of Thomas Metzinger, who back in 2003 argued that it would be horrendously unethical to develop conscious software — software that can suffer.

Metzinger had this to say about the prospect:

What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development — we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby — no representatives in any ethics committee.

But can software actually suffer? Sandberg said it’s difficult to know at this point, but he suggested that we might want to be safe rather than sorry.

“Perhaps it would be best to assume that any emulated system could have the same mental processes as what you're trying to emulate,” he said.

Virtual lab animal ethics

But Sandberg argued that all is not lost; he made the case that we can be moral when making brains. We just have to be smart — and compassionate — about it.

Future scientists should ameliorate virtual suffering in their subjects and work to ensure a high quality of life. For example, Sandberg proposed that virtual mice be given virtual painkillers. We’ll also have to consider the ethics of euthanizing conscious software programs and the potential harm imposed by death and the cessation of experiences.

It might also come to our attention that Second Life-like environments are too boring, requiring us to scale up the VR accordingly.

As an aside, any emulated brain will need to be endowed with an emulated body situated within a simulated environment. The purpose of the brain is to present us with a model of the world, and it does so by drawing information from the senses. So, without a body and an environment, an emulated brain would not be able to function properly.

Human Emulations

And then there’s the issue of building a human brain inside a computer — a development that will introduce an entire battery of questions and issues.

For example, would we believe that an emulated human brain is conscious? And would it have rights?

It’s conceivable that, without the proper foresight and necessary prescriptions, a successful human emulation will be considered a non-entity — a non-person devoid of any legal protections and rights. As a consequence, it could be subject to destructive editing and loss of (virtual) bodily autonomy.

But even if it did have rights, there are still potential risks. Its handling could be flawed, or it could be left emotionally distressed.

“Emulations may be rights holders, yet have existences not worth experiencing or be unable to express their wishes,” said Sandberg. “And when should we pull the plug? Or would we store it indefinitely?”

Another issue is time-rate rights. Does a human emulation have the right to live in real-time, so that it can interact properly with non-digital society?

The other thing to consider is identity and intellectual property rights. Emulations could lack privacy, and they’d be subject to copying, instant erasure, and editing — with no guarantee of self-contained embodiment. Digital minds could also be copied illegally and bootlegged. Issues may also emerge over the ownership of brain scans.

“We need to do some tricks here,” concluded Sandberg. “We have a chance to get to the future in a moral way.”

Preparing For the Future

Sandberg is totally on the right track here. Foresight is key. We can’t just hope to resolve these issues after the fact. We’re talking about creating moral agents; if their suffering can be averted, then let’s avert it.

But ethics is just a starting point. Laws need to be enacted so that our moral sensibilities can be enforced. And indeed, the time is coming when a piece of software will cease to be an object of inquiry and instead transform into a subject that deserves moral consideration, and by virtue of this, legal protection.

Back in 2010, I gave a presentation on this topic at the H+ Summit at Harvard University. To get the conversation started, I proposed that the following rights be afforded to fully conscious human and human-like emulations:

  • The right to not be shut down against one’s will
  • The right to not be experimented upon
  • The right to have full and unhindered access to one’s own source code
  • The right to not have one’s source code manipulated against one’s will
  • The right to copy (or not copy) oneself
  • The right to privacy (namely the right to conceal one’s own internal mental states)
  • The right of self-determination

Looking back, this list could use some add-ons and refinements. For example, I’d like to include Sandberg’s idea of time-rate rights.

But I still agree with the general principle behind the list. And what’s more, it’s an issue that will undoubtedly carry over to mind uploads and robots running either brain emulations or sophisticated artificial general intelligence programming.

Eventually, we’ll also have to include some negative rights to mitigate certain risks, like transcending uploads or the enslavement of one’s own copies.

The point is to get to the future in a moral way.

[Images: carlos castilla/Shutterstock; Katie Zhuang/Nicolesis Labs/Andrea Danti/Shutterstock]