Image: Ghost in the Shell 2: Innocence

Rapid developments in brain-machine interfacing and neuroprosthetics are revolutionizing the way we treat paralyzed people, but the same technologies could eventually be put to more generalized use—a development that’ll turn many of us into veritable cyborgs. Before we get to that point, however, we’ll need to make sure these neural devices are safe, secure, and as hacker-proof as possible.

In anticipation of our cyborg future, researchers from the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland have published a new Policy Forum paper in Science titled, “Help, hope, and hype: Ethical dimensions of neuroprosthetics.” The intent of the authors is to raise awareness of this new breed of neurotechnologies, and the various ways they can be abused. Importantly, the researchers come up with some ways to mitigate potential problems before they arise.


No doubt, work in neurotech is proceeding apace. Researchers are developing brain-machine interfaces (BMIs) that are enabling quadriplegics to regain use of their hands and fingers, amputees to move prosthetic limbs by simply using their thoughts, and patients with degenerative diseases to spell out messages with their minds. Incredibly, paraplegics wearing robotic exosuits can now kick soccer balls, and monkeys have started to control wheelchairs with their minds. Brain-to-brain communication interfaces (BBIs) are allowing gamers to control the movements of other players and play a game of 20 questions without uttering a word. With each passing breakthrough, we’re learning a little bit more about the brain and how it works. Most importantly, these tools are giving agency and independence back to amputees and paralyzed individuals.

Time to shake hands with the future: Brain-controlled robots like the one above are starting to enter everyday life. (Image: Wyss Center)

But there’s also a dark side to these technologies. As Wyss Center Director John Donoghue points out in the new Policy Forum, serious ethical issues are emerging around this field, and it’s not too early to start thinking about ways in which neuroprosthetics and brain-machine interfaces might be abused.



“Although we still don’t fully understand how the brain works, we are moving closer to being able to reliably decode certain brain signals. We shouldn’t be complacent about what this could mean for society,” said Donoghue in a statement. “We must carefully consider the consequences of living alongside semi-intelligent brain-controlled machines and we should be ready with mechanisms to ensure their safe and ethical use.”

The Wyss Center is concerned that, as these neuro-devices increasingly enter our world, the power and scope of their uses will grow. Currently, BMIs are being used to pick up cups or type words on a screen, but eventually these devices could be used by an emergency worker to fix a dangerous gas leak, or a mother to pick up her crying baby.

A non-invasive electroencephalography (EEG) cap measures brain activity on a study participant. (Image: Wyss Center)

Should something go wrong in these cases—like the gas worker’s semi-autonomous robot turning the wrong crank, or the mother dropping the baby—it’s important to ask where accountability begins and ends, and who’s to blame. Future laws will have to discern whether the manufacturer is responsible (e.g. a bug or glitch in the design) or the user (e.g. deliberate misuse or tampering with the product’s intended design). To mitigate these problems, the authors propose that any semi-autonomous system should include a form of “veto control”—that is, an emergency stop that can be executed by the user to overcome deficiencies in the direct brain-machine interaction. If a prosthetic limb or remote-controlled robot started to do something the user didn’t intend, this kill switch would put an immediate halt to activities.
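In software terms, the veto control the authors describe amounts to a simple interlock that overrides decoded intent. Here’s a minimal sketch in Python; the class and command names are purely illustrative and not drawn from any real BMI system:

```python
# Hypothetical sketch of a "veto control" interlock for a semi-autonomous,
# BMI-driven actuator. All names here are illustrative placeholders.

class VetoControlledActuator:
    def __init__(self):
        self.halted = False
        self.log = []

    def veto(self):
        """Emergency stop: a user-issued veto overrides all further commands."""
        self.halted = True

    def execute(self, command):
        """Carry out a decoded intent, unless the veto has fired."""
        if self.halted:
            self.log.append(("blocked", command))
            return False  # nothing moves once the kill switch is thrown
        self.log.append(("executed", command))
        return True

arm = VetoControlledActuator()
arm.execute("reach")        # normal decoded intent is carried out
arm.veto()                  # user notices a misdecoded intent
arm.execute("turn_crank")   # blocked: the veto halts all activity
```

The key design point is that the veto path bypasses the brain-signal decoder entirely, so a decoding error can never suppress the stop command itself.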

Other areas of concern include security and privacy, and the eventual need to protect any sensitive biological data that’s recorded by these systems. When BMIs are up-and-running, they collect a trove of neurological data, which is transmitted to a computer. This naturally poses privacy concerns, and the Wyss Center researchers are worried that this information could be stolen and misused.


“The protection of sensitive neuronal data from people with complete paralysis who use a BMI as their only means of communication, is particularly important,” said Niels Birbaumer, Senior Research Fellow at the Wyss Center. “Successful calibration of their BMI depends on brain responses to personal questions provided by the family (for example, “Your daughter’s name is Emily?”). Strict data protection must be applied to all people involved, this includes protecting the personal information asked in questions as well as the protection of neuronal data to ensure the device functions correctly.”

Frighteningly, the Wyss researchers also worry about someone hacking into a brain-connected device—an act that could literally threaten the life of the user. Known as “brainjacking,” it would involve the malicious manipulation of brain implants. Hackers could go in and control a person’s movements.


Possible solutions to these problems include data encryption, information hiding, network security, and open communication between manufacturers and users. It’ll be a challenge to implement many of these proposed measures, however, due to the lack of consistent standards across countries. But as the Wyss researchers point out, now’s an excellent time to start thinking about ways to improve coordination and industry standards.
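One concrete piece of that toolbox—making tampering with recorded neural data detectable in transit—can be sketched with Python’s standard library. This is a simplified illustration, not a production protocol: the hard-coded key and JSON payload stand in for real key exchange and a real device data format:

```python
import hashlib
import hmac
import json

# Shared secret between a device's base station and the clinic server.
# In practice this would come from a proper key-exchange protocol,
# never a hard-coded constant like this.
SECRET_KEY = b"demo-key-not-for-real-use"

def sign_packet(samples):
    """Serialize a batch of neural samples and attach an HMAC tag."""
    payload = json.dumps(samples).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_packet(payload, tag):
    """Reject any packet whose contents were altered in transit."""
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = sign_packet([0.12, -0.4, 0.88])
assert verify_packet(payload, tag)            # untouched packet passes
tampered = payload.replace(b"0.88", b"9.99")  # attacker edits a sample
assert not verify_packet(tampered, tag)       # forgery is detected
```

Integrity checking like this is only one layer; confidentiality would additionally require encrypting the payload before transmission.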

“Some of the concerns the authors raise may someday amount to real problems, and so it is prudent to think about them a bit ahead of time,” said Adam Keiper, a Fellow at the Ethics and Public Policy Center and the editor of The New Atlantis, in an interview with Gizmodo. “But they aren’t major concerns now.”

Keiper, who wasn’t involved in the Policy Forum paper, is skeptical that anyone would want to hack into the BMI of a profoundly disabled person, or a brain-machine interface used for neurofeedback “brain-training” (i.e., programs that use non-invasive brain scanners, like EEGs, to train people to manage behaviors, reduce stress, meditate, etc). “What would a hacker get out of it?” he asked. “So the concerns about security and privacy may matter in the future, but they do not matter yet.”



He adds that the concerns about BMIs and semi-autonomous robots are an interesting variation on questions currently being raised about robots—questions that “very smart lawyers will likely make fortunes sorting out,” he said. As to the proposed prescriptions, Keiper said most make sense, but in his view, a few are downright silly. “The authors say we should ‘encourage improved health literacy and neuro-literacy in the broader society’,” he said. “Give me a break.” Keiper is skeptical that the public will take any interest in these rather heady and arcane areas of inquiry.

But as Keiper admits, it’s often difficult to know when the time is right to start publicly airing ethical and policy concerns about emerging technologies. “There is always a risk of speaking out prematurely—as happened with the ‘nanoethicists’ of a decade ago, who, thinking that advanced nanotechnology would arrive imminently, tried to build an academic discipline out of their concerns,” he said. “In this case, I think the authors should be applauded for raising their concerns in a non-alarmist, relatively modest way.”


Indeed, the Wyss researchers are bringing up an important issue. Eventually, many of these technologies will make their way into the mainstream, serving as enabling devices for those who are not disabled. Noninvasive BMIs could be used to create a kind of telekinetic connection to our environment, where we use our thoughts to turn on the lights or change the channels on the television. Eventually, these same technologies might even result in technologically-enabled telepathy. As the Wyss researchers aptly point out, the potential for abuse is nontrivial—and we’d best start thinking about it now.