Imagine for a minute that you survive a terrible accident and lose the use of your right arm. You receive a brain implant that interprets your brain’s neural activity and reroutes those commands to a robotic arm. Then one day, someone hacks that chip and sends malicious commands to the robotic arm. It’s a biological invasion of privacy, one in which you are suddenly no longer in control.
A future in which we can simply download karate skills a la The Matrix or use computers to restore function to damaged limbs may sound far off, but it inches closer to the present with each passing day. Early research has had success using brain-computer interfaces (BCIs) to move prosthetic limbs and treat mental illness. DARPA is exploring how to use the technology to help soldiers learn faster. Companies like Elon Musk’s Neuralink want to use it to read your mind. Already, researchers can interpret basic information about what a person is thinking simply by reading fMRI scans of their brain activity.
As incredible as the potential of these technologies is, they also present serious ethical conundrums that could one day compromise our privacy, identity, agency, and equality. In an essay published Thursday in Nature, a group of 27 neuroscientists, neurotechnologists, clinicians, ethicists, and machine-intelligence engineers spell out their concerns.
“We are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals could communicate with others simply by thinking; and where powerful computational systems linked directly to people’s brains aid their interactions with the world such that their mental and physical abilities are greatly enhanced,” the researchers write.
This, they claim, will mean remarkable power to change the human experience for the better. But such technology may also come with tradeoffs that are hard to swallow.
“The technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people,” they write. “And it could profoundly alter some core human characteristics: private mental life, individual agency and an understanding of individuals as entities bound by their bodies.”
The aim of the essay is to catalyze the development of stronger ethics guidelines to govern technologies that interact with the human brain. The essay focuses on four areas of concern:
- Privacy: “Algorithms that are used to target advertising, calculate insurance premiums or match potential partners will be considerably more powerful if they draw on neural information — for instance, activity patterns from neurons associated with certain states of attention,” the researchers write. “And neural devices connected to the Internet open up the possibility of individuals or organizations (hackers, corporations or government agencies) tracking or even manipulating an individual’s mental experience.” The default, they argue, should be that neural data is not shared: users would have to explicitly opt in, rather than having to opt out, as on platforms like Facebook. Technologies like blockchain could also help protect user privacy.
- Agency and identity: In some cases, people who have received brain chip implants to treat mental health problems and Parkinson’s disease symptoms have reported feeling an altered sense of identity. “People could end up behaving in ways that they struggle to claim as their own, if machine learning and brain-interfacing devices enable faster translation between an intention and an action, perhaps by using an ‘auto-complete’ or ‘auto-correct’ function,” the researchers write. “If people can control devices through their thoughts across great distances, or if several brains are wired to work collaboratively, our understanding of who we are and where we are acting will be disrupted.” In light of this, they argue, treaties like the 1948 Universal Declaration of Human Rights need to include clauses to protect identity and enforce education about the potential cognitive and emotional effects of neurotechnologies.
- Augmentation: “The pressure to adopt enhancing neurotechnologies, such as those that allow people to radically expand their endurance or sensory or mental capacities, is likely to change societal norms, raise issues of equitable access and generate new forms of discrimination,” the essay reads. Like all new technologies, a disparity of access could lead to an even wider chasm between those who can access it and those who cannot.
- Bias: We often view algorithms as impartial judges devoid of human bias. But algorithms are created by people, and that means they sometimes inherit our biases, too. To wit: last year, a ProPublica investigation found that an algorithm used by US law-enforcement agencies wrongly predicted that black defendants were more likely to reoffend than white defendants with similar records. “Such biases could become embedded in neural devices,” the researchers write. “We advocate that countermeasures to combat bias become the norm for machine learning.”
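One basic countermeasure of the kind the researchers call for is simply auditing a model’s error rates across groups before deployment. The sketch below is a minimal, hypothetical illustration (the predictions, outcomes, and threshold are invented, not drawn from any real system): it compares false-positive rates between two groups, the kind of disparity ProPublica reported.

```python
# Minimal bias audit: compare false-positive rates across groups.
# All data here is invented for illustration; a real audit would use a
# model's actual predictions and ground-truth outcomes.

def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend (outcome 0) but were
    nonetheless predicted to reoffend (prediction 1)."""
    preds_for_negatives = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not preds_for_negatives:
        return 0.0
    return sum(preds_for_negatives) / len(preds_for_negatives)

def audit(records, max_gap=0.1):
    """Compute per-group FPRs and flag the model if they differ
    by more than max_gap (an arbitrary fairness threshold)."""
    rates = {group: false_positive_rate(preds, outs)
             for group, preds, outs in records}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Hypothetical data: 1 = "predicted/did reoffend", 0 = "did not".
records = [
    ("group_a", [1, 1, 0, 1, 0, 1], [0, 1, 0, 0, 0, 1]),
    ("group_b", [0, 1, 0, 0, 0, 1], [0, 1, 0, 0, 0, 1]),
]
rates, gap, passes = audit(records)
print(rates, gap, passes)  # group_a's FPR is 0.5, group_b's is 0.0
```

Here the audit fails: the model makes false accusations against group_a at a far higher rate, even though both groups have identical real outcomes. Equalizing such error rates across groups is one of several competing fairness criteria; the point of the sketch is only that measuring the disparity is a necessary first step.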
In other technologies, we have already begun to see the privacy problems of the digital world creep into our bodies.
A few years ago, in a move that at the time seemed rooted in incredible paranoia, former Vice President Dick Cheney had the wireless functionality of his pacemaker disabled, fearing a hack. It turned out he was prescient rather than paranoid. This year, a report found that pacemaker software contains thousands of vulnerabilities. Last year, Johnson & Johnson warned diabetic patients about a defect in one of its insulin pumps that could, in theory, also allow an attack.
Hacking aside, even the biological data we voluntarily share can have troubling, unforeseen consequences. In February, data from a man’s pacemaker helped put him in prison for arson. Data from Fitbits has similarly been used in court to support personal injury claims and to undermine a woman’s rape claim.
One 2017 study was able to detect early signs of the cognitive impairment associated with Alzheimer’s disease using nothing more than movement data from participants’ smartphone activity monitors. Imagine what a direct line into the brain might reveal.
There are a lot of things that need to happen before neurotechnologies are ready for the mainstream. For one, the most effective brain-computer interfaces currently require brain surgery. But companies like Facebook and OpenWater are working on non-invasive, consumer-friendly versions of these technologies. And while they might not get there within the next few years (as both companies have proposed), they probably will get there eventually.
“The possible clinical and societal benefits of neurotechnologies are vast,” the essay concludes. “To reap them, we must guide their development in a way that respects, protects and enables what is best in humanity.”