This week, an open letter was presented at an AI conference in Buenos Aires, Argentina, calling for a ban on autonomous weapons. The letter has been signed by nearly 14,000 prominent thinkers and leading robotics researchers, but not everyone agrees with its premise. Here’s the case against a ban on killer robots, and why it’s misguided.
In his IEEE Spectrum article, “We Should Not Ban ‘Killer Robots,’ and Here’s Why,” Evan Ackerman argues that we’re not going to be able to stop the onset of autonomous armed robots, and that the real question we should be asking is, “Could autonomous armed robots perform better than armed humans in combat, resulting in fewer casualties on both sides?” As Ackerman writes:
What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing. In fact, the most significant assumption that this letter makes is that armed autonomous robots are inherently more likely to cause unintended destruction and death than armed autonomous humans are. This may or may not be the case right now, and either way, I genuinely believe that it won’t be the case in the future, perhaps the very near future. I think that it will be possible for robots to be as good (or better) at identifying hostile enemy combatants as humans, since there are rules that can be followed (called Rules of Engagement, for an example see page 27 of this) to determine whether or not using force is justified. For example, does your target have a weapon? Is that weapon pointed at you? Has the weapon been fired? Have you been hit? These are all things that a robot can determine using any number of sensors that currently exist.
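To make the kind of rule-based check Ackerman describes a little more concrete, the checklist he sketches could, in principle, be written down as explicit, machine-checkable conditions. The following is a minimal, purely illustrative Python sketch; the sensor fields and the decision rule are hypothetical stand-ins, not drawn from any actual Rules of Engagement or weapons system.

```python
# Purely illustrative sketch of the kind of checklist Ackerman describes.
# The sensor inputs and the decision rule are hypothetical, not taken from
# any real Rules of Engagement or fielded system.

from dataclasses import dataclass


@dataclass
class SensorReport:
    """Hypothetical fused readings from a robot's sensors."""
    target_has_weapon: bool
    weapon_pointed_at_us: bool
    weapon_has_been_fired: bool
    we_have_been_hit: bool


def force_justified(report: SensorReport) -> bool:
    """Return True only if this toy checklist is satisfied.

    Real Rules of Engagement are far more nuanced; the point is only that
    such a checklist can be expressed as explicit, auditable conditions.
    """
    return (
        report.target_has_weapon
        and report.weapon_pointed_at_us
        and (report.weapon_has_been_fired or report.we_have_been_hit)
    )


# A target holding a weapon that has been neither aimed nor fired does not
# satisfy the checklist, so force is not justified.
print(force_justified(SensorReport(True, False, False, False)))  # False
```

Whether real-world perception is reliable enough to feed such a checklist is, of course, exactly the point of contention.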
Ackerman likens the introduction of autonomous killing machines to self-driving cars:
Expecting an autonomous car to keep you safe 100 percent of the time is unrealistic. But, if an autonomous car is (say) 5 percent more likely to keep you safe than if you were driving yourself, you’d still be much better off letting it take over. Autonomous cars, by the way, will likely be much safer than that, and it’s entirely possible that autonomous armed robots will be, too. And if autonomous armed robots really do have at least the potential to reduce casualties, aren’t we then ethically obligated to develop them?
In response to the argument that robotic weapons will make it easier to kill, he says “that’s been true ever since someone realized that they could throw a rock at someone else instead of walking up and punching them.” It’s not technology that’s the problem, argues Ackerman, it’s the people who choose to use it, adding that “blaming technology for the decisions that we make involving it is at best counterproductive and at worst nonsensical.” He concludes by saying:
I’m not in favor of robots killing people. If this letter was about that, I’d totally sign it. But that’s not what it’s about; it’s about the potential value of armed autonomous robots, and I believe that this is something that we need to have a reasoned discussion about rather than banning.
Discriminate vs. Indiscriminate Killing
Ackerman obviously brings up some very important points. Yes, there’s something unquestionably disquieting about the prospect of robots killing humans, but why should this be any more disturbing than humans killing humans? Perhaps counterintuitively, robo-soldiers might actually make war safer.
But a case can be made that it’s important to keep humans within the “moral” loop, and that it’s wholly appropriate—and even necessary—for humans to feel remorse after killing people. On the other hand, and as Ackerman points out in his article, humans have been killing other humans remotely, or impersonally, for quite some time.
As for Ackerman’s argument that we shouldn’t “blame” technology, Patrick Lin, the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, disagrees that technology itself can’t be the problem.
“Sometimes, you can’t separate the technology from its use, and this can make a technology unethical,” he told io9. “For instance, nukes are inherently indiscriminate and inhumane, and there’s no morally defensible use of them. It’s not clear that this is the case with killer robots, but it’s possible—I think there needs to be more investigation.”
From a moral perspective, Lin says he’s sympathetic to a ban on killer robots. But like Ackerman, he finds it hard to imagine how such a ban could actually happen.
“Any AI research could be co-opted into the service of war, from autonomous cars to smarter chat-bots,” he says. “It’s a short hop from innocent research to weaponization.”
Beyond Meaningful Human Control
Ackerman, however, fails to address one crucial aspect of the proposed moratorium. As the letter clearly points out, the proposed ban is on offensive autonomous weapons beyond meaningful human control. That’s the difference that makes the difference. Ackerman only considers AI and robots that work within the confines of their ethical programming, and scenarios in which humans remain, practically speaking, in the loop.
But there’s the potential for future AI and robotics systems to work outside these confines. Once our technologies enter the realm of greater-than-human intelligence, humans will be forced to sit on the sidelines and watch (as best they can) as one AI system works to outwit a rival AI system. They’ll work at speeds and levels of complexity beyond human comprehension and control. It’s difficult, if not impossible, to predict how the actions taken by these combating systems will serve human interests, or even those of the individual nations involved.
This is exactly the kind of AI arms race and potential end-game scenario that the signatories of the open letter—myself included—are seeking to avoid.
Read Ackerman’s entire post at IEEE Spectrum.