A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”
The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work in the Office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets on their own, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”
As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with autonomous weapons, the damage that could be inflicted before a human is able to intervene is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.
Scharre paints the potential consequences in grim terms:
In the most extreme case, an autonomous weapon could continue engaging inappropriate targets until it exhausts its magazine, potentially over a wide area. If the failure mode is replicated in other autonomous weapons of the same type, a military could face the disturbing prospect of large numbers of autonomous weapons failing simultaneously, with potentially catastrophic consequences.
From an operational standpoint, autonomous weapons pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces. This could be because of hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors. Moreover, as the complexity of the system increases, it becomes increasingly difficult to verify the system’s behavior under all possible conditions; the number of potential interactions within the system and with its environment is simply too large.
That sounds like the makings of a horrific dystopian sci-fi movie. Scharre believes some of these risks can be mitigated and reduced, but the risk of accidents “never can be entirely eliminated.”
We’re still many years away from seeing fully autonomous systems deployed in the field, but it’s not too early to start thinking about the potential risks—and benefits. It has been argued, for example, that autonomous systems could reduce casualties and suffering on the battlefield. That may very well be the case, but as Scharre and his team at the Center for a New American Security point out, the risks are serious, indeed.
Email the author at email@example.com and follow him @dvorsky.