Dallas police used a “bomb robot” today to kill a suspected gunman involved in the murder of five police officers and the wounding of seven others. The decision to kill the suspect using a robot armed with an explosive was made after an hours-long standoff. Dallas police chief David Brown said that after negotiations “broke down,” the suspect and police officers exchanged gunfire.
“Other options would have exposed our officers to grave danger,” Brown said.
The decision to use a bomb robot is raising concerns about due process and how police adopt new military technology. Although police agencies have long used robots, typically for non-lethal purposes, it is new for officers to use a robot to kill a suspect. As robots become more sophisticated and potentially automated, using them as weapons will spark tricky legal and ethical issues for law enforcement.
“As a legal matter, the choice of weapon in a decision to use lethal force does not change the constitutional calculus, which hinges on whether an individual poses an imminent threat to others, and whether the use of lethal force is reasonable under the circumstances,” Jay Stanley, a senior policy analyst at the ACLU, said in an email to Gizmodo.
But Stanley said the easy and relatively safe use of ground robots that can deploy deadly force could mean they could be overused: “Remote uses of force raise policy issues that should be carefully considered...and should remain confined to extraordinary situations,” he said.
Elizabeth Joh, a law professor at the University of California, Davis, agreed, saying that just because something is legal doesn’t mean it should be adopted.
“Should we send in a robot that has one purpose—to inflict death or serious bodily harm—if a non-lethal alternative is available? Perhaps that wasn’t possible in Dallas, but that is a question police departments that will adopt robots in the future should address.”