The National Security Commission on Artificial Intelligence, a congressional advisory panel helmed by former Google CEO Eric Schmidt, has mulled whether the U.S. should deploy artificially intelligent autonomous weapons and, after taking into account the numerous reasons it would be a terrible idea, decided that at least toying around with the idea is a “moral imperative.”
Per Reuters, the panel, chaired by Schmidt and vice-chaired by former Deputy Secretary of Defense Robert Work, concluded a two-day meeting by opposing the U.S. joining an international coalition of at least 30 countries that have urged a treaty banning the development or use of autonomous weapons. Instead, the panel advised Congress to keep its options open.
Critics have long pointed to the inherent dangers of AI-controlled weaponry: glitchy or trigger-happy systems could kick off skirmishes that escalate into bigger conflicts; such systems could be acquired by terrorists or subverted against their masters by hackers; and robot tanks and drones could decide to massacre helpless civilians. The Campaign to Stop Killer Robots lists dozens of international organizations as members and warns that allowing machines to decide “who lives and dies, without further human intervention” would “cross a moral threshold.”
The congressional panel instead concluded that killer robots potentially being really, really good at killing is actually a reason not to rule them out: the logic goes that perhaps autonomous weapons could be much more discriminating in their target selection and thus somehow kill fewer people. The panel suggested that an anti-proliferation treaty may also be more realistic than an outright ban.
Work said autonomous weapons are expected to make fewer mistakes than humans do in battle, reducing casualties and skirmishes caused by target misidentification.
“It is a moral imperative to at least pursue this hypothesis,” he said.
The only thing the panel ruled out entirely is giving AI any role in the decision to launch a nuclear weapon (which could obviously usher in an apocalypse).
The panel’s recommendations include integrating AI into intelligence gathering, investing $32 billion of federal money into AI research, and creating a special unit focused on digital issues akin to the Army’s Medical Corps, according to Reuters. Its recommendations aren’t binding, and Congress is under no obligation to act on them when the panel’s report is submitted in March.
National militaries have moved ahead with building autonomous weaponry despite international pressure not to. In November 2020, UK defense chief General Nick Carter estimated that the British military could have up to 30,000 robots working alongside up to 90,000 troops by 2030, though Carter specified that humans will retain final say over whether robots open fire.
The U.S. military has been testing autonomous tanks, but it’s similarly assured the public that it will abide by “ethical standards” that require fleshy operators be able to “exercise appropriate levels of human judgment over the use of force.” In February 2020, the Defense Department released guiding principles for autonomous systems it had developed in conjunction with “leading AI experts,” including that personnel “exercise appropriate levels of judgment and care” while developing and deploying AI systems, attempt to avoid unintended bias in AI systems, be transparent in how the systems are developed, and ensure any autonomous systems are reliable and governable. The U.S. military has been particularly wary of falling behind in any potential arms race in autonomous weaponry, which it says is being pursued by Russia and China.
The “focus on the need to compete with similar investments made by China and Russia … only serves to encourage arms races,” the Campaign to Stop Killer Robots’ Mary Wareham told Reuters.