When delegates from 50 countries met in the Netherlands this week to discuss the future of military artificial intelligence, human rights activists and non-proliferation experts saw an opportunity. For years, rights groups have urged nations to limit the development of AI weapons and sign a legally binding treaty governing their use, fearing that an unchecked buildup could mirror last century’s nuclear arms race. Instead, the results of what could have been a historic summit were only “feeble” window dressing, the rights groups said.
After two days of in-depth talks, panels, and presentations attended by around 2,500 AI experts and industry leaders, the REAIM (get it?) summit ended in a non-legally binding “call to action” on the responsible development, deployment, and use of military AI. The attendees also agreed to establish a “Global Commission on AI.” That might sound lofty, but in reality, those initiatives are limited to “raising awareness” about how the technology can be developed responsibly. Meaningful talk of actually reducing or limiting AI weapons was essentially off the table.
The Stop Killer Robots campaign, one of the leading rights groups advocating against AI in warfare, told Gizmodo the call to action offered a “vague and incorrect vision” of military use of AI without providing any clarity on rules or limitations. Safe Ground, an Australian rights group, called the entire summit a “missed opportunity.”
At the same time, the United States, which is both the world leader in AI weapons systems and historically one of the leading voices against an international AI weapons treaty, revealed a 12-point political declaration outlining its “responsible” autonomous systems strategy. The declaration, which comes just weeks after a controversial new Department of Defense directive on AI, says all AI systems should adhere to international human rights laws and operate with “appropriate levels of human judgment.” Though State Department officials triumphantly advertised the declaration as a pivotal step forward, rights groups fighting to limit AI weapons systems said it’s a complete disaster.
“Now is not the time for countries to tinker with flawed political declarations,” Human Rights Watch Arms Advocacy Director Mary Wareham said in a tweet. Stop Killer Robots Government Relations Manager Ousman Noor went further and called the declaration “the most backwards position seen from any state, in years.”
“This Declaration falls drastically short of the international framework that the majority of states within UN discussions have called for,” Stop Killer Robots said in a statement. “It does not see the need for legally binding rules, and instead permits the development and use of Autonomous Weapons Systems, absent lines of acceptability.”
For AI military skeptics, the first-of-its-kind summit was actually seen as a step in the wrong direction. At a conference last year, a majority of the 125 states represented in the U.N.’s Convention on Certain Conventional Weapons expressed interest in new laws essentially banning autonomous weapons development. UN Secretary-General António Guterres released a statement around the same time saying such systems should be prohibited under international law. Those efforts failed largely due to the U.S., China, and Russia, which all favor the development of these weapons. The views of those three countries were previously outliers at the U.N. Now, under the new framework, it appears a foregone conclusion that autonomous weapons systems are necessary and unavoidable.
One notable country not represented among the 50 or so nations at the REAIM summit? Russia, due to its ongoing war with Ukraine. Present or not, the Russia-Ukraine war was discussed throughout the summit as a potential testing ground for new, fully autonomous military technology. Ukraine already reportedly uses semi-autonomous attack drones and Clearview AI’s facial recognition service to identify dead Russian troops.
Here are some of the top highlights from the summit.