A tech industry leader and military AI experts testifying before the House Armed Services Committee on Tuesday said the US Department of Defense urgently needs to streamline its lagging data collection practices if it wants to maintain its lead in the global AI arms race. Scale AI CEO Alexandr Wang, who was among three witnesses testifying Tuesday, told the lawmakers he believed “data is the ammunition in an AI war.”
The hearing explored the top barriers believed to be preventing the DoD from rapidly adopting and deploying AI on the battlefield amid a surge in private-sector AI hype. Wang, whose firm was awarded a $250 million contract by the DoD last year, dinged what he views as the military’s inability to properly use the massive torrents of data it generates from its machines and weapons systems to train its own powerful military AI models.
“DoD lacks the proper data retention and management systems to operationalize it,” Wang testified. “Within the DoD, much of our key AI asset—our data—is being wasted every day.”
The tech executive called on the military to create a centralized data repository where it can effectively store and use what he estimates is around 22 terabytes of military data generated every day. Failure to do so, he warned, risks ceding ground to China during the “dawn of a new era of warfare.”
“The race to global AI leadership is well underway,” Wang said.
Wang called on the DoD to increase its share of spending on AI efforts to keep pace with China, which he said spends roughly ten times more of its overall military budget on AI advancement. The US military’s practice of spending nearly unfathomable sums of cash on tanks and planes, some of which don’t even work, risks leaving the US “stuck in an innovator’s dilemma,” Wang argued, while its adversaries have a clear runway to invest in future, advanced AI-enabled weapons systems.
“While we are making sense of this technology and conceptualizing a framework for how to use it, Chinese leaders are actively working to use AI to tighten their grip domestically and expand their reach globally,” Wang said. “AI is China’s Apollo project.”
The Department of Defense did not immediately respond to Gizmodo’s request for comment.
Klon Kitchen, a nonresident senior fellow at the American Enterprise Institute, echoed that sentiment and said he believes China currently has the edge over the US in terms of raw data collection due in no small part to the state’s pervasive use of facial recognition and other biometric surveillance technology. In this case, though, more data isn’t necessarily better. Kitchen noted the strong-handed methods reportedly used by the Chinese government to obtain that data make it subject to bias, which could lead to ill-trained military AI models.
Kitchen contrasted that approach with US tech firms’ recent practice of scraping the open internet for public data to train large language models like OpenAI’s ChatGPT, an approach he suggested offers a better method for gathering large swaths of data. At the same time, critics including authors and artists have pushed back against these same systems for using their works without obtaining permission or consent.
“The Chinese government’s political fragility, strict content controls, and oppression of its own people will compromise much of the data it collects, diluting its utility and ultimately limiting the development of Chinese AI,” Kitchen said.
Nowhere during the hearing was there any meaningful discussion of whether the military should continue its investment in AI-enabled weapons systems in the first place.
The Pentagon has invested in AI for decades and has recently said it expects autonomous and semi-autonomous weapons systems will play an “increasing role” in wars of the future. Human rights groups like the Stop Killer Robots Campaign and Human Rights Watch have spent years urging governments around the world to put in place restrictions or sign formal treaties banning these systems or limiting their use over fears they could pose an existential risk to humanity.
Nations, including the US, don’t seem all that interested in committing to those de-escalation efforts. The Israel Defense Forces are reportedly already using AI systems to select targets for air strikes. A fully autonomous drone operating without human oversight, meanwhile, has already been deployed on a battlefield during a 2020 military skirmish in Libya, according to a UN report.
The Pentagon, for its part, has released its own AI ethics principles requiring all its tools to be “responsible, equitable, traceable, reliable, and governable.” During the hearing Tuesday, DataRobot Global AI Ethicist Dr. Haniyeh Mahmoudian commended the Pentagon’s guiding principles but said it remains to be seen how well they are implemented in practice.
“In recent years insufficient scrutiny of AI and AI systems, coupled with a limited comprehension of AI’s potential adverse effects have led to numerous instances where AI ended up harming individuals and groups it was designed to help,” Mahmoudian said. “Consideration of AI ethics have often been relegated to secondary thought when it comes to building and deploying AI systems.”