
The Rest of Big Tech Piles in to Take the Pentagon Deal That Anthropic Wouldn’t

Four new companies have agreed to let the U.S. military use their AI tech for classified work.

While Anthropic took a stand earlier this year to limit how its AI tech is used in classified military settings, it seems like the rest of Silicon Valley isn’t too concerned. 

Four more tech companies have struck deals with the U.S. Department of Defense (DoD) to deploy their advanced AI capabilities on classified military networks for “lawful operational use.”

Microsoft, Nvidia, Amazon Web Services, and startup Reflection AI were named as the latest companies to sign agreements with the Pentagon, according to a DoD press release. They join SpaceX, OpenAI, and Google, bringing the total number of AI companies participating in classified military work to seven.

“These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters’ ability to maintain decision superiority across all domains of warfare,” the press release reads. 

The press release added that integrating advanced AI systems into classified networks will “streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments.”

“For more than a decade, AWS has been committed to supporting our nation’s military and ensuring that our warfighters and defense partners have access to the best technology at the best value,” AWS spokesperson Tim Barrett told Gizmodo in an emailed statement. “We look forward to continuing to support the Department of War’s modernization efforts, building AI solutions that help them accomplish their critical missions.”

Microsoft, Nvidia, and Reflection AI did not immediately respond to requests for comment from Gizmodo.

The deals come as concerns grow around the use of AI for surveillance and military applications.

Earlier this year, Anthropic, which at the time was the only major AI company working with the Pentagon on classified systems, reportedly hit a wall in negotiations with the DoD after officials pushed for language allowing Anthropic’s technology to be used for “any lawful purpose.” The biggest sticking points involved potential uses tied to domestic surveillance and autonomous weapons systems. In both situations, there are arguments that AI could already be legally used for those purposes. There’s also the fact that laws and courts’ interpretations of those laws change all the time, and the U.S. has very little legislation that was written with AI in mind.

After those talks reportedly fell apart, the Trump administration designated the company as a supply chain risk. Anthropic later filed two lawsuits against the Defense Department in response. Since then, however, President Donald Trump has said his administration has had “some very good talks” with Anthropic and suggested a future agreement restoring the company’s access to Pentagon work could still be “possible.”

Anthropic’s most advanced AI model, Mythos, has further complicated matters. The model, which has been released to only a select group of organizations, is reportedly already being tested by the National Security Agency (NSA) to help identify cybersecurity vulnerabilities in widely used software, including Microsoft products.

Still, just yesterday, during a Senate Armed Services Committee hearing, Defense Secretary Pete Hegseth said Anthropic was being run by an “ideological lunatic who shouldn’t have sole decision-making over what we do.” When asked if “there will always be a human in the loop,” Hegseth avoided a direct answer, saying only, “We follow the law and humans make decisions.” He added that AI is not currently “making lethal decisions.”

The controversy has left some tech companies trying to straddle the fence.

Google struck its agreement earlier this week despite more than 600 employees, including directors and vice presidents, signing a letter urging CEO Sundar Pichai not to allow Google’s AI models to be used in classified military settings.

Meanwhile, in the blog post announcing its deal, OpenAI said it maintained control over its “safety stack” and prohibited the use of its AI for mass domestic surveillance or directing lethal autonomous weapons systems.

The Information reports that Google’s agreement includes similar language, but also states that it “does not confer any right to control or veto lawful Government operational decision-making.”
