The Future Is Here
We may earn a commission from links on this page.

Palantir Among First Tech Firms to Promise White House They Won’t Use AI for Evil

Adobe, Nvidia, Salesforce, Stability, and more also claimed they will steward ‘trustworthy’ AI, if you can ever trust the companies themselves.

President Joe Biden has focused on non-binding agreements with big tech over the use of AI, but the White House isn’t mentioning the existing military use of AI facilitated by some of these partner companies.
Photo: Chip Somodevilla (Getty Images)

The Biden administration is trying to take a paternalistic role in stewarding major tech firms' development of AI. It's not exactly leading from the front but is instead placing a gentle, reaffirming hand on the shoulders of big tech, telling companies to be cautious and open about how they lay out the future of the transformative tech.

Some of the biggest tech firms have agreed to the White House’s voluntary commitment on ethical AI, including some companies that are already using AI to help militaries kill more effectively and to monitor citizens at home.


On Tuesday, the White House proclaimed that eight more big tech companies have accepted President Joe Biden's guiding hand. Under these commitments, companies will share safety and safeguarding information with other AI makers, disclose their AI's capabilities and limitations to the public, and use AI to "help address society's greatest challenges." Among the companies agreeing to the White House's latest cooperative agreement is the defense contractor Palantir, a closed-door data analytics company known for its connections with agencies like the CIA and FBI as well as governments and militaries around the world.

The other seven companies to agree to the voluntary commitment include major product companies like Adobe, IBM, Nvidia, and Salesforce. In addition, several AI firms such as Cohere, Scale AI, and Stability have joined the likes of Microsoft, OpenAI, and Google in facilitating third-party testing and watermarking for their AI systems.


These vague agreements are relatively shallow, and they don't make any mention of AI companies sharing what's in their generative AI training data. The increasingly opaque AI models developed by many of the compliant companies are a sticking point for AI ethicists. The White House said in its press release that the Biden administration is developing an executive order on AI to "protect Americans' rights and safety," but the release offered few details on what that entails.

Despite the executive branch's lofty goals for safe, clean AI, Palantir is already one of the most-cited big tech firms in questions of tech ethics, or really the lack thereof. The data analytics company took the lead in creating the data systems used by U.S. Immigration and Customs Enforcement, which has only helped the agency spy on people in the U.S. and honeytrap undocumented immigrants. And that's just the tip of the iceberg, as critics have called out Palantir for fueling racist predictive policing software.

Palantir CTO Shyam Sankar previously told a Senate Armed Services Committee hearing that any kind of pause on AI development would mean China could get the better of the U.S. in the race for technological supremacy. He was adamant that the U.S. devote even more of its defense budget to "capabilities that will terrify our adversaries."

Imagine the use of AI for information warfare, as Palantir CEO Alex Karp harped on during a February summit on AI-military tech. The company is already providing its data analytics software for battlefield targeting to the Ukrainian military, Karp reportedly said. Still, the CEO did mention that there needs to be "architecture that allows transparency on the data sources," which should be "mandated by law." Of course, Palantir hasn't exactly been open about its own data sources for any of its many military contracts.


In an email statement to Gizmodo, Palantir USG president Akash Jain said, "Today, Palantir, along with other leading AI companies, made a set of voluntary commitments to advance effective and meaningful AI governance, which is essential for open competition and maintaining US leadership in innovation and technology." The company did not respond to Gizmodo's questions regarding its ongoing military and government AI contracts.

That's not to say other big tech firms, including Google and Microsoft, haven't had their own military contractor dealings, such as the latter's awkward military-focused HoloLens project. Google had once been the lead on the military contract dubbed Project Maven, a U.S. Department of Defense program attempting to use AI to analyze people and potential targets in drone footage without the need for human input. Google dropped that project after employee protests back in 2018, but in 2019 reports showed Palantir had picked up where Google left off.


So far, the Biden administration has focused on non-binding recommendations and executive orders to try and police encroaching AI proliferation. White House Chief of Staff Jeff Zients told Reuters the administration is "pulling every lever we have" to manage the risks of AI. Still, we're nowhere close to seeing real AI regulation from Congress, and given the hand AI developers want to play in crafting any new law, there are few signs we'll see real constraints placed on the development of privacy-demolishing and military-focused AI.

Update 09/12/23 at 4:06 p.m. ET: This post was updated to include a statement from Palantir USG President Akash Jain.