
Trump Says He Discussed ‘Standard’ AI Safety Guardrails With Xi. There’s No Such Thing

Maybe there should be.

Donald Trump told reporters on Friday that he and Chinese President Xi Jinping talked about AI safety during their two-day summit in Beijing, according to Bloomberg. What safeguards, exactly? “Standard guardrails that we talk about all the time,” he said.

Great! Except… there are no standard guardrails.

In the United States, AI regulation is a mess. There is a patchwork of state-level laws that take steps to restrict some of the more harmful and dangerous uses of AI. California has been a leader in this space and passed a major AI safety bill last year, though one can quibble with just how effective the legislation is. But at the federal level, very little meaningful action has been taken.

Last year, Trump revealed his “AI Action Plan,” which offered up 90 policy recommendations that amount to a whole lot of nothing. It could largely be boiled down to “we should do AI.” To the extent there were any guardrails involved, Trump called on AI models to be “unbiased,” which, in his mind, means that they don’t say anything he doesn’t like. To that end, his most recent idea for AI regulation includes letting his goons review AI models before they are released to the public to make sure they comply with the administration’s desires.

Meanwhile, the administration has mostly tried to quash attempts to restrict AI in any way. It has repeatedly called for a “light touch” approach to regulating the industry and has repeatedly tried to stop states from passing their own laws designed to keep AI models in line. The National Institute of Standards and Technology has released a broad framework for considering risks associated with AI, but it’s entirely voluntary.

China has at least put some thought into the matter, publishing multiple AI frameworks, including its most recent one that considers risks the technology poses to labor and the potential for misuse in weapons. It also developed some risk mitigation measures that are meant to help regulators in specific sectors consider dangers and develop guides for averting negative outcomes. But even those are currently optional and not binding governance.

The meetings between officials from the US and China might produce some more specifics that would approach “standard guardrails,” as Trump describes them. But thus far, it’s been a lot more broad, generic talk than anything specific. Treasury Secretary Scott Bessent has reportedly been involved in those talks, per Reuters. “What we don’t want to do is stifle innovation. So our responsibility is to come up with the highest performance calculus where we can get the most innovation and the highest level of safety,” he said, which is meaningless.

“We want unlimited growth that is also safe.” Yeah, great idea. Our best minds are on it, clearly. Get ready for the AI Should Do All of the Good Stuff and None of the Bad Stuff Act; it’ll solve everything.
