
Anthropic Can’t Cover Up Its Claude Code Leak Fast Enough

The blueprints are public now.


Most companies are extremely protective of their planned product releases, using internal code names and requiring journalists to agree to embargoes before revealing details. Anthropic has inadvertently chosen a new strategy: have all of your plans leak due to basic security missteps with zero control over when and how they’re made public.

On Tuesday, source code from Claude Code, Anthropic’s popular AI coding assistant, was discovered in a publicly accessible database. Alongside details on how Claude Code handles API requests and tokens, the code contained references to features Anthropic has yet to announce. Those included a Tamagotchi-style virtual pet, as Gizmodo reported, and an always-on version of the AI agent, according to a report from The Information.

Named Kairos, the apparently planned persistent agent would run in the background 24/7, acting autonomously on behalf of the user—basically making Claude into something closer to the ever-popular, open-source OpenClaw AI agent. Beyond acting proactively for a user, Kairos reportedly has a feature called “autoDream” that consolidates and updates its internal memories overnight.

The reveal has the AI-obsessed online crowd pretty excited, but Anthropic seems significantly less thrilled. According to a report from the Wall Street Journal, the AI firm’s offices are in an uproar as staff scramble to cover up what the leak revealed. The company has reportedly used copyright takedown requests to remove more than 8,000 copies of the Claude Code source code, which had been published and forked ad infinitum on GitHub.

Anthropic is also reportedly working quickly to plug security holes. While the company insisted that the recent leak was the result of human error and not a breach of any kind, the Journal pointed out that the source code gives hackers and other malicious actors a new level of access with which to probe and prod for potential exploits.

There’s also the fact that the AI space is a copycat business right now, and the leak gives Anthropic’s competitors a much clearer look at how Claude Code operates, making it easier to copy some of its functionality without having to reverse engineer the underlying code. Anthropic’s models and training weights remain its own, so its secret sauce is still under lock and key, but with its blueprints now public, competitors could try to beat it to the punch.

It’s been suggested that the slew of leaks out of Anthropic lately—last month, the company’s plans for a new model called Mythos were discovered in a publicly accessible database—could be in some way strategic. Anthropic is reportedly eyeing an initial public offering later this year, and revealing what it has in the pipeline might generate more interest from prospective investors. But the sense of panic that seems to be coming from Anthropic in the wake of this latest leak suggests the company would really rather this information not be public. At least, not yet.

