
The Crypto Bros Want to Get Their Hands on Anthropic’s ‘Super Dangerous’ Model

Before it cracks cryptography entirely.

The crypto bros might be able to cut the line to get into any club in Miami, but they’re stuck trying to bribe the bouncer at Club Mythos. Anthropic’s new model, Claude Mythos, is currently only available in a limited capacity due to concerns that it may be a little too capable of exploiting cybersecurity vulnerabilities. Anthropic has given access to only a select few partners, and the cryptocurrency community would really like to get on the list.

According to The Information, a number of crypto exchanges, including Coinbase, have been communicating with Anthropic in hopes of getting their hands on Mythos. The exchange isn’t alone in hoping to tap AI tools to bolster security. Binance also reportedly has been using AI models (including Claude Opus) to probe its systems for potential vulnerabilities before a bad actor finds them. Crypto custodian firm Fireblocks told The Information it also uses Anthropic’s publicly available model for pentesting, and claims that the model has spotted issues that human testers missed.

But to date, it seems none of the crypto companies have managed to get the Mythos treatment. Anthropic has said the model is capable of spotting cybersecurity issues that evade the eyes of “all but the most skilled humans,” and claimed to have used it to catch security flaws that had gone undetected in legacy systems for nearly three decades (though, for what it’s worth, some researchers were able to replicate the flaw detection with less powerful models).

Crypto companies have a very obvious reason for wanting in on Mythos’ offerings: they sit on billions of dollars worth of digital assets and are surprisingly vulnerable to attack. Coinbase has been hit with several high-profile cybersecurity incidents over the years, including one last year that reportedly exposed sensitive customer data. Anthropic is currently holding back Mythos because it believes the model could be used to exploit major security vulnerabilities in platforms and online infrastructure at scale. That makes crypto a pretty logical target for anyone who wants to use an AI model for malicious ends.

If the crypto bros keep getting bounced by Anthropic, maybe they can pull up to Club OpenAI. Per Bloomberg, the company is doing its own limited release of a new cybersecurity tool, which it definitely did not hastily announce to avoid getting left behind by its rival’s hype train. Odds are the cover charge is much cheaper over there.
