Google Says It Found Evidence of Hackers Using AI to Discover a Zero-Day Vulnerability

The tech giant says this is the first time it has identified such a case.

As AI models continue to get more powerful, it’s not too surprising that some people are trying to use them for crime.

The Google Threat Intelligence Group (GTIG) said on Monday that it has identified, for the first time, a cybercrime group using a zero-day exploit that the company believes was discovered with the assistance of AI. A zero-day vulnerability is a major security flaw in software or hardware that is unknown to its developers, leaving them with “zero days” to patch it before attackers can exploit it.

The threat actor, which Google did not name but described as a “prominent” cybercrime group, was allegedly planning to use the flaw in a mass exploitation campaign. Google believes it prevented the exploit from being used.

According to the GTIG report, Google’s analysis of exploits tied to the campaign turned up an exploit for a zero-day vulnerability embedded in a Python script. It would have allowed hackers to bypass two-factor authentication on an unnamed but popular open-source, web-based system administration tool, though Google noted that the attackers would still have needed valid user credentials for the exploit to work.

GTIG said it worked with the impacted vendor to disclose and address the security flaw.

The report goes on to say that Google has “high confidence,” based on the structure and content of the exploits, that the hackers likely used an AI model to help discover and weaponize the flaw.

“For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data (e.g., detailed help menus and the clean _C ANSI color class),” the report reads.
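To make those tells concrete, here is a purely hypothetical, benign sketch of what the fingerprints described in the report can look like in Python. Nothing in it comes from the actual exploit; the fabricated CVSS score, the function names, and the argparse help menu are all invented for illustration, with only the “_C” color class name taken from the report’s description.

# Hypothetical illustration only -- not the actual exploit code. This benign
# sketch shows the stylistic tells GTIG describes: verbose educational
# docstrings, a fabricated CVSS score, an argparse-driven help menu, and a
# tidy "_C" ANSI color class.
import argparse


class _C:
    """ANSI color codes for readable terminal output."""
    GREEN = "\033[92m"
    RESET = "\033[0m"


def check_target(host: str) -> None:
    """
    Check whether the target host is reachable.

    CVSS Score: 9.8 (Critical)  <- an invented score, as in the report

    Args:
        host: The hostname or IP address of the target system.
    """
    print(f"{_C.GREEN}[+]{_C.RESET} Checking {host}...")


def main() -> None:
    """Parse command-line arguments and dispatch the check."""
    parser = argparse.ArgumentParser(
        description="Example tool demonstrating LLM-style Python formatting."
    )
    parser.add_argument("host", help="Target hostname or IP address")
    args = parser.parse_args()
    check_target(args.host)


if __name__ == "__main__":
    main()

None of these traits proves AI authorship on its own; per the report, GTIG’s “high confidence” rests on the combination of such patterns across the structure and content of the exploits.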

Google also notes that it does not believe its own Gemini model was used.

The news comes amid growing scrutiny over the cybersecurity threats posed by advanced AI models, especially following the limited release of Anthropic’s Mythos model. Anthropic has made Mythos available only to a select group of companies, organizations, and governments through a program meant to help them test and strengthen their cybersecurity.

The limited release caused enough of a stir that it prompted the Trump administration to consider dropping its beef with Anthropic and securing agreements with more AI companies that would allow the government to review their models before public release.

Still, not everyone is convinced that Mythos is as big a deal as it has been made out to be.

In a blog post on Monday, curl lead developer Daniel Stenberg characterized the hype around Mythos as mostly a “successful marketing stunt.”

Stenberg wrote that he participated in Anthropic’s Project Glasswing, which lets companies submit their code to Anthropic to be analyzed by Mythos for security flaws.

The developer eventually received a report from Anthropic that listed five “confirmed security vulnerabilities.” But after closer inspection, Stenberg and his team determined that only one of them was a legitimate, unknown security issue.

“My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg wrote. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.”
