IT Brief UK - Technology news for CIOs & IT decision-makers
Google says AI-powered cyberattacks are already here

Thu, 14th May 2026
Joseph Gabriel Lagonsin, News Editor

Google has reported that threat actors are using artificial intelligence to develop exploits, improve malware and seek access to advanced language models. It said it had identified what it believes is the first zero-day exploit developed with AI assistance.

The findings came from Google Threat Intelligence Group, which said criminal groups and state-backed actors linked to China, North Korea and Russia are already integrating AI into cyber operations. The report described a shift from limited experimentation to broader use of generative models across the attack chain.

One of the most significant incidents involved a zero-day exploit tied to a criminal campaign planning mass exploitation of a vulnerability in a widely used open-source, web-based system administration tool. The flaw allowed attackers to bypass two-factor authentication, although exploiting it still required valid credentials.

Google Threat Intelligence Group said it worked with the software developer to disclose the issue and secure a patch. Mistakes in the attackers' implementation also appear to have reduced the chances of the exploit succeeding at scale.

According to Google, the code showed hallmarks commonly associated with AI-generated output, including tutorial-style explanatory comments, a hallucinated CVSS score and an unusually tidy Python structure. It added that it did not believe the exploit had been developed with Mythos.

John Hultquist, chief analyst at Google Threat Intelligence Group, said the activity showed that the security risks linked to AI were no longer theoretical.

"There's a misconception that the AI vulnerability race is imminent. The reality is that it's already begun. For every zero-day we can trace back to AI, there are probably many more out there.

"Threat actors are using AI to boost the speed, scale and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware and make many other improvements. State actors are taking advantage of this technology, but the criminal threat shouldn't be underestimated, especially given their history of broad, aggressive attacks," Hultquist said.

State use

The report said state-linked groups from North Korea and China have shown particular interest in using AI for vulnerability research. One North Korean actor, APT45, was said to have used AI to validate thousands of exploits and expand its stock of tools.

Google also described efforts by threat actors to jailbreak models with fake expert personas and to prime systems with specialist vulnerability datasets. In one example, an actor used prompts framing the model as a senior security auditor examining embedded devices for remote code execution flaws.

Chinese-linked groups were also said to be using AI to support infrastructure development and concealment. Google linked APT27 to the use of Gemini in developing a fleet management application that may have supported an operational relay box network designed to obscure the origin of intrusion traffic.

Russian-linked actors were cited in two strands of activity. One involved AI-assisted malware used against organisations in Ukraine, where code families such as CANFAIL and LONGSTREAM contained large volumes of decoy logic. The other involved information operations using AI-generated or AI-altered media.

Agentic tools

The report also highlighted growing concern over agentic systems that can carry out tasks with less direct human involvement. Tools such as OpenClaw, Hexstrike and Strix are being deployed for reconnaissance, vulnerability probing and attack testing.

In one case, a China-linked actor was observed using agentic tools to probe a Japanese technology company for weaknesses. These systems can maintain context, switch between tools and verify vulnerabilities with limited oversight, making large-scale reconnaissance easier to sustain.

The report also examined PROMPTSPY, an Android backdoor that uses Google's Gemini API to interpret a device's user interface and issue commands such as clicks and swipes. The malware could replay biometric authentication gestures and resist removal by placing an invisible overlay over the uninstall button.

Assets associated with PROMPTSPY have been disabled, and no apps containing the malware were found on Google Play. Google Play Protect also blocks known versions on Android devices using Google Play Services.

Model access

Another theme was the effort by threat actors to secure scalable and concealed access to premium AI models. Some groups were using account registration scripts, proxy services and pooled API access to avoid billing limits and platform restrictions.

Google linked China-related clusters including UNC6201 and UNC5673 to tooling that automates premium account creation, CAPTCHA bypass and account cycling. The report said some of these actors also use relay services to combine access to multiple accounts from providers including Google, Anthropic and OpenAI.

It added that AI systems themselves are becoming targets through the wider software supply chain rather than through direct attacks on frontier models. The report pointed to compromises involving repositories and packages linked to tools including LiteLLM, where attackers allegedly stole cloud credentials and tokens from build environments.

Similar attacks on AI-related dependencies could give intruders access not only to traditional corporate systems but also to internal AI tools and models, which could then be used for reconnaissance, data theft or follow-on attacks.

Google said it uses its own AI systems to detect vulnerabilities and patch code, and that findings from malicious activity are fed back into product safety work. But the report's central message was that adversaries already treat AI as both a tool and a target, with criminal and state-backed groups moving from trials to regular use.