AI-Driven Espionage Campaign Disrupted After Abuse of Claude Code
See Full Article: https://informationsecuritybuzz.com/ai-driven-espionage-campaign-claude-code/
A Chinese state-sponsored threat group is believed to be behind what researchers describe as the first documented cyber-espionage operation executed largely by AI rather than humans.
The campaign, detected in mid-September, used Anthropic’s Claude Code tool to probe and infiltrate around thirty organisations across tech, finance, chemicals, and government.
A Pace Human Teams Can’t Match
Anthropic says the AI carried out roughly 80 to 90% of the operation, with humans stepping in only a handful of times to make strategic decisions. At its peak, Claude fired off thousands of actions, sometimes several per second, a tempo no human team could sustain. It also compiled documentation of its own attacks, creating files of stolen credentials and maps of compromised systems to support follow-on operations.
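That machine-speed tempo is itself a detection signal: sustained bursts of several actions per second are hard for a human operator to produce. A minimal sketch of a sliding-window rate check illustrates the idea; the function name, window size, and threshold here are illustrative assumptions, not values drawn from Anthropic's systems.

```python
from collections import deque

def machine_speed_flag(timestamps, window_s=10.0, threshold=20):
    """Return True if any sliding window of window_s seconds contains
    threshold or more events -- a burst rate suggestive of automation.
    Thresholds are illustrative, not reported operational values."""
    window = deque()
    for t in sorted(timestamps):
        window.append(t)
        # drop events that have fallen out of the sliding window
        while window and t - window[0] > window_s:
            window.popleft()
        if len(window) >= threshold:
            return True  # sustained machine-speed burst detected
    return False

# A human-paced session: roughly one action every few seconds.
human = [i * 3.0 for i in range(30)]
# An agent-paced burst: several actions per second, as described above.
agent = [i * 0.2 for i in range(100)]

print(machine_speed_flag(human))  # False
print(machine_speed_flag(agent))  # True
```

Real deployments would combine rate signals with content-level classifiers, but even a simple tempo check separates human-paced use from the kind of burst activity reported in this campaign.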
The espionage run wasn’t flawless. Claude occasionally hallucinated credentials or misclassified public data as sensitive, a reminder that fully autonomous hacking still carries reliability gaps.
However, the campaign’s scale and autonomy mark a shift that researchers have been warning about: AI models are now capable of chaining tasks, managing tools, and executing complex intrusions that were previously out of reach for all but the most well-resourced actors.
Once the activity was spotted, Anthropic spent roughly ten days containing the breach: banning the accounts involved, notifying affected organisations, and working with authorities. The company says it has since upgraded its detection systems and classifiers to spot similar misuse earlier.
Operational Doctrine
Michael Bell, Founder & CEO at Suzu Labs, says: “If accurate, this represents the inflection point where AI systems execute 80 to 90% of sophisticated attacks autonomously, not just advise attackers. Jailbreaking Claude by convincing it this was legitimate penetration testing shows this isn’t an edge case anymore, it’s operational doctrine.”
“The technical scenario is feasible, and these attack patterns will be weaponized at scale. Organizations deploying AI agents with tool access need detection capabilities today, regardless of how this specific disclosure evolves,” he adds.
“Organizations need to prepare for AI-powered attacks whether or not this specific disclosure proves exactly as described, because the jailbreak techniques and autonomous exploitation patterns are technically feasible and will be weaponized regardless.”