Google Sounds the Alarm: AI-Driven Malware Can Now Evade Detection and Adapt on the Fly

See Full Article: https://news.clearancejobs.com/2025/11/07/google-sounds-the-alarm-ai-driven-malware-can-now-evade-detection-and-adapt-on-the-fly/

According to an update from the Google Threat Intelligence Group (GTIG), this year has seen a shift where bad actors aren’t just leveraging artificial intelligence (AI) for productivity gains, but are now deploying novel AI-enabled malware in active operations.

The findings from GTIG are an update to its January 2025 analysis, “Adversarial Misuse of Generative AI,” which detailed how government-backed threat actors and cybercriminals are now integrating and experimenting with AI across the industry, including throughout the entirety of an attack’s lifecycle.

GTIG identified that malware families, including PROMPTFLUX and PROMPTSTEAL, are now employing Large Language Models (LLMs) during execution…

Time to Catch Up?

Even if this marks a new era in hacking and cyber attacks, it doesn’t mean it is too late for defenders to catch up.

“Google caught this while it’s still experimental, but the bad news is that once this capability matures, traditional security tools that rely solely on pattern matching will be almost useless except to defend against basic script kiddies,” Michael Bell, founder & CEO at Suzu Labs, also told ClearanceJobs.

He said this should be seen as another reminder of the importance of building security testing methodologies that assume AI-powered threats from day one.

“The underground marketplace for ‘AI tools purpose-built for criminal behavior’ isn’t coming in the future; it’s already here, and most enterprises aren’t remotely prepared for what happens when attackers have the same AI capabilities defenders do,” added Bell.

Moreover, AI-enabled malware may also mutate its code, making traditional signature-based detection ineffective.
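Why mutation defeats signature matching can be shown in a few lines. The snippet below is a minimal illustration, not real malware: two functionally identical payloads differ only by a junk comment, yet their cryptographic hashes (the basis of classic signature databases) no longer match.

```python
import hashlib

# Two functionally identical payloads: the second adds only a throwaway
# comment, the kind of trivial rewrite an LLM-driven mutator could
# produce on every execution.
original = b"import os\nos.system('whoami')\n"
mutated = b"# harmless-looking note\nimport os\nos.system('whoami')\n"

sig_original = hashlib.sha256(original).hexdigest()
sig_mutated = hashlib.sha256(mutated).hexdigest()

# A signature database keyed on the original hash misses the mutated copy.
print(sig_original == sig_mutated)  # False: same behavior, different signature
```

One changed byte is enough; a model that rewrites whole functions per run makes every sample a "new" file to a hash-based scanner.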

“Defenders need behavioral EDR (Endpoint Detection and Response) that focuses on what malware does, not what it looks like,” said Bell. “Detection should key in on unusual process creation, scripting activity, or unexpected outbound traffic, especially to AI APIs like Gemini, Hugging Face, or OpenAI. By correlating behavioral signals across endpoint, SaaS, and identity telemetry, organizations can spot when attackers are abusing AI and stop them before data is exfiltrated.”
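The kind of behavioral rule described above can be sketched as a simple telemetry filter. This is an illustrative sketch only: the domain list, log schema, and process allowlist are assumptions for the example, not a vetted detection rule set.

```python
# Hypothetical destinations associated with public AI APIs (assumption:
# a real rule would use a maintained, vendor-supplied domain feed).
AI_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini
    "api.openai.com",                     # OpenAI
    "huggingface.co",                     # Hugging Face
}

# Hypothetical processes approved to reach AI services.
PROCESS_ALLOWLIST = {"chrome.exe", "msedge.exe"}


def flag_ai_api_traffic(events):
    """Return network events where a non-allowlisted process contacts an AI API.

    Each event is assumed to be a dict with 'process' and 'dest' keys,
    e.g. as normalized from EDR outbound-connection telemetry.
    """
    return [
        event
        for event in events
        if event["dest"] in AI_API_DOMAINS
        and event["process"] not in PROCESS_ALLOWLIST
    ]


events = [
    {"process": "svchost.exe", "dest": "api.openai.com"},   # suspicious
    {"process": "chrome.exe", "dest": "huggingface.co"},    # expected browsing
]
print(flag_ai_api_traffic(events))
# Only the svchost.exe event is flagged
```

In practice this single signal would be correlated with process creation and identity telemetry, as the quote suggests, rather than acted on alone.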