See Full Article: https://news.clearancejobs.com/2025/12/08/pentagon-backs-cisas-push-for-secure-ai-in-critical-infrastructure/
The Cybersecurity and Infrastructure Security Agency (CISA) joined the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC) in publishing new guidance on four key principles to help critical operational technology (OT) owners and operators understand the risks of integrating artificial intelligence (AI) into OT environments. The “Principles for the Secure Integration of Artificial Intelligence in Operational Technology” guidance was designed around the Purdue Model Framework, which focuses on the hierarchical relationships between OT and IT devices and the network.
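For context, the Purdue Model separates industrial systems into levels running from the physical process up to enterprise IT. The short Python sketch below is illustrative only; the level names are common industry shorthand, not quoted from the CISA/ACSC document, and the helper function is a hypothetical way to reason about how far an AI data flow reaches into the OT stack.

# Illustrative sketch (not from the guidance): conventional Purdue Model levels.
PURDUE_LEVELS = {
    0: "Physical process (sensors, actuators)",
    1: "Basic control (PLCs, RTUs)",
    2: "Supervisory control (SCADA, HMIs)",
    3: "Site operations (historians, MES)",
    3.5: "Industrial DMZ (boundary between OT and IT)",
    4: "Enterprise IT (business systems, ERP)",
}

def zones_crossed(src_level, dst_level):
    """List the levels a data flow traverses, e.g. an AI service at
    level 4 pulling telemetry from level 1 crosses most of the stack."""
    lo, hi = sorted((src_level, dst_level))
    return [lvl for lvl in PURDUE_LEVELS if lo <= lvl <= hi]

print(zones_crossed(4, 1))  # [1, 2, 3, 3.5, 4]

The more levels an AI integration spans, the more boundaries it crosses and the larger the potential blast radius if it misbehaves, which is the kind of risk the guidance asks owners and operators to weigh.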
CISA and the ACSC laid out four key principles for securely integrating AI into OT environments.
In recent years, AI has gone from a buzzword to a technology used by millions of Americans, and perhaps billions more in nations around the world. The technology is being adopted, adapted, and integrated into OT at breakneck speed.
CISA is now calling for greater consideration of how the technology might be used.
“This guidance arrives at a critical inflection point,” said Denis Calderone, CRO & COO at Suzu Labs. “We’re seeing organizations rush AI deployments into operational environments with various rationales, but often without the security rigor these systems demand. The consequences of getting this wrong aren’t arbitrary or abstract. We’re talking about critical areas like water treatment, power grids, and manufacturing safety systems.”
The guidance could help companies ensure they are prepared to address unexpected outcomes of AI adoption and integration.
“What I appreciate about this framework is the focus on ‘AI drift,'” Calderone told ClearanceJobs via email. “We have seen evidence where AI models can degrade or behave unexpectedly over time, particularly in OT environments where the consequences of bad decisions can cause physical material outcomes. A mis-calibrated algorithm in a financial system costs money, while a miscalibrated algorithm controlling industrial processes can cost lives.”
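Calderone’s point about drift lends itself to a concrete illustration. The sketch below is hypothetical; the function, thresholds, window sizes, and fallback action are assumptions rather than anything prescribed by the guidance or the interview. It shows the general idea of comparing recent model behavior against a known-good baseline and reverting to deterministic control when the two diverge.

# Hypothetical sketch of one way "AI drift" might be monitored in an OT setting:
# compare recent model outputs against a commissioning-time baseline and alarm
# when the mean shifts by more than a set tolerance.
from statistics import mean, stdev

def drift_alarm(baseline, recent, z_threshold=3.0):
    """Return True if the recent window's mean deviates from the baseline
    mean by more than z_threshold baseline standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > z_threshold

baseline = [50.1, 49.8, 50.0, 50.2, 49.9, 50.0]   # outputs at commissioning
recent   = [53.4, 53.9, 54.1, 53.7, 53.8, 54.0]   # drifting model outputs
if drift_alarm(baseline, recent):
    print("Drift detected: fall back to manual/deterministic control")

In a financial system, a check like this flags a costly anomaly; in an industrial control loop, it is the difference Calderone describes between losing money and risking physical harm.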
However, the greater challenge will be adoption.
“OT environments are notorious for ‘if it ain’t broke, don’t fix it’ cultures, and frankly, they’re not typically built for agility either,” suggested Calderone. “Change management in these environments moves deliberately, often for good reason. Meanwhile, bespoke AI solutions are being stood up at breakneck speed by vendors and internal teams racing to capture efficiency gains. That mismatch is a recipe for trouble. Organizations that treat this guidance as a checkbox exercise will miss the point entirely.”
AI will continue to evolve, and companies need to be prepared.