CISA Publishes Security Guidance for Using AI in OT
A coalition of international government agencies has published guidance on how best to defend AI deployments in operational technology (OT) environments.
Such guidance seems necessary: on their own, AI and OT environments are two of the most sensitive, high-profile attack surfaces. AI is a prime target because new attack techniques against it emerge constantly, and OT because of its use in critical and industrial settings.
The guidance was authored by the US’s CISA, FBI, and the NSA’s Artificial Intelligence Security Center; the Australian Signals Directorate’s Australian Cyber Security Centre; the Canadian Centre for Cyber Security; the German Federal Office for Information Security; the Netherlands National Cyber Security Centre; the New Zealand National Cyber Security Centre; and the UK’s National Cyber Security Centre.
As the 25-page document explained, large language model (LLM) deployments can potentially increase efficiency and enhance decision-making, but integrating AI into critical OT environments “also introduces significant risks — such as OT process models drifting over time or safety-process bypasses — that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure.”
The guidance aims to help operators understand AI and how it can best be used in OT environments; establish AI governance and assurance frameworks; and embed safety and security practices into OT-AI integrations. In OT settings, AI is used to analyze critical data, detect anomalies in systems such as SCADA, provide recommendations that support operator decision-making, optimize workflows, and more.
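As a concrete illustration of that anomaly-detection use case, the minimal sketch below flags sensor readings that diverge sharply from a rolling baseline. The sensor values, window size, and threshold are illustrative assumptions, not anything prescribed by the guidance, and a real deployment would tune them against actual process limits.

```python
# Minimal sketch: flagging anomalous SCADA-style sensor readings with a
# rolling z-score. Window size and threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def check(self, reading: float) -> bool:
        """Return True if the reading looks anomalous vs. recent history."""
        anomalous = False
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(reading)
        return anomalous

# Feed telemetry as it arrives; the final value is a deliberate spike.
monitor = SensorMonitor()
for value in [48.9, 50.1, 49.7, 50.3, 49.8, 50.0, 72.4]:
    if monitor.check(value):
        print(f"anomaly flagged: {value}")
```

Notably, a detector like this only surfaces a flag to an operator; it takes no action itself, which keeps the human in the loop that the guidance calls for.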
Elements of the guidance, particularly those concerning how best to deploy AI technology, are reminiscent of previous recommendations from public-sector agencies. For example, the UK government recently published guidelines on how best to use AI-powered coding tools across His Majesty’s Government.
Richard Springer, senior director of OT solutions at Fortinet (a vendor that contributed to the guidance), says widespread AI deployment in OT environments remains limited. Most organizations, he says, are still focused on foundational cybersecurity, such as segmentation, asset visibility, patching, and basic detection and response. And because the stakes are so high, generative AI (GenAI) is less a “not yet” and more a “never” for many operators, he adds.
“That said,” Springer continues, “there’s broad agreement that GenAI will eventually play a meaningful role: accelerating playbooks, assisting with diagnostics, supporting predictive maintenance, and helping operators manage increasingly complex environments. But any automation in OT must be bounded by a clear understanding of cause-and-effect, risk tolerances, and the absolute priority of safety and uptime, especially when people and critical infrastructure are on the line.”
The risks surrounding AI are fairly well established. At the low end, attackers can use natural language prompts to get LLMs to divulge sensitive data. In more severe cases, AI agents can be exploited to conduct remote code execution (RCE) attacks against their own operators, or even to introduce new vulnerabilities that weren’t there previously. That’s in addition to other, better-known risks, such as AI hallucinations.
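A common mitigation for the agent-exploitation risk is to never let free-text model output reach an interpreter or shell, and instead restrict agents to an explicit allowlist of actions. The sketch below illustrates that pattern; the action names and handlers are hypothetical, not drawn from the guidance.

```python
# Minimal sketch: constraining an AI agent to an explicit action allowlist so
# free-text model output can never become arbitrary code execution.
# The action names and handlers are illustrative assumptions.
ALLOWED_ACTIONS = {
    "read_sensor": lambda name: f"reading {name}",
    "list_alarms": lambda: "no active alarms",
}

def dispatch(action: str, *args: str) -> str:
    """Execute a model-proposed action only if it is explicitly allowlisted."""
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        raise PermissionError(f"action {action!r} is not allowlisted")
    return handler(*args)

print(dispatch("read_sensor", "pump-4"))   # permitted
try:
    dispatch("run_shell", "rm -rf /")      # model-injected action: rejected
except PermissionError as err:
    print(err)
```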
AI’s Risks in OT Environments
While on paper the issues aren’t so different when AI is introduced to OT, unique dangers arise from the criticality and specificity of the environments involved: OT runs in manufacturing, energy, defense, water, medical, and other settings where the consequences of an attack are greatest.
For example, the guidance warned of the risk of AI models “drifting over time.”
Nathaniel Jones, vice president of security and AI strategy and field chief information security officer (CISO) at Darktrace, was one of the primary contributors to the document. He says AI models can “gradually diverge from their original training assumptions,” and as operational data changes, a model “could provide recommendations that no longer align with safety limits.”
“It isn’t just that these models diverge over time or hallucinate, it’s that probabilistic AI outputs can introduce uncertainty into deterministic and very specific OT systems,” Jones explains.
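Drift of the kind Jones describes can be monitored independently of the model itself, by comparing live process data against a training-time baseline. The sketch below uses the Population Stability Index (PSI) for that comparison; the bin count, the 0.2 alert threshold, and the data are common rules of thumb assumed for illustration, not anything drawn from the guidance.

```python
# Minimal sketch: detecting input drift by comparing live process data
# against a training-time baseline with the Population Stability Index (PSI).
import math

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples; higher means more drift."""
    lo, hi = min(baseline), max(baseline)
    if hi == lo:
        return 0.0  # degenerate baseline: no distribution to compare

    def proportions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[max(idx, 0)] += 1  # clamp values outside the baseline range
        # small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

training_data = [50.0 + 0.1 * (i % 7) for i in range(500)]  # stand-in baseline
live_data = [51.5 + 0.1 * (i % 7) for i in range(500)]      # shifted process

score = psi(training_data, live_data)
if score > 0.2:  # common "significant shift" rule of thumb
    print(f"drift detected (PSI={score:.2f}): revalidate the model")
```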
To address the threats posed by insecure AI, the guidance makes a wide range of suggestions. First, it asks OT organizations to educate themselves and their personnel on AI, including its risks and how to use it with security in mind from the outset.
Second, it asks organizations to consider the OT business case for AI and to determine whether AI technologies are actually the most appropriate solution for the organization’s specific needs, rather than rushing headfirst into shiny new technology. AI can deliver benefits for the right use case, but adopting it without one introduces unnecessary risk.
Organizations should also address data-related challenges, including understanding where data is stored and making sure models don’t have access to any more data than necessary.
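In code, that least-privilege principle can be as simple as filtering records by an explicit clearance tag before anything reaches the model. The tag name and records in the sketch below are hypothetical.

```python
# Minimal sketch of data minimization for a model: only records explicitly
# tagged as shareable with the AI system are ever passed to it.
# Tag names and records are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Record:
    content: str
    tags: frozenset[str]

ALLOWED = {"ai-approved"}  # only data cleared for model access

def scope_for_model(records: list[Record]) -> list[str]:
    """Return only content the model is permitted to see."""
    return [r.content for r in records if r.tags & ALLOWED]

records = [
    Record("pump #4 vibration trend", frozenset({"ai-approved"})),
    Record("operator credentials", frozenset({"restricted"})),
]
print(scope_for_model(records))  # -> ['pump #4 vibration trend']
```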
Third, the authoring agencies recommend organizations establish clear governance and assurance frameworks, including establishing policies and accountability structures, integrating AI into existing security frameworks, conducting thorough testing and evaluation, and understanding how to navigate compliance.
Finally, OT organizations should embed oversight and failsafe practices into AI deployments. This means establishing the right monitoring mechanisms (such as human-in-the-loop decision-making) and failsafes so AI can “fail gracefully without disrupting critical operations,” the guide explained.
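A minimal sketch of that oversight pattern might look like the following: a model’s recommendation is clamped to hard engineering limits, routed through an operator for approval, and any model failure falls back to the last known-safe setpoint. The bounds and function names are assumptions for illustration, not the guidance’s own example.

```python
# Minimal sketch: an AI recommendation is clamped to hard safety bounds,
# routed through a human operator, and any failure falls back to the last
# known-safe setpoint. Bounds and function names are illustrative assumptions.
SAFE_MIN, SAFE_MAX = 40.0, 60.0  # hard process limits, set by engineers

def get_model_recommendation() -> float:
    """Stand-in for a call to an AI model; may raise on failure."""
    return 57.3

def operator_approves(proposed: float) -> bool:
    """Stand-in for a human-in-the-loop confirmation step."""
    answer = input(f"Apply setpoint {proposed:.1f}? [y/N] ")
    return answer.strip().lower() == "y"

def next_setpoint(current: float) -> float:
    try:
        proposed = get_model_recommendation()
    except Exception:
        return current  # fail gracefully: keep the last-safe value
    proposed = min(max(proposed, SAFE_MIN), SAFE_MAX)  # never exceed hard limits
    if operator_approves(proposed):
        return proposed
    return current  # operator declined: no change

print(next_setpoint(current=50.0))
```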
Defender Takeaways
As AI use in the enterprise becomes more common, it’s safe to assume attack surfaces will grow more complex and more guidance like this will prove necessary.
Chris Grove, director of cybersecurity strategy for Nozomi Networks (which also contributed to the document), says CISA’s joint guidance reinforces a “central reality” for OT environments. “AI can accelerate decision-making, but any technology that influences critical processes should be deployed with caution and discipline,” Grove says.
Darktrace’s Jones says the guidance and the global coordination involved reflect concerns that AI in OT is a systemic risk, not a niche concern. “The fact that the guidance explicitly calls out LLMs as high risk in OT environments is a major win, since LLMs can hallucinate and potentially provide operators with incorrect information for decision making,” he adds. “This is especially critical now as, with new security standards that require behavioral analytics and anomaly detection, like NERC’s CIP-015, there is an assumption that organizations are using LLMs or generative AI to accomplish this. However, this guidance makes it clear that this is not the best machine learning or AI technique to achieve this accurately.”
