Why existing AI security standards fall short and how we can do better

COMMENTARY: By most measures, the digital world takes security standardization seriously. We have general-purpose cybersecurity frameworks, like the NIST Cybersecurity Framework and ISO 27001, that provide guidance on securing workloads of all types. We have standards dedicated to data privacy, like those defined in the GDPR. We have frameworks tailored to specific use cases, like payment processing (the focus of PCI DSS).

Yet one immense security standardization gap remains: the one surrounding AI. Despite the fact that 78% of organizations now use AI, no straightforward, universal AI security standard exists that is comparable to other major cybersecurity frameworks.

That’s the bad news. The good news is that this problem is eminently solvable. It simply requires establishing clarity around which categories of risk an AI security standard needs to cover, and which controls it should include to mitigate them.

The fragmented state of AI security standards

The absence of a universal AI security standard isn’t because CISOs, CIOs and regulators haven’t paid attention to the issue of AI security. They have, resulting in the debut of several initiatives in recent years that aim to tackle AI security risks. Key examples include:

  • ISO/IEC 42001: Introduced in 2023, this voluntary framework was the first international AI management standard.
  • NIST AI Risk Management Framework: Also launched in 2023, this is another voluntary AI security standard.
  • E.U. AI Act: The European Union’s AI Act, which entered into force in 2024 with obligations phasing in from 2025, isn’t a security standard per se, but it includes specific control requirements for high-risk AI applications.
  • State-level frameworks: Various U.S. states (such as California, through SB 53) have introduced legislation related to the regulation of AI security, privacy and/or transparency.
  • Industry frameworks: Certain industry groups and companies have also rolled out guidelines for AI security, such as Microsoft’s Responsible AI Standard, Google’s SAIF and recommendations from the Coalition for Secure AI (CoSAI).
The problem is that these standards diverge widely in how they define AI, let alone in the security controls they require or recommend. Their levels of enforceability also vary widely: most are voluntary, and one (the E.U. AI Act) applies only to companies that are based in, or operate in, a specific jurisdiction.

In other words, while industry influencers and regulators have tried to create something approaching an AI security standard, there is too much fragmentation, and too little enforceability, for these frameworks to deliver the protections businesses desperately need as they deploy AI systems that face unique risks (like prompt injection and prompt leaks) that don’t affect traditional applications.

Why the world needs better AI security standards

The consequences of the lack of a universal AI security standard go far beyond making it hard for businesses to decide which voluntary framework to use, or which controls to implement. They lead to critical gaps in areas including:

  • Legal risk: Without clear compliance baselines or security certification processes, it’s challenging for businesses to ensure that they are meeting their legal obligations in the realm of AI security.
  • Reputational risk: The lack of clear standards also means that organizations are poised to experience greater reputational fallout from AI-related security breaches. They can’t state that they were following best practices because there are no universally accepted best practices.
  • Inconsistent security strategies: Varying AI security standards mean that different groups within the same organization may adopt multiple strategies, leading to chaos and inefficiency. This is especially true for multi-region organizations.

Toward a universal AI security standard

Businesses can’t go on like this. They can’t keep increasing their adoption of AI without clear standards to guide their approach to securing AI investments. Hence the need for a new AI security standard: one that transcends the existing frameworks and provides the coverage and clarity they sometimes lack.

Such a framework should include the following foundational elements:

  • Leadership-based decision-making: Business leaders must buy into AI security standards and ensure that they set a tone that the rest of the business can follow.
  • Concrete, auditable controls: An actionable AI security standard needs clear-cut controls that are flexible enough to accommodate a range of use cases but specific enough to be actionable.
  • Global accountability with local flexibility: A workable framework must also be flexible enough to meet the needs of diverse jurisdictions while also ensuring global adoptability and enforceability.
  • Certification process: A clearly defined, independent certification process should spell out what companies need to do to demonstrate that they are meeting the AI security standards.
  • Practical implementation guidance: Businesses of all types and sizes, across all industries, need clear guidance on how to implement the standards.
  • A defined role for humans: Even in a world that is increasingly automated, humans have a role to play. The standard must define when and how humans remain “in the loop” of AI-driven processes.
  • Data management model: AI systems are only as secure as the data that powers them. A universal standard needs to include guidance on how businesses protect data across the AI/ML lifecycle.
It’s certainly possible to write a standard that meets these criteria. The challenge lies in overcoming inertia so that stakeholders can actually collaborate to build a truly universal standard. The sooner they do, the faster we’ll reach a world where AI is as secure as it is powerful.
