OIDF responds to NIST on AI agent security

Published March 11, 2026

The OpenID Foundation has submitted its response to a US government call for input on how to secure AI agent systems.

In March 2026, the Threat Modeling Subgroup of the OpenID Foundation’s AI Identity Management (AIIM) Community Group filed a response to NIST’s Request for Information on securing AI agent systems (NIST-2025-0035). The RFI asked industry, academia, and security researchers to help shape future US government guidance on AI agent security.

This builds on work the AIIM Community Group has been doing since October 2025, when it published a whitepaper identifying the core challenges at the intersection of AI and digital identity. The Foundation’s submission to NIST takes that analysis further, translating it into concrete recommendations for US government guidance.

AI agents are software systems that act autonomously on behalf of users or organisations, for example by browsing the web, executing transactions, and calling other services. As their use accelerates, so do the security questions around how they identify themselves, what they’re permitted to do, and who is accountable when something goes wrong.

Sarah Cecchetti, Chair of the AIIM Threat Modeling Subgroup and Director of Product Management at Semperis, said: “The work of the OpenID Foundation’s AIIM community group is critical. Implementations differ wildly because this technology is so nascent. It takes experts coming together to see where the threats and hidden complexities exist. I’m very proud to have been part of this feedback to NIST that will help regulators to walk the fine line of offering security guidance while encouraging innovation.”

The core problem is not the technology, it’s trust

The OpenID Foundation’s submission argues that the most urgent AI agent security risks are not technical failures, but failures of trust. Who authorised this agent to act? On whose behalf? Can that be verified? Today, most deployments rely on makeshift workarounds: manually managed access lists, unsigned credentials, and no clear chain of accountability. While these approaches may work at small scale, they break down as AI agents operate across multiple organisations and systems.

The submission calls for a ‘trust fabric’ beneath the technical controls. This means a foundation that can verify credentials automatically, constrain what any agent is allowed to do, and trace actions back to accountable parties. Without it, systems are forced into a default of ‘allow everything’, which undermines both security objectives and regulatory requirements.

Security guidance that supports innovation

The OpenID Foundation is clear that better security guidance should not mean more bureaucracy. If security requirements are too burdensome, teams will cut corners to get things done. Rather than imposing prescriptive mandates, the submission asks NIST for guidance that points organisations toward emerging, practical standards, such as transaction tokens, workload identity federation, and authentication extensions for AI tool protocols.

Chris Phillips, Independent Identity Architect, Adiuco, said: "Participating in the OpenID Foundation’s AIIM Community Group response to NIST helped coalesce a wide range of ideas and emerging challenges into sharper focus. The group’s diversity and collaboration reflect the idea that none of us is as smart as all of us, which is exactly what’s needed if we’re going to shift from reacting to shaping how trustworthy computing evolves alongside AI and the software supply chain.

“The work is ongoing, and we welcome others to join the conversation and experience first hand what it’s like to help shape one of the biggest shifts in identity in decades."

The full response can be read here.

For more information on how to get involved in this work, please visit the OpenID Foundation’s AI Identity Management (AIIM) Community Group.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy-preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, FAPI, the Foundation’s high-security, interoperable profile built on OAuth 2.0, has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue that enables people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

To learn more about conformance testing and self-certification, please visit the OpenID Foundation’s FAQ section.