Agentic AI and Authentication: Exploring Some Unanswered Questions

Agentic AI is changing authentication faster than our identity models can keep up. We’ve built systems assuming users are human, but what happens when an AI agent, not the user, needs to authenticate on their behalf? Our current identity frameworks weren’t designed for this, and the gaps are starting to show.

If you’ve read my earlier post, Are You Human? A Dive Into the Proof of Personhood Debate, you’ll notice some overlaps here. That post dove into the tricky questions around distinguishing humans from non-humans and explored some early efforts to tackle proof of personhood. This post extends that conversation by examining how agentic AI complicates the identity landscape further.

As if digital identity wasn’t confusing enough…

“Traditional” Federation Models

In federation models that assume a relying party (RP) and an identity provider (IdP), the process of authentication is kicked off by a user. From there, the RP and IdP have a lovely chat about what data is being requested (by the RP) and what data will be sent in response (by the IdP). These systems have served well in scenarios where it’s important to recognize that the information may be about the user but does not belong to the user. (Example: a user’s affiliation with an organization is not something the user owns; it’s owned by the organization. The user should have control over releasing that information in most circumstances, but they do not own it.)
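To make that RP/IdP conversation a little more concrete, here’s a minimal sketch of how an RP might construct the request that kicks off an OpenID Connect flow. The endpoint, client ID, and scope values are placeholders I made up for illustration, not values from any particular deployment.

```python
from urllib.parse import urlencode

# Placeholder values; a real RP gets these from the IdP's discovery
# document and its own client registration.
IDP_AUTHORIZE_URL = "https://idp.example.edu/authorize"
CLIENT_ID = "rp-demo-client"
REDIRECT_URI = "https://rp.example.com/callback"

def build_auth_request(state: str, nonce: str) -> str:
    """Build the URL that sends the user's browser to the IdP.

    The scope parameter is where the RP says what data it is requesting;
    the IdP (and the user, via a consent screen) decide what is released.
    """
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",  # what the RP is asking for
        "state": state,
        "nonce": nonce,
    }
    return f"{IDP_AUTHORIZE_URL}?{urlencode(params)}"

print(build_auth_request(state="abc123", nonce="xyz789"))
```

Notice that a human is assumed at every step: a browser to redirect, a person to click “approve.”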

Enter agentic AI. What happens when an AI agent, not the user themselves, needs to authenticate?

  • Trust Boundaries: Can existing federation models extend trust to an agent acting autonomously on a user’s behalf? And if so, what ensures the agent’s actions align with the user’s intentions?
  • Granularity of Authorization: Federation models today operate on relatively coarse-grained permissions (e.g., scopes in OAuth). For agents, finer-grained controls may be necessary to specify “what” an agent can do. Those controls will also need to specify “when,” “where,” and “how.”
  • Auditing and Accountability: When an agent performs actions, how do we audit and attribute those actions to the user who authorized the agent, particularly if something goes wrong? (A partial answer is sketched just after this list.)
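On the auditing bullet, OAuth 2.0 Token Exchange (RFC 8693) already offers a relevant primitive: an `act` (actor) claim that records which party is acting on whose behalf. Here’s a rough sketch of how an audit trail might use it; the subject and audience values are invented for illustration.

```python
from datetime import datetime, timezone

# Illustrative claims for a delegated token, loosely following RFC 8693.
# "sub" is the delegating user; "act" identifies the agent doing the acting.
token_claims = {
    "iss": "https://idp.example.com",
    "sub": "user:alice",                    # who authorized the action
    "aud": "https://calendar.example.com",
    "exp": int(datetime(2026, 1, 1, tzinfo=timezone.utc).timestamp()),
    "act": {"sub": "agent:assistant-42"},   # who actually performed it
}

def audit_line(claims: dict) -> str:
    """Produce an audit entry naming both the agent and the user."""
    actor = claims.get("act", {}).get("sub", claims["sub"])
    return f"{actor} acted on behalf of {claims['sub']}"

print(audit_line(token_claims))
# -> agent:assistant-42 acted on behalf of user:alice
```

That helps with attribution; it says nothing about whether the action matched the user’s intent.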

OAuth2 FTW! (Right?)

In some of my conversations with people deeply embedded in the authentication space, they suggest that OAuth2-based authentication should make this a non-issue. But I’m not too sure. While OAuth2 does provide a framework for delegation, where a user can grant an agent limited access to their resources, I’m not actually asking about the basic mechanics (but if you’d like to discuss those, this is an interesting post to start with). I’m asking about under what circumstances those mechanics can and should be used, and whether they actually solve for all the use cases.

OAuth2 Handles Delegation, But Not Autonomy

So, first, my understanding is that OAuth2 would allow a user to delegate access to an AI agent by issuing tokens with specific scopes and permissions. For example, a user could grant a personal assistant app access to their calendar or email. Brilliant! We need that.
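For concreteness, here’s roughly what that delegation machinery looks like once the user has clicked “approve”: the assistant exchanges an authorization code for a scope-limited token. All the endpoint and client values below are placeholders.

```python
import requests

# Placeholder values; real ones come from client registration and the
# authorization code returned after the user consents.
TOKEN_URL = "https://auth.example.com/oauth2/token"

def exchange_code_for_token(code: str) -> dict:
    """Exchange an authorization code for a scope-limited access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": "https://assistant.example.com/callback",
            "client_id": "personal-assistant-app",
            "client_secret": "REPLACE_ME",  # placeholder
        },
        timeout=10,
    )
    resp.raise_for_status()
    # The returned scopes (e.g., "calendar.read email.send") are the whole
    # story. Nothing here expresses *when*, *for whom*, or *how often*.
    return resp.json()
```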

However! OAuth2 assumes the agent is an extension of the user’s intent, not an autonomous actor. Agentic AI introduces the possibility that an AI might make independent decisions, adapt over time, or interact dynamically with other systems in ways that extend beyond the original scope of user intent. And before you say “that should never happen,” that horse is leaving the barn even as I type.

Here are some potential scenarios:

  • An AI-powered email assistant that autonomously schedules meetings and replies to clients—What happens if it books a flight without explicit user consent?
  • An AI stock trading bot that exceeds the user’s intended risk profile—Who is responsible?
  • A personal shopping assistant that buys items based on inferred preferences—What happens if it misunderstands intent?

When something goes wrong, who is responsible: the user, the developer, or a service that may have manipulated the AI agent into behaving a particular way?

Granularity and Oversight

OAuth2 tokens typically grant access to resources but don’t provide detailed controls over how, when, or why those resources are accessed. So, in the use cases I’m worried about, an agent might need to send emails on your behalf but not at all times or for all recipients. As far as I know, OAuth2 has no native mechanism to enforce or audit such fine-grained boundaries.

For highly autonomous agents, a more nuanced authorization model is needed that could dynamically enforce user preferences or constraints as the agent operates.
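As far as I know there’s no standard for this yet, so treat the following as a hypothetical sketch of what I mean: a policy layer between the agent and the action that enforces “when” and “for whom” on top of the token’s “what.”

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EmailPolicy:
    """Hypothetical per-agent constraints layered on top of an OAuth scope."""
    allowed_recipients: set = field(default_factory=set)
    business_hours_only: bool = True

    def permits(self, recipient: str, now: datetime) -> bool:
        if self.allowed_recipients and recipient not in self.allowed_recipients:
            return False  # agent may email only the allow-listed recipients
        if self.business_hours_only and not (9 <= now.hour < 17):
            return False  # "send email" doesn't mean "send email at 2 a.m."
        return True

policy = EmailPolicy(allowed_recipients={"client@example.com"})
print(policy.permits("client@example.com", datetime(2025, 6, 2, 10, 0)))    # True
print(policy.permits("stranger@example.net", datetime(2025, 6, 2, 10, 0)))  # False
print(policy.permits("client@example.com", datetime(2025, 6, 2, 22, 0)))    # False
```

The hard part isn’t the check itself; it’s who defines, hosts, and audits these policies across domains.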

Federated and Decentralized Contexts

In federated systems (systems that externalize their authentication and even authorization actions), introducing agentic AI complicates how trust is propagated across domains. OAuth2 scopes might suffice in single-domain scenarios but fall short in multi-stakeholder environments.

In decentralized identity systems, such as those using verifiable credentials, the challenge extends to ensuring agents can securely act as verifiers or holders without compromising privacy or security. More on this in a bit.

While OAuth2 provides a solid foundation for today’s delegation needs, addressing agentic AI in authentication may require:

  • Enhanced granularity of scope to define not just what an agent can do, but how and when it can do it (see the sketch after this list).
  • Dynamic consent and revocation mechanisms that allow users to adapt permissions as agents operate.
  • AI-specific identity standards, potentially building on OAuth2 or incorporating concepts like decentralized identifiers (DIDs) and verifiable credentials, to handle autonomous behavior.
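On the first point, OAuth itself is inching this way: Rich Authorization Requests (RFC 9396) let a client request structured permissions via an `authorization_details` parameter instead of flat scope strings. Here’s a sketch of what an agent-oriented request could look like; the `type` and its fields are invented for illustration, since RAR leaves each type’s definition to the authorization server.

```python
import json

# An illustrative RFC 9396-style payload. The "type" and its custom
# fields are hypothetical; they are not a registered RAR profile.
authorization_details = [
    {
        "type": "https://example.com/agent-email",
        "actions": ["send"],
        "locations": ["https://mail.example.com"],
        # Constraints a plain scope string cannot express:
        "recipients": ["client@example.com"],
        "valid_until": "2025-07-01T00:00:00Z",
    }
]

print(json.dumps({"authorization_details": authorization_details}, indent=2))
```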

So, yes, OAuth2 is excellent for handling simple delegation. But agentic AI isn’t just about delegation. It makes independent decisions, adapts over time, and interacts unpredictably. Agentic AI introduces complexities around autonomy, accountability, and trust that stretch beyond the protocol’s original design. The question isn’t whether OAuth2 can delegate; it’s whether it can adapt to the nuanced and evolving behaviors of autonomous agents. (I both enjoyed and freaked out a bit over this post on AI and end-to-end encryption, which is another part of the discussion.)

Comparing and Contrasting

| Feature | Human OAuth2 Delegation | Agentic AI Authentication |
|---|---|---|
| Who initiates? | User explicitly grants access | AI may act autonomously |
| Boundaries | Pre-defined scopes | Need real-time contextual controls |
| Revocation | User can revoke manually | Needs automated monitoring |
| Accountability | User is responsible | Harder to attribute actions |

Verifiable Credentials and Identity Wallets: Issuer, Verifier, and Holder

The identity wallet model, with its issuer, verifier, and holder, offers an alternative framework for authentication. These systems are often designed with decentralization and user control in mind. What’s not to love? The growing ecosystem of mobile driver’s licenses and EU Digital Identity wallets that live on browsers and devices suggests some interesting conversations when it comes to agentic AI.

To make sure we’re all on the same page, here are a few quick and simplified definitions for the entities involved in identity wallet issuance and verification. The concepts are similar to RP and IdP. (A toy code sketch of how the roles interact follows the list.)

  • Issuer: Provides credentials (e.g., a verifiable credential) to the holder.
  • Holder: Stores and manages credentials, presenting them to verifiers when needed.
  • Verifier: Validates the presented credentials.
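To see how the roles fit together, here’s a deliberately toy roundtrip. Real systems use public-key signatures and standardized credential formats; this sketch substitutes an HMAC so it runs self-contained.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # stands in for the issuer's signing key

def issue(claims: dict) -> dict:
    """Issuer: sign a credential and hand it to the holder."""
    payload = json.dumps(claims, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def present(credential: dict) -> dict:
    """Holder: present the credential (real wallets can disclose selectively)."""
    return credential

def verify(credential: dict) -> bool:
    """Verifier: check the issuer's signature over the claims."""
    payload = json.dumps(credential["claims"], sort_keys=True)
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

vc = issue({"subject": "alice", "over_18": True})
print(verify(present(vc)))  # True
```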

Agentic AI and VCs

This is a beautiful system when it applies to humans. When applied to agentic AI, however, I have questions:

  • Agent as Holder: If an AI agent acts as a credential holder, how does it securely store, manage, and present those credentials? How do we prevent misuse if the agent is compromised? Should an AI agent ever be a credential holder at all?
  • Delegation: Can an agent act as a delegated representative of the user, and how do we enforce boundaries? For example, could an agent be issued temporary credentials with limited scope and validity (see the sketch after this list)? Probably, though some systems might not allow agents for various security and privacy reasons.
  • Selective Disclosure: Identity wallet models often emphasize privacy through selective disclosure, such as revealing only a user’s age without exposing their birthdate. How might agents balance this with their need to automate tasks efficiently? An agent here is like a self-driving car: not just executing instructions, but managing those decisions itself.
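On the delegation bullet, here’s one hypothetical shape for a short-lived, narrowly scoped credential issued to an agent. None of these field names come from a published VC profile; they’re only meant to show how scope and validity limits could be baked into the credential itself.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical delegated-credential claims; field names are illustrative.
delegated_credential = {
    "holder": "agent:shopping-assistant",
    "delegated_by": "user:alice",
    "allowed_actions": ["purchase"],
    "spend_limit_usd": 50,
    "valid_from": now.isoformat(),
    "valid_until": (now + timedelta(hours=2)).isoformat(),  # short-lived on purpose
}

def still_valid(cred: dict, at: datetime) -> bool:
    """Check the credential's validity window at a given moment."""
    start = datetime.fromisoformat(cred["valid_from"])
    end = datetime.fromisoformat(cred["valid_until"])
    return start <= at <= end

print(still_valid(delegated_credential, datetime.now(timezone.utc)))  # True
```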

The wallet model seems to offer more flexibility for agentic AI than “traditional” federation models, but it introduces a new layer of complexity. It has to ensure the AI agent’s autonomy doesn’t come at the cost of security or user control.

Glimmers of Direction

It’s no fun asking all these questions without having associated answers. Regardless, I suspect work is underway (or will be soon) in a few areas:

  • Delegated Trust Models: Expanding on existing delegation mechanisms to allow users to grant agents fine-grained, revocable permissions.
  • AI-Aware Standards: Developing new identity standards that explicitly account for agentic AI, whether by enhancing federation protocols or building on the identity wallet paradigm.
  • Contextual Authentication: Leveraging contextual signals—such as behavioral patterns, location, or task context—to authenticate agents dynamically (a toy check is sketched after this list).
  • Reputational Systems: Having people and systems “vote” on whether something is a person or a bot. I cringe a bit about the potential liability involved here, but the idea is definitely receiving some attention.
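For the contextual-authentication idea, the gate might look something like this hypothetical check, where each signal is computed elsewhere and the agent proceeds only if they all agree:

```python
def contextual_check(signals: dict) -> bool:
    """Hypothetical contextual gate for an agent action.

    Each signal is a boolean produced by some other system: a match
    against past behavior, an expected network location, a declared
    task that covers the requested action, and so on.
    """
    required = ("matches_behavior_pattern", "expected_location", "task_in_scope")
    return all(signals.get(key, False) for key in required)

print(contextual_check({
    "matches_behavior_pattern": True,
    "expected_location": True,
    "task_in_scope": True,
}))  # True: all signals agree, so the action proceeds
```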

Ultimately, the questions of “if,” “how,” and “when” agentic AI and authentication will align are as much about societal and regulatory readiness as they are about technical feasibility. The opportunities are immense, but so are the challenges.

The identity industry has spent decades building systems for human authentication. Agentic AI changes the game. If we don’t rethink our identity models now, we’ll be retrofitting yesterday’s solutions onto tomorrow’s problems—and I’d rather avoid that headache. If you’re working in this space, let’s talk at IIW, EIC, or Identiverse!

Want to see how I tackle real-world challenges in digital identity and standards development? Check out my mini-case studies—short, practical lessons from projects where strategy meets execution.

If something sparks an idea for your team, let’s chat. I’d love to hear what you’re working on.

🚀 Get new posts straight to your inbox: subscribe here.

Heather Flanagan

Principal, Spherical Cow Consulting | Founder, The Writer's Comfort Zone | Translator of Geek to Human
