Roads, Robots, and Responsibility: Why Agentic AI Needs Identity Infrastructure


“We don’t spend much time thinking about the roads we drive on—until one cracks, collapses, or dumps us somewhere we didn’t mean to be.”

Identity in the age of agentic AI? Same deal. It’s infrastructure. And just like a good road system, it needs to be engineered with care, built on solid standards, and ready for traffic we can’t even imagine yet.

Right now, autonomous agents are already taking actions on behalf of people and businesses—booking meetings, writing and summarizing emails, pushing code, moving money. Which means we should probably stop and ask: how are those identity and access decisions getting made? Are they secure? Reviewed? Built to best practices? Or are we flooring it across an uninspected bridge, hoping the potholes aren’t too deep?

The protocols making this possible—things like the Model Context Protocol (MCP) and Google’s Agent2Agent (A2A)—are still wet cement. If we want to go from today’s cow paths (cow poop included) to tomorrow’s superhighways, we can’t just slap on more lanes later. We need a strong identity layer poured in from the start.

This post is based on a keynote I gave recently at a large corporate event, where the audience was asking the right questions. If you’re building or maintaining systems that will eventually include autonomous agents, or you’re already there, this is for you.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

What I mean by identity, identity infrastructure, and agentic AI

“Identity” and “identity infrastructure” can mean different things depending on who you ask. (Get a hundred IAM professionals in a room and you’ll get a thousand definitions.) Since this is my blog post, here’s how I’m using the terms:

  • Identity – a persistent, verifiable representation of an entity—human or non-human—that other systems can use to decide what it can do, when, for what purpose, and under what conditions.
  • Identity infrastructure – the shared, stable, and standards-based systems, protocols, and governance that make those identities usable across teams, organizations, and technologies, securely, interoperably, and at scale.
  • Agentic AI – borrowing NVIDIA’s phrasing, an AI system (often powered by large language models) with sophisticated reasoning and iterative planning that can autonomously solve complex, multi-step problems. The key word here is autonomous. Generative AI creates content; agentic AI takes action.

Without grounding in these definitions, it’s easy to talk past each other. With them, we can focus on the real issue: building identity infrastructure that works across both human and non-human actors, especially when those non-humans are making decisions at machine speed.

AI’s upside is real, but it’s missing a foundation

When most people talk about AI, we talk about the upside:

  • Faster iteration cycles
  • Smart automation
  • Real productivity gains
  • Code generation
  • Helpful chatbots that can field questions at scale

GitHub’s Octoverse report showed a 59% surge in contributions to generative AI projects and a 98% increase in the number of projects overall. Many contributions came from India, Germany, Japan, and Singapore. Interestingly, they also reported that AI hasn’t flooded open source with low-quality junk—if anything, it’s drawing more people into development. (I’m not sure I believe their assertion about the junk. That doesn’t match what I’m hearing anecdotally, but then again, that’s why there are actual studies to balance perception with facts.)

That’s all impressive, even when the results aren’t perfect. These tools are still young, evolving fast, and unlocking new creativity across the stack.

But there’s a missing question in all this excitement: who is acting? On whose behalf? And with what authority?

That’s the identity layer. Without it, all this innovation becomes harder to govern, harder to scale, and harder to trust.

Agents are already in your systems

This isn’t hypothetical. Agents are in your tools, updating dependencies, answering tickets, creating calendar invites, summarizing documents, pushing code, and talking to customers.

Microsoft’s 2025 Work Trend Index reports that global leaders rank customer service, marketing, and product development as the top three areas for accelerated AI investment in the next 12–18 months. Seventy-three percent of leading-edge companies will use AI for marketing. Sixty-six percent for customer success. Even internal communications sees 68% adoption.

That’s a lot of automation acting in our name. Without clear identity controls, there is also a lot of potential for AI “marketing fails” or, worse, high-stakes errors.

A few examples:

  • An AI coding assistant that wiped out a startup’s production database
  • AI-powered recruiting software that rejected qualified applicants based on age and gender, resulting in lawsuits

These tools are powerful and fast, but oversight around identity and accountability hasn’t kept up.

Identity isn’t just a login box

Identity is infrastructure. And infrastructure is more than a username and password. When humans act, we typically have an audit trail: who did what, when, and why. We rely on login sessions, logs, access controls, and behavioral patterns.

But when AI agents act, especially ones with high autonomy, we need something more durable:

  • Identity systems that recognize both human and non-human actors
  • Delegation models that can express “who can do what, for whom, under what conditions”
  • Clear provenance: who authorized the action, and is it appropriate in this context?
  • Verifiability—so we can prove what happened, after the fact

Without that infrastructure, the entire agentic AI ecosystem risks becoming a black box. And for security teams, DevOps leads, and auditors, that’s a non-starter.
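To make the delegation idea above concrete, here is a minimal sketch of a record expressing “who can do what, for whom, under what conditions.” Every name and field here is hypothetical, invented for illustration; no existing agent protocol defines this structure.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class DelegationGrant:
    """A record of delegated authority: who may do what, for whom, until when."""
    delegator: str        # the human or service granting authority
    agent: str            # the non-human actor receiving it
    actions: frozenset    # the specific operations permitted
    purpose: str          # why the authority was granted
    expires_at: datetime  # delegation should never be open-ended

    def permits(self, agent: str, action: str, at: datetime) -> bool:
        """Check that this exact agent is doing a permitted action before expiry."""
        return agent == self.agent and action in self.actions and at < self.expires_at

now = datetime.now(timezone.utc)
grant = DelegationGrant(
    delegator="heather@example.com",
    agent="calendar-agent-01",
    actions=frozenset({"calendar:create_event"}),
    purpose="book the weekly team sync",
    expires_at=now + timedelta(hours=1),
)

assert grant.permits("calendar-agent-01", "calendar:create_event", now)
assert not grant.permits("calendar-agent-01", "email:send", now)       # out of scope
assert not grant.permits("rogue-agent", "calendar:create_event", now)  # wrong actor
```

The point isn’t the specific fields; it’s that authorization, scope, purpose, and expiry are explicit and checkable, rather than implied by whatever credentials the agent happens to hold.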

The right questions lead to better systems

If an agent makes a change, you should be able to answer: Was it authorized? Who delegated the authority? What policy applied?

Microsoft’s report hints at this by asking leaders: how many agents are needed for which roles and tasks, and how many humans to guide them? Those are good but very surface-level questions.

We can push further:

  • Do you have enough data to clearly scope the role for an AI?
  • Can you give it only the access it needs, when it needs it, for the specific task at hand?

These questions aren’t just risk management. They’re a chance to improve system hygiene and clarity across the board.
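One way to picture the “only the access it needs, when it needs it” question: mint a short-lived credential scoped to the task, never the operator’s full permission set. This is a hypothetical sketch, with made-up permission names, not any particular product’s API.

```python
from datetime import datetime, timedelta, timezone

# The full set of permissions the human operator holds (illustrative names).
OPERATOR_PERMISSIONS = {"repo:read", "repo:write", "tickets:read", "tickets:reply"}

def mint_task_credential(task_actions: set, ttl_minutes: int = 15) -> dict:
    """Issue a credential narrowed to the task at hand and time-boxed by default."""
    granted = task_actions & OPERATOR_PERMISSIONS  # the agent can't exceed its delegator
    if not granted:
        raise PermissionError("no overlap between requested and held permissions")
    return {
        "scope": sorted(granted),
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }

cred = mint_task_credential({"tickets:read", "tickets:reply", "repo:admin"})
# "repo:admin" is dropped: the operator never held it, so the agent can't either.
assert cred["scope"] == ["tickets:read", "tickets:reply"]
```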

Protocols are evolving, but identity hasn’t caught up

You might be thinking: okay, so what’s out there to support this?

Protocols like the Model Context Protocol (MCP) and Agent2Agent (A2A) are early candidates. They enable agents to communicate and coordinate in powerful ways. But they were designed to simplify how agents communicate with each other; they weren’t designed with identity in mind.

Even folks who helped shape OAuth are wrestling with how traditional delegation models fit—or don’t fit—into this space. The communication protocols aren’t broken; they’re just early. Identity hasn’t caught up yet.

And if we don’t make faster progress on these issues, we’ll be forever retrofitting trust into systems that were never built to handle it.

Why this can’t be proprietary

You might be tempted to solve this in-house. Build your own delegation model, your own trust chain, your own method for agentic AI authorization. This scenario freaks me out. If every organization invents its own approach to agent identity, we’ll end up right back where we started, in a world of fragile integrations, inconsistent assumptions, and big gaps in accountability.

We’ve ALL seen this before, and the result is always the same:

  • Fragile integrations
  • Misaligned assumptions between systems
  • Gaps in visibility and accountability
  • Security holes you can drive a nation-state through

That’s why open standards matter, not as a checkbox, but as the only viable way to scale trust across systems, companies, and industries.

And to be clear, “open” doesn’t just mean “you can download the spec.” It means:

  • Shared governance
  • Transparent development
  • Real-world applicability
  • Participation from a broad mix of stakeholders, including security, product, legal, and compliance

This isn’t easy work. But it’s the work that makes the rest possible. And when it works, we get something better than “compliant.” We get trustworthy infrastructure that scales.

What to do now—before the collapse

So where does that leave us?

If you’re building agentic AI capabilities into your platform, or even just experimenting with automation, you’re already laying infrastructure. The question is whether that infrastructure will support accountability, or collapse under the weight of delegation you can’t verify. Either we bolt identity onto agentic systems after the fact, or we treat identity like the infrastructure it is, and build it into the foundation.

You don’t need to have all the answers today. But you do need to start asking better questions:

  • Is identity part of the design, or bolted on later?
  • Are we modeling trust relationships clearly, or making assumptions?
  • Will our logs stand up in an audit, or are we relying on magic?

Start there.
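On the audit question above, a sketch of what “logs that stand up” might mean: each record answers who acted, under whose authority, and per what policy, with a digest that makes tampering detectable when entries are chained. All names here are hypothetical, for illustration only.

```python
import json
from datetime import datetime, timezone
from hashlib import sha256

def audit_record(agent: str, action: str, delegator: str, policy_id: str) -> dict:
    """An append-only log entry tying an agent's action back to delegated authority."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "authorized_by": delegator,
        "policy": policy_id,
    }
    # Hash the entry's content so after-the-fact edits are detectable.
    entry["digest"] = sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

rec = audit_record("deploy-agent-07", "repo:merge_pr", "ops-lead@example.com", "least-priv-v2")
assert rec["authorized_by"] == "ops-lead@example.com"
assert len(rec["digest"]) == 64  # hex-encoded SHA-256
```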

And if you’re in a position to influence the broader direction of the industry, join a standards group. Challenge assumptions in product reviews. Push for interoperability, not lock-in. Make identity part of the foundation, not just a feature.

We don’t have to wait for things to fall apart. We can build roads we actually want to drive on.

📩 If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

Roads as a Metaphor

[00:00:29] Welcome back to A Digital Identity Digest. I’m Heather Flanagan, and today we’re going to talk about roads. Yes, roads. They’re an amazing metaphor, and I’m just going to drive this one all night long.

[00:00:42] We usually don’t think about the roads we drive on—until one cracks, collapses, or leaves us stranded somewhere we never meant to be.

[00:00:49] Identity in the age of agentic AI works the same way. It is infrastructure. And like any good road system, it must be:

  • Engineered with care
  • Built on solid standards
  • Ready for traffic we can’t even imagine yet

The Rise of Autonomous Agents

[00:01:04] Autonomous agents are already taking actions on behalf of people and businesses. They’re:

  • Booking meetings
  • Writing and summarizing emails
  • Pushing code
  • Moving money

[00:01:14] Which raises the key question: how are identity and access management decisions being made for those actions?

Are they secure? Reviewed? Designed according to best practices? Or are we flooring it across an uninspected bridge, hoping the potholes aren’t too deep?


Protocols in Wet Cement

[00:01:34] Many of the protocols enabling this—such as the Model Context Protocol (MCP) and Google’s Agent2Agent (A2A)—are still wet cement.

[00:01:44] If we want to move from today’s cow paths (cow poop included) to tomorrow’s superhighways, we can’t just slap on more lanes later. We need a strong identity layer poured in from the start.


Defining Identity and Agentic AI

[00:02:19] Let’s pause and define a few key terms. Because “identity” can mean wildly different things depending on who you ask.

  • Identity → A persistent, verifiable representation of an entity (person or machine) that other systems use to decide what it can do, when, and under what conditions.
  • Identity Infrastructure → Shared, stable, standards-based systems and governance that make identity portable, interoperable, and reliable at scale.
  • Agentic AI → Borrowing from NVIDIA: AI, usually powered by large language models, that doesn’t just generate content but plans and reasons through complex multi-step problems on its own.

[00:03:46] Generative AI writes things.
[00:03:52] Agentic AI acts on things.

And that difference matters.


Productivity Gains vs. Identity Risks

[00:04:11] Conversations around agentic AI often emphasize upsides:

  • Faster iteration cycles
  • Smarter automation
  • Productivity gains
  • Code generation
  • Scalable chatbots

[00:04:25] GitHub’s Octoverse report shows:

  • 59% surge in contributions to generative AI projects
  • 98% increase in overall projects
  • Growth driven by developers in India, Germany, Japan, Singapore, and Latin America

[00:05:15] But what’s often missing is the question: who or what is acting on whose behalf, and with what authority? Without identity, this innovation becomes harder to govern, scale, and trust.


Real-World Consequences

[00:06:19] Consider these examples:

  • An AI coding assistant that wiped out a startup’s production database.
  • AI-powered recruiting software that rejected qualified applicants based on age and gender, resulting in lawsuits.

[00:06:47] These tools are fast and powerful—but oversight around identity and accountability has not caught up.


Why Identity Infrastructure Matters

[00:06:59] Infrastructure is more than usernames and passwords. When humans act, we leave audit trails.

[00:07:15] But when AI agents act at machine speed, we need more durable systems:

  • Identity recognition for both human and non-human actors
  • Delegation models clarifying who can do what for whom
  • Provenance signals to confirm authorization
  • Verifiability to prove what happened

[00:07:42] Without this infrastructure, agentic AI becomes a black box—and that’s a nonstarter for security teams, DevOps leads, and auditors.


Open Standards, Not DIY

[00:09:34] You may be tempted to build your own delegation models and trust chains.

[00:09:42] Please don’t.

Doing so leads to:

  • Fragile integrations
  • Misaligned assumptions
  • Gaps in visibility and accountability
  • Security holes you could drive a nation-state through

[00:09:56] That’s why open standards matter—not as a compliance checkbox, but as the only viable way to create scalable trust across companies and industries.


Building Roads That Last

[00:10:27] If you’re building agentic AI capabilities, you’re already laying down infrastructure. The question is:

  • Will your road support accountability?
  • Or will it collapse under unverifiable delegation?

[00:10:49] Ask yourself:

  • Is identity part of the design—or bolted on later?
  • Are trust relationships clearly modeled—or just assumed?
  • Will logs stand up in an audit—or are you relying on magic?

[00:11:03] If you want to shape the standards of the future, join standards groups, challenge assumptions in product reviews, and push for interoperability—not lock-in.

[00:11:21] We don’t need to wait for the bridge to collapse. We can build roads we actually want to drive on.


Closing Thoughts

[00:11:28] Thanks for listening to A Digital Identity Digest. If this sparked questions or gave you something to debate, share it with your colleagues—the more voices in this conversation, the stronger our identity infrastructure can be.

[00:11:46] If you enjoyed this episode:

  • Share it with a friend or colleague
  • Connect with me on LinkedIn
  • Subscribe and leave a rating or review on Apple Podcasts or wherever you listen
  • Read the full post at sphericalcowconsulting.com

Stay curious, stay engaged, and let’s build identity systems that last.

Heather Flanagan

Principal, Spherical Cow Consulting
Founder, The Writer’s Comfort Zone
Translator of Geek to Human
