Agentic AI in the Open Standards Community: Standards Work or Just Hype?



If you want to follow what’s happening in AI, it helps to know where the conversations are happening.

That doesn’t just mean the headlines and white papers; it means the standards bodies, working groups, and protocol discussions shaping the infrastructure AI systems will have to live with (and live inside). Some of these efforts put “AI” right in the name. Others are quietly solving problems that have been around for a while, which AI has now made urgent.

At IETF 123 in Madrid, AI topics were everywhere, sometimes explicitly, sometimes not. Just like every other event I’ve been to this year, it’s clear that AI is no longer a side topic. But it’s also not one big monolith. A working group with “AI” in the title might be useful, or it might be entirely orthogonal to the problems you’re facing. And meanwhile, some of the most critical technical work is happening in groups that never mention AI at all.

This post is a snapshot of both: a look at where the “AI conversations” are happening in the standards world, and where the deeper technical groundwork is being laid, whether or not anyone’s calling it AI.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Where AI is the elephant in the room

Some of the most relevant work wasn’t framed as AI-specific at all… at least, not when it started.

Delegation chaining, for example, is a topic that’s been simmering in OAuth land for a while. The identity chaining draft defines a way to preserve identity and authorization information across trust domains. It’s useful for distributed architectures in general, and it’s now getting far more attention thanks to agentic AI models that need to act across domains on behalf of users, and sometimes on behalf of other agents.

If you’re designing systems that involve third-party APIs, partner orchestration, or AI-driven workflows, this isn’t theoretical. It’s the difference between “this agent can complete a task” and “this agent just leaked PII across environments you can’t audit.” (This is often what’s happening right now; it’s a terrifying prospect, but I digress.)
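To make that concrete: the identity chaining draft builds on OAuth 2.0 Token Exchange (RFC 8693), where a token from one trust domain is presented to an authorization server in exchange for a token usable in another. Here’s a minimal sketch of the request parameters involved; the token value and audience URL are hypothetical placeholders, not anything from the draft itself.

```python
# Sketch: assembling an OAuth 2.0 Token Exchange request (RFC 8693), the
# mechanism the identity chaining draft builds on. Endpoint, token, and
# audience values here are hypothetical placeholders.

def build_token_exchange_request(subject_token: str, audience: str) -> dict:
    """Assemble the form parameters for exchanging a token received in
    one trust domain for a token usable in another."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # The token we already hold, carrying the user's identity.
        "subject_token": subject_token,
        # Declares what kind of token we are presenting.
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # The downstream service (in the other trust domain) the new token is for.
        "audience": audience,
    }

params = build_token_exchange_request("eyJ...user-token", "https://api.partner.example")
print(params["grant_type"])
```

The authorization server on the receiving end decides, based on policy, whether to mint a new token for that audience, which is exactly where the cross-domain trust questions live.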

Same story for WIMSE (Workload Identity in Multisystem Environments). AI doesn’t appear in the charter, but the group is wrestling with exactly the kinds of problems that show up when AI agents act like software workloads, make API calls, and need identity and trust across services.

These efforts weren’t built for AI, but they are shaping the environment in which AI agents will operate.

Where AI is the headline

There’s also a growing set of efforts waving the AI banner from the start. Here are a few places to watch if you want to keep a product roadmap aligned with emerging standards and activities.

AI Preferences (IETF AIPREF)

This working group is focused on standardizing how people (and systems acting on their behalf) express preferences about how their data is used in AI systems. Think training, inference, and deployment. Their charter is about giving users the power to say “yes,” “no,” or “only under these conditions.”

Why this matters: Consent banners and privacy policies are blunt instruments. If your app collects user content, you might soon need a finer-grained way to handle “don’t train on this” or “only use for personalization.” Product teams working on personalization, LLM features, or customer data ingestion should keep this on their radar.
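For a feel of what “finer-grained” could look like, here’s a toy parser for a hypothetical preference string. To be clear: the syntax and the category names (“train-ai”, “personalize”) are my illustration of the idea, not the AIPREF vocabulary itself.

```python
# Illustrative only: a tiny parser for a hypothetical usage-preference
# string of the general shape AIPREF-style mechanisms might carry. The
# category names and syntax are assumptions, not the working group's
# actual vocabulary.

def parse_usage_prefs(header_value: str) -> dict:
    """Turn 'train-ai=n, personalize=y' into {'train-ai': False, ...}."""
    prefs = {}
    for item in header_value.split(","):
        key, _, val = item.strip().partition("=")
        prefs[key] = (val == "y")
    return prefs

print(parse_usage_prefs("train-ai=n, personalize=y"))
# {'train-ai': False, 'personalize': True}
```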

Web Bot Authentication (BoF)

Born out of a hallway conversation, the Web Bot Authentication group is asking what it means to authenticate bots—especially AI-powered ones—when they interact with websites meant for humans.

Why this matters: If your web properties are being used (or abused) by AI scrapers, this work could define how to tell the difference between legitimate agents and free-riders. This could impact content licensing models, rate-limiting strategies, and even customer support bots.
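The shape of the problem looks something like the sketch below: a bot signs its requests so the site can verify who sent them. One hedge up front: real proposals in this space lean on asymmetric HTTP Message Signatures (RFC 9421), so a site can check a bot’s published public key; this toy version uses a shared-secret HMAC purely to keep the example short, and every name in it is hypothetical.

```python
import hashlib
import hmac

# Sketch of the idea behind bot authentication: the bot signs request
# metadata, the site verifies it. Real proposals use asymmetric HTTP
# Message Signatures (RFC 9421); this shared-secret HMAC version is a
# simplification, and all identifiers are hypothetical.

SHARED_SECRET = b"demo-secret"  # stand-in for a registered bot credential

def sign_request(method: str, path: str, bot_id: str) -> str:
    """Sign the request metadata a site would want to verify."""
    message = f"{method} {path} {bot_id}".encode()
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, bot_id: str, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_request(method, path, bot_id)
    return hmac.compare_digest(expected, signature)

sig = sign_request("GET", "/articles", "crawler.example")
print(verify_request("GET", "/articles", "crawler.example", sig))  # True
```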

AI Agent Protocol (side meeting)

This one hasn’t formalized into a working group yet, but a side meeting at IETF 123 kicked off discussions about protocols for AI agents to act autonomously online by invoking APIs, collaborating with each other, making decisions, etc.

Why this matters: If you’re building or integrating with AI agents—anything from internal copilots to customer-facing assistants—expect questions soon about how they authenticate, how their actions are logged, and what delegation looks like at runtime.
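As a thought experiment, an auditable agent action might be recorded something like this. The field names are my assumptions for illustration; nothing here has been standardized by the side meeting or anyone else.

```python
from dataclasses import dataclass, field
import json
import time

# Sketch: one possible shape for an agent-action audit record, capturing
# who the agent acted for (the delegation chain) and what it did. Field
# names are illustrative assumptions, not a defined schema.

@dataclass
class AgentActionRecord:
    agent_id: str
    acting_for: list  # delegation chain, outermost principal first
    action: str
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        return json.dumps({
            "agent_id": self.agent_id,
            "acting_for": self.acting_for,
            "action": self.action,
            "timestamp": self.timestamp,
        })

record = AgentActionRecord(
    agent_id="assistant-7",
    acting_for=["user:alice", "agent:travel-planner"],
    action="calendar.create_event",
)
print(record.to_json())
```

The point of a structure like this is that “what delegation looks like at runtime” becomes a question you can answer from logs, not from guesswork.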

(Also, please don’t schedule the next AI Agent meeting opposite WIMSE again. Some of us have to clone ourselves as-is.)

Beyond the IETF

Other standards bodies are also entering the fray. Here’s a quick tour of where else things are heating up:

  • W3C AI Agent Protocol Community Group (CG) is developing protocols for AI agents to find each other, identify themselves, and collaborate across the web. It’s early days, but think of it as DNS and HTTP for agentic AI.
  • W3C AI KR CG is focused on knowledge representation, i.e., how to structure information so AI systems (and people) can reason over it consistently. It is relevant to anyone dealing with search, ontologies, or explainability.
  • OpenID Foundation AI Identity Management CG is mapping out how identity systems need to adapt to agentic AI. It’s not creating protocols (yet), but its members are watching government regulation closely.

Signals to watch

Standards are slow… until they’re not. You don’t need to read every draft, but here are some signs that these efforts are going mainstream:

  • MCP (Model Context Protocol), which lets AI agents act autonomously by invoking APIs or services, is not a standard, but it’s being adopted or piloted by major platforms like cloud providers and browsers. To function securely, it depends on underlying standards for identity chaining, authentication, and authorization—things like OAuth, delegation models, and token handling.
  • Vendor AI agent SDKs start referencing delegation models or bot authentication best practices.
  • Your compliance team starts asking about AI consent and model provenance.

When that happens, product managers will need to have answers or at least know where to look for them.
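On the delegation front, one concrete artifact worth knowing already exists: RFC 8693 defines an “act” (actor) claim that nests, recording each party a token has been delegated through. Here’s a sketch of walking that chain; the token payload is hand-written for illustration, not a real token.

```python
# Sketch: walking the nested "act" (actor) claim from RFC 8693, which
# records each party a token has been delegated through. The claims
# payload below is a hand-written illustration.

def delegation_chain(claims: dict) -> list:
    """Return the actor chain, most recent actor first."""
    chain = []
    actor = claims.get("act")
    while actor:
        chain.append(actor.get("sub"))
        actor = actor.get("act")
    return chain

token_claims = {
    "sub": "user:alice",
    "act": {"sub": "agent:scheduler", "act": {"sub": "agent:booking"}},
}
print(delegation_chain(token_claims))  # ['agent:scheduler', 'agent:booking']
```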

If you’re building anything touched by AI

This is just one slice of what’s happening in the standards space. No one—myself included—can keep up with it all. And if I try to AI-clone myself, who knows what hallucinations might creep in! But hopefully there’s enough cross-pollination between these (and other) efforts that we won’t be reinventing wheels or missing blind spots entirely.

If you’re an architect, engineer, or product leader, now’s a good time to:

  • Start mapping where AI agents (or their proxies) may interact with your system
  • Review your assumptions about trust, delegation, and human intent
  • Assign someone to monitor the relevant working groups or participate, if you can

Standards work isn’t glamorous, but it’s how the internet keeps functioning. And right now, the decisions being made will shape how agentic AI interacts with everything from your login flows to your support tools.

With luck—and a little planning—the next wave of automation won’t break the web. Or your roadmap.

📩 If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]

Transcript

00:00:26 Welcome back to the Digital Identity Digest.

Today we’re diving into the latest round of AI buzz — but from the standards world. Specifically, we’ll unpack what happened at IETF 123 in Madrid and how it connects to a much bigger, messier, and louder story: the infrastructure needed to support AI.

00:00:51 If you feel like there’s way too much going on in AI and standards right now to keep track of, you’re absolutely correct.

00:01:00 One of my goals in this episode is to give you a map — not of every working group or proposal, but of the most relevant conversations shaping the AI systems your teams will build on, run into, or be regulated by.


Agentic AI and Why It Matters


00:01:22 Let’s talk about agentic AI, because it’s especially interesting.

00:01:28 The term refers to AI systems that can take autonomous action.

  • Large language models integrated into agents that can invoke APIs
  • Systems that make decisions and complete multi-step tasks
  • Agents that interact across systems — for a user, or even for another agent

00:01:53 This is a big shift. Like most computing shifts, it won’t work unless the plumbing underneath is solid — identity delegation, authentication, policy enforcement.

00:02:18 So where is that plumbing discussed? Some of it happens in AI-specific groups, but much of the critical work is in mature standards groups that don’t even mention AI in their charters.


Delegation Chaining


00:02:51 One example is delegation chaining in OAuth/authorization.

00:03:01 This draft defines a way to preserve identity and authorization across multiple trust domains.

Why it matters:

  • AI agents often act on behalf of users across multiple systems
  • Without it, product teams end up “duct taping” credentials to every interaction — not scalable
  • A scheduling agent booking travel crosses multiple trust boundaries

00:03:44 This work began before AI hype took off — but agentic AI makes it urgent.


Workload Identity in Multisystem Environments (WIMSE)


00:04:05 Another crucial effort is WIMSE, short for Workload Identity in Multisystem Environments.

00:04:21 It tackles how services, bots, APIs, and AI agents assert identity across environments.

  • Relevant for agentic AI because these identities aren’t tied to human sessions
  • Helps establish runtime identity for autonomous systems

00:04:52 Takeaway: If it doesn’t say AI, it can still be vital to AI infrastructure.


AI-Focused Standards Groups at IETF


00:05:01 Of course, there are groups with AI in their name and charter.

AI Preferences Working Group (AIPREF)

00:05:08 This group is creating a standard way for users (or systems) to express data-use preferences for AI:

  • Training
  • Inference
  • Deployment

The aim is to move beyond vague privacy policies toward technical mechanisms for enforcing user preferences.

Web Bot Authentication (BoF)

00:05:58 A discussion about how bots — especially AI-powered ones — should identify themselves when accessing human-oriented websites.

Questions under debate:

  • Are bots allowed?
  • How should they authenticate?
  • How do we distinguish helpful agents from malicious scrapers?
  • Who’s accountable when things go wrong?

AI Agent Protocol (Side Meeting)

00:07:02 This informal discussion explored whether the IETF should standardize protocols for AI agents to discover one another, invoke services, and communicate.

Connections to existing work:

  • MCP (Model Context Protocol) is emerging in pilots
  • Secure use depends on OAuth delegation chaining and other identity models

Beyond IETF: W3C and OpenID Foundation


00:08:14 Standards work isn’t just at the IETF.

W3C Community Groups:

  • AI Agent Protocol CG – protocols for how agents identify, collaborate, and operate on the web
  • AI Knowledge Representation CG – structuring domain knowledge so AI systems can reason and explain themselves

OpenID Foundation:

  • AI Identity Management CG – mapping use cases, identifying gaps, tracking regulations
  • Not building protocols, but providing a regulatory and technical landscape view

What Product Teams Should Do


00:09:27 For product managers and executives, here are the practical takeaways:

  • Understand where delegation fits in your systems
  • Define identity for non-human actors — avoid relying on user credentials
  • Implement technical enforcement of consent for AI agent actions
  • Track compliance triggers early to avoid future architectural rework

00:10:44 Watch for signals:

  • MCP or delegation models adopted by major vendors
  • New authentication guidance for bots and agents
  • Increased compliance chatter about AI-related access

Final Thoughts


00:11:10 This is just one slice of a fast-moving standards space.

If the right people connect across groups, we can avoid duplication, fill gaps, and lay the groundwork for agentic AI that’s safe, scalable, and standards-aligned.

00:11:39 Keep your eye on the standards — even if your platform isn’t “AI-first,” its infrastructure is being shaped right now.

00:12:01 Thanks for listening. If you found this helpful, share it, connect with me on LinkedIn, and subscribe for more conversations that matter.

Heather Flanagan

Principal, Spherical Cow Consulting
Founder, The Writer's Comfort Zone
Translator of Geek to Human
