
The MCP Bandwagon

A process idea for moving MCP from a de facto standard to a formally reviewed one.


I’ve been thinking a bit about the whole Model Context Protocol (MCP) thing.

According to its documentation, MCP is ‘an open protocol that standardizes how applications provide context to LLMs.’ Given the growing dependence on large language models (LLMs), this is a big deal (remembering that all LLMs are a type of AI, but not all AIs depend on LLMs). If we’re moving toward a world where AIs are expected to do All The Things, interfacing with our applications and services, then having a universal adapter that lets AIs talk to everything is undeniably powerful.
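To make "providing context" a little more concrete: MCP messages are built on JSON-RPC 2.0, and the published spec defines methods such as `tools/list` for a client to discover what a server offers. The sketch below is illustrative only; the method name comes from the public spec, but the payload shapes are simplified and the `get_weather` tool is entirely hypothetical.

```python
import json

# A minimal, illustrative MCP-style exchange. MCP uses JSON-RPC 2.0;
# "tools/list" is a method from the public spec, but the payloads here
# are a simplified sketch, not a definitive implementation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server might answer with the tools it exposes; the "get_weather"
# tool shown here is hypothetical.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

print(json.dumps(request))
print(json.dumps(response, indent=2))
```

Invoking a tool follows the same request/response framing, which is what makes the "universal adapter" framing plausible: one message shape, many services.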

But, me being me, as soon as I saw someone describe MCP as an “open standard,” I immediately had questions.

A Digital Identity Digest
The MCP Bandwagon
https://episodes.castos.com/681522ece1a7b2-97033376/2064463/c1e-9xn05hdn4q2cw2g00-jpd5mrpws2nm-il0fva.mp3

You can subscribe and listen to the podcast on Apple Podcasts, or wherever you listen to podcasts.

And be sure to leave me a rating and review!

I’m not a developer; I can’t speak to MCP’s utility from a coding perspective. I’m not an architect, either; I can’t tell you where this fits best in your environment. But I am a standards person. So if you tell me something is an open standard, I’m going to want to know: who is standardizing it? What process are they following? Who’s doing the review? What are the intellectual property considerations?

What a standards organization does

Let’s back up for a moment. What does a standards development organization (SDO) actually do?

If you’ve never looked into it, you might joke that SDOs exist to slow things down under the weight of process. OK, fair. But is that always a bad thing?

That process guarantees broader review across multiple dimensions. For example, the IETF requires a security considerations section in every draft submitted for publication. It also mandates cross-area review, soliciting input from its different areas: Internet; Web and Internet Transport; Applications and Real Time; Routing; and Security.

Over at the W3C, every spec is subject to privacy, internationalization, accessibility, and architectural review, often more than once during its lifecycle. And all the formal SDOs I’ve worked with have some kind of Intellectual Property Rights (IPR) policy, which goes beyond an open-source license to include licensing terms for essential patent claims.

You have to start somewhere

To be clear, most SDOs didn’t emerge fully formed. They started with a handful of people who weren’t satisfied with the existing processes or the politics. That’s how the WHATWG began, opting for a living-document model and less procedural overhead than the W3C. (Though in my opinion, they eventually ended up with just as much process, just differently shaped.)

So maybe MCP is on its way to becoming its own kind of SDO, at least for this protocol. That’s fine. You have to start somewhere. But if that’s what’s happening, I don’t know where those conversations are taking place. Is there a governance plan? An IPR framework? A mailing list that isn’t just announcements?

So, back to MCP

Anthropic, the company behind Claude, announced MCP in late 2024 to address a real problem they’ve faced as an AI vendor: how to standardize context exchange between apps and LLMs, and how to do so openly and transparently. That’s great. I genuinely appreciate the transparency, and I understand the desire to move fast in an area evolving as rapidly as AI. But when people hear ‘open,’ they often assume peer-reviewed, widely adopted, and safely maintained. An open GitHub repo doesn’t guarantee any of that.

By publishing the spec openly and encouraging others to use it, they’re aiming for what’s often called a “de facto standard”: something widely adopted even in the absence of formal standardization. Go team!

But I’m still worried.

I like the structure of a formal review process. I also like being able to point to exactly who reviewed a spec for security, privacy, and accessibility, and knowing that those reviews weren’t optional. It’s also critical to know that if you implement something before those reviews are done, you’re doing so at your own risk.

Will adding a process slow things down? Yes, but I don’t think that’s a bad thing. It’s better to go slower than to trip over a hazard that could’ve been avoided with just a bit more thoughtfulness.

Just look at what’s happened in other areas of tech when we skipped due diligence: poor API scoping, security vulnerabilities, and consent models that collapsed under legal scrutiny. A good standards process helps us catch those kinds of mistakes before they happen. I’ve been in those three-hour meetings. It’s not glamorous. But you know what happens in those meetings? Someone catches the thing no one saw coming.

What happens if MCP becomes the default, without structure?

Let’s talk about the future we might be sliding into without realizing it.

If MCP gets widely adopted without formal structure—no shared governance, no multi-stakeholder review, no clear IPR policy—we don’t get a win for open standards. We get a de facto protocol that locks in early design assumptions, possibly shaped by a single vendor’s priorities, and leaves little room for dissenting use cases or evolving needs.

That might sound fine in the short term. After all, it’s solving a real problem! But over time, we run into real consequences.

What if MCP bakes in patterns that work well for consumer apps but break down in healthcare or financial services, where regulatory audits and fine-grained access controls are non-negotiable? Or, what if it assumes context is something you pull from a user, rather than something the user pushes or consents to, reinforcing an architecture that sidesteps agency and privacy?
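The pull-versus-push distinction can be made concrete with a small sketch. Nothing below comes from the MCP spec; the data, scope names, and functions are invented purely to illustrate the difference between an agent pulling whatever context it asks for and a user granting only a consented slice of it.

```python
# Hypothetical sketch (not MCP): contrasting "pull" context access
# with consent-gated, user-granted context.
user_context = {
    "location": "Portland",
    "calendar": ["dentist 3pm"],
    "health_records": ["..."],  # sensitive; should never leave by default
}

# Pull model: the agent receives whatever it asks for.
def pull_context(keys):
    return {k: user_context[k] for k in keys if k in user_context}

# Consent model: only scopes the user has explicitly granted are shared.
granted_scopes = {"location"}  # the user consented to location only

def push_context(keys):
    return {k: user_context[k] for k in keys if k in granted_scopes}

print(pull_context(["location", "health_records"]))  # everything requested
print(push_context(["location", "health_records"]))  # only what was granted
```

An architecture that bakes in the first function is very hard to retrofit into the second one later, which is exactly the kind of early design assumption a review process is meant to catch.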

What if an entire ecosystem of tooling gets built around early versions of the spec, and then a major vulnerability or design flaw gets discovered, but there’s no defined process or authority to fix it in a way that maintains trust?

If MCP becomes the default before it’s ready, we risk not just technical debt but governance debt, which is harder to see but far more expensive to repay.

Call to action

None of this means MCP is flawed or untrustworthy. It’s solving a real problem, and it’s doing it more openly than many. But openness isn’t the same as structure. So here’s my CTA: does anyone want to help this protocol grow up a bit?

Maybe that means finding a home for MCP within an existing SDO or spinning up something new and bespoke. Either way, the structure needs to happen.

I don’t have any contacts in the project, so this is mostly a wish on my part. But if you have any ideas (and if you agree this is a concern), let’s chat!

Transcript

[00:00:00]
Welcome to the Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting. I’m Heather Flanagan, and every week I break down key topics in the world of digital identity—from credentials and standards to browser behavior and policy shifts.

If you work with digital identity but don’t have the time to follow every new specification or hype cycle, you’re in the right place.


What is MCP, and Why Is It a Big Deal?

[00:00:26]
Let’s get into it. I’ve been thinking about the Model Context Protocol (MCP)—a trending topic in the tech world.

[00:00:38]
According to its documentation, MCP is an open protocol that standardizes how applications provide context to large language models (LLMs).

[00:00:48]
And with the rise of LLMs as a dominant type of AI, this protocol could prove essential.

[00:00:56]
If we’re truly heading into a world where AI is expected to do all the things, interfacing with our applications and services, then having a universal adapter like MCP is a pretty compelling idea.

[00:01:14]
So I get the excitement—really, I do.


But What Does “Open Standard” Really Mean?

[00:01:16]
Me being me, the moment I saw MCP described as an “open standard,” I had questions.

[00:01:26]
Now, I’m not a developer or a systems architect. But I am a standards person. And when I hear open standard, I ask: Who is standardizing it? What process are they following? Who’s doing the review? What are the intellectual property considerations?

[00:02:00]
Because open can mean very different things.

[00:02:06]
For example, publishing a spec in a public GitHub repo is one kind of open; a multi-stakeholder review process is quite another.

[00:02:20]
But a true open standard means much more. It implies shared governance, formal review for security, privacy, and accessibility, and a clear intellectual property framework.


Zooming Out: What Do SDOs Actually Do?

[00:02:44]
Before diving further into MCP, let’s look at Standards Development Organizations (SDOs).

[00:02:54]
If you’ve never worked within one, you might think SDOs are where good ideas go to die—buried under mountains of email and bureaucracy.

[00:03:03]
And yes, it can feel like that some days.

[00:03:08]
But that process exists for good reason.

[00:03:10]
Take the IETF (Internet Engineering Task Force), for example. Every draft must include a security considerations section, and every draft gets review from across the IETF’s areas before publication.

[00:03:43]
Or consider the W3C (World Wide Web Consortium). They require privacy, internationalization, accessibility, and architectural reviews, often more than once over a spec’s lifecycle.

[00:03:57]
These aren’t easy. But they force critical thinking, which ultimately strengthens the standard.


Why Governance Matters

[00:04:27]
When a standard is finally published, developers, vendors, and even regulators rely on it. That trust only exists because of rigorous review and intellectual property safeguards.

[00:04:45]
Because with foundational tech, you really don’t want legal issues popping up years later.

[00:04:57]
All SDOs start small. For instance, the WHATWG began with a handful of people who wanted a living-document model and less procedural overhead than the W3C.

[00:05:22]
Eventually, though, governance always returns—just with different flavors and speeds.

[00:05:31]
Lesson? You can’t escape governance. If your idea scales, governance will find you.


Where Does MCP Stand Today?

[00:05:55]
Let’s come back to MCP.

[00:06:09]
Anthropic—the creators of Claude AI—released MCP in late 2024 to address a real challenge: how to standardize context exchange for LLMs.

[00:06:25]
They’ve done a commendable job being transparent. The spec is published, and adoption is encouraged. That’s what we’d call a de facto standard—it gains traction because it simply works.

[00:06:48]
But there’s risk here. If MCP becomes the norm without governance or formal review, we lock in early design assumptions, possibly shaped by a single vendor’s priorities, leaving little room for dissenting use cases or evolving needs.

[00:07:07]
For example, patterns that work well for consumer apps may break down in healthcare or financial services, where regulatory audits and fine-grained access controls are non-negotiable. Or an entire ecosystem of tooling may get built around early versions of the spec before a major flaw is discovered, with no defined authority to fix it in a way that maintains trust.

[00:07:19]
In short:

What starts fast becomes fragile. What feels open becomes proprietary.


We’ve Seen This Before

[00:07:33]
Consider the poor API scoping, the security vulnerabilities, and the consent models that collapsed under legal scrutiny elsewhere in tech.

[00:07:43]
These were not technical failures—they were governance failures. And they’re costly to fix.

[00:08:19]
This is what I mean by governance debt. It’s invisible—until suddenly, it’s not.


A Call to Action: Help Build the Foundation

[00:08:29]
To be clear: MCP is not broken. It’s solving a real problem, and it’s already more open than many other AI protocols.

[00:08:35]
But openness ≠ structure. A GitHub repo ≠ a standards body.

[00:08:37]
So here’s my question:

Who wants to help give MCP a real foundation?

[00:08:45]
That might mean finding MCP a home within an existing SDO, or spinning up something new and bespoke.

[00:08:58]
I don’t care about the venue—IETF, W3C, something new. I care that we do something.

[00:09:06]
Because if this protocol is going to power the AI systems that touch every corner of our lives, we can’t afford to be careless.


Want to Help? Let’s Talk

[00:09:15]
If this interests you, or you want to know more, check out the full blog post.

[00:09:21]
Coming up next: a deep dive into what open standards really mean. I hope you’ll stick around.


Thanks for Listening

[00:09:33]
That’s it for this episode of Digital Identity Digest. If it helped clarify things, or at least made them more interesting, please subscribe and leave a rating and review.

You can always find the full written post at sphericalcowconsulting.com.

Stay curious. Stay engaged. Let’s keep these conversations going.

