What Makes a Successful Standard?

“Someone challenged me a few weeks ago to think about what makes a successful standard. The question came out of a concern that quite a bit of development work was happening… just not where the working group could see it.”

Since I’m focused on the standards development process and not on implementation, I realized I needed to sit down and really think this through. Where does one find a balance between real-world testing, deployment, and standards development? What leads to the most “successful” standard?

Success in standards work is often measured by adoption. It’s an easy metric to name, but a difficult one to verify. In most open standards environments, there is no reliable telemetry. Standards organizations rarely, if ever, have a dashboard indicating who is running what in production. At best, we have anecdotes, conference talks, and the occasional public deployment. At worst, we have silence.

Even when usage signals do exist, they can be misleading. Large deployments often reflect market power as much as technical fitness. Early adoption can lock in assumptions that later prove brittle. And some of the most widely discussed specifications have shown impressive activity inside standards bodies while struggling to gain meaningful traction outside them.

So if adoption is a lagging and imperfect signal, the more useful question is not simply whether a standard is used; it is what conditions make meaningful adoption possible without distorting the work along the way.

Standards do not fail only when they are ignored. They also fail when the process quietly narrows who can participate and what concerns are taken seriously.

You can subscribe and listen to the podcast on Apple Podcasts, or wherever you listen to podcasts.

And be sure to leave me a rating and review!

The grounding power of implementation

There is real value in implementation-first cultures, and it is important to say that up front.

Code has a way of exposing fantasy early. It reveals the hidden assumptions in a clean, “pure” model. It surfaces performance cliffs, integration pain, and edge cases that looked manageable on paper. A specification that cannot be implemented should not advance. That is not, I believe, controversial; it is basic hygiene.

Implementation experience also protects against a different failure mode: the paper-perfect, reality-hostile spec. Standards work that remains purely conceptual can drift toward elegant abstractions that collapse the moment they meet legacy systems, regulatory constraints, or operational budgets. Running code forces contact with the messy parts of the world that specifications must ultimately survive. Ideal worlds are lovely, but I have yet to meet anyone who lives and works in one.

And for many developers, working code is the only credible signal that something is real. Specifications without implementations can feel academic, optional, or safely ignorable. Implementation-first cultures help standards avoid becoming interesting but irrelevant.

All of this matters. Implementation keeps standards honest. But honesty is not the same thing as health.

When implementation becomes the price of admission

The problem is not implementation-first in principle. I can easily see the value there. The problem comes when implementation becomes a gatekeeping mechanism.

In some environments, concerns are only taken seriously if they arrive with running code. Use cases are treated as hypothetical until someone has invested the engineering time to prove them. The implicit message shifts from “help us make this work for you” to something closer to “prove you deserve to be here.” That shift changes who participates.

Not necessarily who is right, but who has the time, budget, and organizational permission to experiment publicly. It privileges teams that can move quickly and absorb the cost of being wrong. It disadvantages regulated actors, risk-averse sectors, and individuals who cannot commit their employer to exploratory implementation work.

It also creates a quieter distortion. Early implementations tend to shape the mental model of the specification. The first working code becomes the reference behavior, the “obvious” path, the example others are measured against. Even when that implementation reflects one organization’s constraints, one deployment environment, or one particular threat model, it exerts gravitational pull on the work. That gravitational pull can be overcome with time and effort, but it’s definitely a factor that shapes the spec.

Perhaps most importantly, implementation-gated cultures penalize a particular kind of expertise: the ability to see failure modes early. The people most capable of identifying where a specification will break are often the least willing to write code for something they already know will not meet their needs. I know my favorite early question is, “What can possibly go wrong?” It helps me get around the adage of “just because you can, doesn’t mean you should.” Requiring implementation as proof of legitimacy can turn foresight into a barrier.

Implementation-first cultures keep standards grounded. But when implementation becomes the price of admission, they narrow the diversity of thought in the room. And standards are at their weakest when the room gets smaller.

The mismatch between standards and developer expectations

There is another tension here, one that has come up in conversation as I try to figure out how to get implementation-driven people into the room where the standard is being developed. I need them to hear the other perspectives, and that doesn’t happen if they’re thinking only about their own products.

Many developers are not looking to co-design the future of the web or identity infrastructure. They want something off the shelf. Something free. Something available across browsers and platforms. Something that works now. They are not especially interested in paying the coordination cost of “what should this ecosystem look like?” They would prefer that question already be settled.

This does not mean collaboration is impossible. Some developers are deeply engaged in shaping the systems they depend on. But that partnership is often fragile and situational. It does not scale easily to the broader developer population.

The result is a tension for my working groups, and for standards bodies in general, that I haven’t figured out how to resolve. High levels of activity in working groups and community forums do not necessarily translate into real-world deployment. And conversely, meaningful progress in the market can sometimes occur through quieter, more bilateral collaboration that never shows up as broad public participation.

If success is measured only by participation in the standards process, the picture can be misleading. If it is measured only by deployment scale, the picture can be incomplete.

The danger of optimizing for the wrong hierarchy

One useful diagnostic lens comes from the familiar prioritization principle of the W3C: users first, developers second, platforms third, and technical purity last. In the identity space, the stakeholders are even more nuanced. Users, relying parties, identity providers, and browser engines each experience the system differently and optimize for different risks.

Standards efforts sometimes invert this hierarchy without realizing it. Technical elegance and implementer alignment become the primary signals of progress. That inversion does not usually happen out of neglect or bad intent. It often happens because representation from the top of the hierarchy is sparse, assuming it is present at all. The people who regularly participate in standards work are rarely representative of typical end users. (Being able to talk about their parents’ use of technology is not the same as being a typical end user. Even in that case, the parent still has direct access to a person who can answer their questions.) Additionally, many developers, understandably focused on shipping product, do not have the time or incentive to engage deeply in standards development.

Given that reality, agreement among standards participants can become the most visible and measurable proxy for success. From inside the room, strong implementer alignment can look like healthy progress. It certainly feels good when it happens.

The implementation reality check

One could reasonably argue that this is precisely where implementation-first cultures help. Requiring working code is often seen as a corrective mechanism, a way to keep the work grounded in real developer needs and to avoid drifting into elegant but irrelevant abstractions.

There is truth in that view. Implementation experience does provide an important reality check. But it is not a complete safeguard.

Implementation-first environments tend to privilege the perspectives of those already positioned to build. They surface present-day feasibility very effectively, but they are less reliable at exposing future friction, ecosystem misalignment, or the concerns of participants who cannot justify early deployment. The risk is not that implementation signals are wrong, but that they are partial.

It is entirely possible to produce a specification that multiple platforms can implement cleanly and consistently, yet which solves no urgent problem for the people expected to use it or fails to account for constraints that only emerge outside the early implementer set. When that happens, the process may look healthy from the inside while the market quietly moves on.

Optimizing for consensus inside the room is not the same as optimizing for usefulness outside it.

Reality versus aspiration

There is another tension here that I see debated in working group discussions: Should a specification reflect only what has already been proven through implementation? Or is it allowed to be at least somewhat aspirational, capturing where the group believes the ecosystem needs to go?

On the surface, the implementation-first answer is appealing. If the spec only documents what has already been built and tested, adoption risk appears lower. Interoperability is easier to demonstrate. The path to deployment feels clearer. All good things.

But a purely descriptive approach has its own failure modes.

If standards only ever ratify existing practice, they risk becoming little more than documentation of the status quo. That can be useful—sometimes extremely so—but it limits standards work’s ability to smooth ecosystem transitions, address known structural gaps, or create the coordination points that emerging architectures may require. In fast-moving spaces like digital identity, waiting for full market convergence before standardizing can mean the window for meaningful interoperability has already narrowed.

At the same time, aspirational specifications carry real risk. When a group gets too far ahead of deployable reality, the work can drift into what looks coherent on paper but proves costly or impractical to implement. The industry has seen more than one technically elegant design struggle because it asked the ecosystem to move faster than the incentives—or the budgets—would support.

The productive space, uncomfortable as it is, lies somewhere in between.

Healthy standards work tends to be implementation-informed but direction-setting. It grounds itself in what has been proven possible, while still leaving room to guide the ecosystem toward better interoperability, stronger security properties, or more sustainable architectural patterns. That balance is delicate. Lean too far toward pure documentation and the standard arrives after the market has already fragmented. Lean too far toward aspiration and the work risks becoming admired but unused.

This is one reason competing implementations matter so much. They do more than validate syntax and wire formats. They help the group understand which parts of the design reflect durable ecosystem needs and which parts are still aspirational bets. When the gap between those two grows too large, adoption friction usually follows.

The goal is not to eliminate aspiration from standards work; that’s a terrible idea. The goal is to ensure that aspiration remains tethered to credible paths forward and that the group stays honest about which parts of the specification are proven reality and which parts are still leading the market.

Why competing implementations still matter

None of this negates the long-standing wisdom that reputable standards bodies look for multiple independent implementations before declaring success. That principle is worth its weight in platinum. Competing implementations surface ambiguities, expose hidden assumptions, and help ensure the specification is not simply a thin wrapper around one vendor’s architecture.

Size of deployment is informative, but it is not sufficient. A single dominant implementation can demonstrate viability while still masking interoperability risks or ecosystem constraints that only emerge under competition.

Standards work is, at its core, about coordination under uncertainty. It succeeds when the resulting specification can survive contact with multiple business models, multiple regulatory environments, and multiple technical stacks. That resilience rarely emerges from a monoculture.

A spec is not a product

Part of the confusion in how we talk about success comes from borrowing product thinking too literally. Products optimize for speed, differentiation, and user acquisition. They can succeed by being good enough for a well-defined market and can ignore edge cases for years if the core experience delivers value.

I believe that standards operate under different constraints. They are long-lived coordination mechanisms that must care about the long tail earlier than most products would prefer. They succeed not only by enabling present value but by constraining future harm and reducing ecosystem friction over time.

A specification should absolutely be informed by real products and real deployments. But when a single product experience begins to define the boundaries of the standard itself, the process risks mistaking local optimization for global fitness.

Products answer the question, “Does this work for us?” Standards must answer, “What breaks when others try to use this?”
Those are related questions, but they are not the same question.

What success actually looks like

If adoption alone is insufficient, and participation metrics are easy to misread, what does success look like for a standard? That’s the million-dollar question. So here’s my opinion.

It looks like a process where credible disagreement can be expressed without prohibitive cost. Where use cases are treated as early warning signals rather than feature requests to be negotiated away, and implementation experience informs the work without becoming the sole currency of legitimacy. Where competing implementations reveal the real interoperability surface, and the resulting specification survives contact with environments that were not in the room when it was written.

None of this is clean. None of it produces a single satisfying metric.

But, really, standards have never been about clean metrics. They are about building enough shared reality that independent actors can move forward without constant renegotiation.

Standards do not fail because people disagree. They fail when disagreement becomes too expensive to express.

I have a few other articles about standards development, if you’d like to know more.

📩 If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts.

Transcript

What Makes a Successful Standard?

Welcome to Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting.

In this episode, I explore a deceptively simple question:

What makes a successful standard?

At first glance, the answer seems obvious. However, once you look more closely, the picture becomes much more nuanced.


The Question That Started It All

[00:00:30]

A few weeks ago, someone asked me what sounded like a straightforward question:

What makes a successful standard?

Given my background in standards development, you might assume this would be easy to answer. Yet the more I considered it, the more complicated it became.

The question arose from a concern:

  • A working group was actively developing something.
  • But much of the implementation work was happening outside visible group channels.
  • And that raised a deeper issue about balance.

So naturally, this led to an even bigger question:

Where does the balance sit between real-world implementation and formal standards development?


Is Adoption the Only Measure of Success?

[00:01:15]

The easy answer is this:

A standard is successful if it is adopted.

But adoption is only one piece of the puzzle.

In reality, measuring adoption inside most standards organizations is surprisingly difficult:

  • There is no reliable telemetry.
  • There are no “standards police.”
  • No dashboard shows who is running what in production.
  • At best, we have anecdotes, conference talks, and occasional public deployments.

Moreover, usage signals can be misleading.

Large deployments often reflect:

  • Market power
  • Strategic positioning
  • Existing ecosystem dominance

—not necessarily technical fitness.

So if adoption is a lagging and imperfect indicator, then perhaps the more useful question becomes:

What conditions make meaningful adoption possible — without distorting the process?

Because standards don’t just fail when they’re ignored.

They also fail when participation quietly narrows.


The Value of Implementation-First Cultures

[00:03:13]

Before going further, let’s acknowledge something important:

Implementation-first cultures have real value.

Working code exposes fantasy quickly. It reveals:

  • Hidden assumptions
  • Performance cliffs
  • Integration pain
  • Edge cases that look fine on paper

A specification that cannot be implemented should not advance. That’s basic hygiene.

Additionally, implementation experience protects against another failure mode:

Paper-perfect but reality-hostile design.

Running code forces contact with:

  • Legacy systems
  • Regulatory constraints
  • Operational budgets

And that contact matters.

In short, implementation keeps standards honest.

But honesty is not the same as health.


When Implementation Becomes Gatekeeping

[00:04:58]

The problem isn’t implementation-first thinking.

The problem emerges when implementation becomes the price of admission.

In some environments:

  • Concerns are taken seriously only if accompanied by running code.
  • Use cases are dismissed as hypothetical until proven in production.
  • The message shifts from “help us make this work for you” to “prove you deserve to be here.”

And that shift changes participation.

Not who is right.

But who has:

  • Time
  • Budget
  • Organizational permission

It privileges fast-moving teams.

It disadvantages:

  • Regulated sectors
  • Risk-averse organizations
  • Individuals unable to commit to experimental deployments

Over time, that distorts the room.

And standards are weakest when the room gets smaller.


The Gravitational Pull of Early Implementations

[00:05:51]

There’s also a quieter distortion.

Early implementations shape the mental model of the specification itself.

The first working code often becomes:

  • The reference behavior
  • The default example
  • The benchmark others are measured against

Even when it reflects only one organization’s constraints.

That gravitational pull can be overcome.

But it requires time and sustained effort.

Meanwhile, implementation-gated cultures penalize a particular kind of expertise:

The ability to see failure modes early.

The people best positioned to say, “Here’s what could go wrong” are often the least willing to write code for something they already know won’t meet their needs.

Requiring implementation as proof of legitimacy turns foresight into a barrier.


The Participation vs Deployment Tension

[00:07:15]

Another tension emerges when trying to bring more developers into standards conversations.

Many developers simply want:

  • Something off the shelf
  • Something free
  • Something cross-platform
  • Something that works

They are not necessarily looking to co-design the future of identity infrastructure.

As a result:

  • High working group activity does not guarantee market deployment.
  • And meaningful market progress may happen quietly, outside formal participation.

If success is measured only by participation metrics, the picture is misleading.

If measured only by deployment scale, it’s incomplete.


Who Comes First?

[00:08:35]

At the World Wide Web Consortium (W3C), a familiar prioritization principle exists:

  • Users first
  • Developers second
  • Platforms third
  • Technical purity last

In digital identity, the stakeholder landscape is more complex:

  • End users
  • Relying parties
  • Identity providers
  • Data holders
  • Browser engines

Each optimizes for different risks.

Yet standards efforts sometimes invert priorities unintentionally.

Why?

Because representation from end users is sparse.

And agreement among implementers becomes the most visible proxy for progress.

From inside the room, that alignment feels healthy.

Sometimes it is.

But not always.


Documentation or Aspiration?

[00:09:59]

Should a specification reflect only what has already been proven?

Or can it be aspirational?

On the surface, the implementation-first answer is appealing:

  • Lower adoption risk
  • Easier interoperability demonstrations
  • Clear path to deployment

However, purely descriptive standards risk becoming little more than documentation of the status quo.

In fast-moving spaces like digital identity, waiting for full market convergence may mean:

The interoperability window has already narrowed.

On the other hand, aspirational specifications carry their own risk:

  • Elegant on paper
  • Costly in practice
  • Misaligned with ecosystem incentives

The productive space lies between those poles.

Healthy standards work is:

  • Implementation-informed
  • Direction-setting
  • Grounded in reality
  • Yet willing to guide the ecosystem

Too descriptive? Fragmentation.

Too aspirational? Admired but unused.


Why Multiple Independent Implementations Matter

[00:12:34]

Reputable standards bodies look for multiple independent implementations before declaring success.

And that principle matters.

Competing implementations:

  • Surface ambiguities
  • Expose hidden assumptions
  • Prevent vendor capture

A single dominant deployment may demonstrate viability.

But it can also mask interoperability risks.

Standards succeed when they survive contact with:

  • Multiple business models
  • Multiple regulatory environments
  • Multiple technical stacks

Resilience rarely comes from monoculture.


Products vs Standards

[00:13:28]

Part of the confusion comes from borrowing product thinking too literally.

Products optimize for:

  • Speed
  • Differentiation
  • User acquisition

They can ignore edge cases for years.

Standards operate differently.

They are long-lived coordination mechanisms.

They must:

  • Care about the long tail early
  • Constrain future harm
  • Reduce ecosystem friction over time

Products ask:

Does this work for us?

Standards ask:

What breaks when others try to use this?

Related questions.

Not the same question.


So What Does Success Actually Look Like?

[00:14:48]

If adoption alone is insufficient…

And participation metrics are easy to misread…

Then what does success actually look like?

Here’s my current working answer.

A successful standard is one where:

  • Credible disagreement can be expressed without prohibitive cost
  • Use cases are treated as early warning signals
  • Implementation informs the work without becoming the sole currency of legitimacy
  • Competing implementations reveal real interoperability surfaces
  • The specification survives environments that were not in the room when it was written

It does not produce a single clean metric.

But standards have never been about clean metrics.

They are about building enough shared reality that independent actors can move forward without constant renegotiation.

Standards do not fail because people disagree.

They fail when disagreement becomes too expensive to express.


[00:15:48]

That’s it for this week’s episode of Digital Identity Digest.

If this helped clarify things — or at least made them more interesting — share it with a colleague and continue the conversation.

Stay curious.
Stay engaged.
And let’s keep building better standards.

Heather Flanagan

Principal, Spherical Cow Consulting
Founder, The Writer's Comfort Zone
Translator of Geek to Human
