Identity Systems Don’t Make Decisions

If you spend a lot of time staring at vendor product descriptions and standards-track documents, a pattern emerges in how systems are described and evaluated.

Identity stores, policy engines, risk services, device checks—each is treated as a distinct capability, with its own lifecycle, its own standards work, and often its own owner inside the organization. Architectural diagrams reinforce this view, presenting identity as a set of components connected by well-defined interfaces. That’s a lovely world. I’m not sure whose world, but it is lovely.

What tends to get less attention is what it really means to make a decision.

None of these systems, on their own, represents the full decision. Each component evaluates inputs deterministically within its own scope, but the outcome the business cares about is assembled across multiple systems. Within any given system, behavior is typically predictable. The difficulty shows up when decisions span multiple systems, each applying its own assumptions about how inputs should be interpreted.

That gap between the tools and the decision is easy to overlook because, most of the time, the system appears to work. Users log in. Transactions go through. Access is granted or denied. The outputs exist, and they are often treated as sufficient proof that the architecture is sound. But when something goes wrong—when access is granted inappropriately, or blocked without a clear explanation—the absence of a coherent decision model becomes difficult to ignore.

This is not a claim that identity systems are non-deterministic. They are not. The issue is that decisions are constructed across multiple deterministic systems, each operating with its own model of the world. Without a shared way to relate those models, the combined behavior becomes harder to explain, test, and evolve.

What I’m seeing looks less like a set of integrated systems and more like a distributed decision process.


You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

The decision is what the business cares about

At a certain level of abstraction, identity systems exist to answer a small number of questions. Should this user be allowed to access this resource? Should this transaction proceed? Is additional verification required? The answers to these questions are not properties of any single component.

A directory can assert attributes about a user, but it does not decide whether access should be granted. An authentication service can confirm that a credential is valid, but it does not determine whether the broader context is acceptable. A risk engine can assign a score, but it does not define what that score means in operational terms. Even a policy engine, which comes closest, evaluates rules based on inputs that originate elsewhere and may be interpreted differently depending on where they are consumed.

So the decision is assembled across these layers, often dynamically, and often without a single place where the logic is expressed in full.

In smaller environments, this can feel manageable. When the number of systems is limited, ownership boundaries are clearer, and assumptions are more likely to align. In larger organizations, that alignment becomes harder to maintain. Systems are introduced to solve specific problems, policies evolve in response to incidents or audits, and teams make localized decisions that make sense in isolation. Over time, the decision itself becomes fragmented as different parts of it are owned and interpreted differently.

How the assembly actually works

Let’s anchor this in a reality most teams recognize: a typical access request in an enterprise environment.

A user presents a credential, which is validated by an identity provider. The resulting assertion includes information about who the user is and, sometimes, how they authenticated. A device check may run in parallel, evaluating whether the endpoint meets certain criteria. A risk service may calculate a score based on behavioral patterns, network context, or known threat indicators. A policy engine evaluates rules that reference some combination of these inputs. Finally, the application enforces the outcome, occasionally adding its own logic along the way.
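To make the assembly concrete, here is a minimal sketch in Python of how such a decision gets stitched together. Every name here is hypothetical—stand-ins for separately owned services—and the logic is deliberately simplified; the point is that the business outcome lives in no single function.

```python
from dataclasses import dataclass

# Hypothetical sketch of a distributed access decision. Each component
# stands in for a separately owned service with its own model of the world.

@dataclass
class AccessContext:
    user_id: str
    credential_valid: bool   # asserted by the identity provider
    device_compliant: bool   # asserted by the device platform
    risk_score: float        # asserted by the risk service, 0.0 to 1.0

def policy_decision(ctx: AccessContext) -> str:
    """The policy engine's view: evaluate the rules it has been given."""
    if not ctx.credential_valid:
        return "deny"
    if ctx.risk_score > 0.7:          # threshold chosen by the policy team
        return "step_up"
    return "allow" if ctx.device_compliant else "deny"

def application_enforcement(decision: str, ctx: AccessContext) -> str:
    """The application's view: enforce the outcome, plus its own logic."""
    # Local exception logic, invisible to the policy engine.
    if decision == "step_up" and ctx.user_id in {"svc-batch-01"}:
        return "allow"                # a service account can't do MFA
    return decision
```

Notice that neither function is "the decision." The outcome the business cares about only exists once both have run, in order, with compatible assumptions.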

Each of these steps is reasonable when viewed independently. The identity provider is doing its job. The device check is operating within its own constraints. The risk service is applying its model. The policy engine is evaluating the rules it has been given. The application is enforcing what it understands to be the correct outcome.

Each system is internally consistent. The variability comes from how their outputs are combined.

The decision emerges from the interaction of these systems, and that interaction is rarely neutral. Each system encodes assumptions about what matters and how it should be interpreted within its own context. Those assumptions do not always carry cleanly across system boundaries.

A risk score may be calibrated with one set of thresholds in mind, while the policy engine that consumes it applies a different set. A device posture signal may be considered authoritative in one context and advisory in another. Authentication strength may be treated as sufficient evidence in some applications and insufficient in others.
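A small, invented example makes the calibration gap visible. Both functions below are deterministic and correct by their own lights; the signal names and numbers are made up purely to show the mismatch.

```python
# Two deterministic systems, one shared signal, two interpretations.

def risk_service_score(signals: dict) -> float:
    """Producer: calibrated so that anything above 0.8 was meant
    to indicate 'deny outright'."""
    return min(1.0, 0.4 * signals["new_geo"] + 0.6 * signals["bad_ip"])

def policy_engine_interpretation(score: float) -> str:
    """Consumer: applies its own thresholds, drafted by a different
    team against a different mental model of the score."""
    if score > 0.9:      # the producer intended 0.8 to mean 'deny'
        return "deny"
    if score > 0.5:
        return "step_up"
    return "allow"

# A score of 0.85 -- 'deny' in the producer's calibration -- comes out
# as 'step_up' here. Both systems behaved exactly as designed.
print(policy_engine_interpretation(0.85))  # -> "step_up"
```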

This becomes more than an architectural inconvenience when those decisions are consumed by automated systems. Whether it’s a risk model influencing step-up authentication or an agent acting on behalf of a user, the expectation shifts toward reproducibility within a defined context. If similar inputs are interpreted differently across systems without clear boundaries, the overall behavior becomes difficult to reason about.

Where the friction shows up

The absence of a shared decision model does not usually present itself as a single, obvious failure. It appears instead as a collection of smaller, persistent issues that are difficult to trace back to a common cause. Great for finger-pointing, less great for root cause analysis.

One of the more common symptoms is inconsistency across contexts. A user may be challenged in one application but not in another, even though the underlying signals are similar. A transaction may be blocked in one environment and allowed in another, with no clear explanation beyond “that’s how the system is configured.” These differences are often attributed to policy variations, but the deeper issue is that the meaning of the inputs is not consistently defined across systems.

Another symptom is opacity. When a decision needs to be explained—whether for debugging, auditing, or user support—the answer is often reconstructed from logs scattered across multiple systems. Each system can explain its own behavior, but there is no single representation of the decision as a whole. The explanation becomes a narrative assembled after the fact, rather than a property of the system itself.

Change management introduces a different kind of friction. Adjusting how decisions are made—tightening access controls, incorporating new signals, responding to a new threat model—rarely involves a single change. It requires coordinated updates across systems, each with its own deployment cycle and its own interpretation of the relevant inputs.

There is also the question of failure handling. Services degrade. Signals arrive late or not at all. In these situations, systems fall back on default behaviors. Some fail open, prioritizing availability. Others fail closed, prioritizing security. These choices are often reasonable in isolation, but rarely aligned across the full decision path.
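In code, the divergence can be as small as two exception handlers. The sketch below is illustrative only—the endpoint and both call sites are hypothetical—showing the same risk service consumed with opposite defaults.

```python
import requests

# Illustrative only: two call sites degrade differently when the same
# risk service times out.

RISK_URL = "https://risk.internal.example/score"  # hypothetical endpoint

def score_for_login(user_id: str) -> float:
    try:
        return requests.get(RISK_URL, params={"user": user_id},
                            timeout=0.5).json()["score"]
    except requests.RequestException:
        return 0.0   # fail open: treat the user as low risk

def score_for_payment(user_id: str) -> float:
    try:
        return requests.get(RISK_URL, params={"user": user_id},
                            timeout=0.5).json()["score"]
    except requests.RequestException:
        return 1.0   # fail closed: treat the user as maximum risk
```

Each default is defensible on its own terms. The trouble is that a single outage now produces two different security postures along the same decision path.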

Historically, these inconsistencies have been absorbed by human judgment. A support team investigates, grants an exception, and the owner of that particular system adjusts the policy so that this doesn’t happen again in exactly the same way. As more of these decisions are automated or delegated to models, that buffer disappears. Variability that was once manageable becomes harder to contain, because downstream systems assume a level of coherence that the architecture does not explicitly provide.

Shifting the starting point

A different way to approach the problem is to begin with the decision rather than the components. Instead of asking which systems are involved, the initial question becomes: what decision is being made, and what information is required to make it?

That shift changes the framing of the problem, moving the focus from capabilities to outcomes, and from systems to semantics.

Once the decision is defined, the next step is to identify the inputs in terms that are meaningful to the decision itself. Identity attributes, authentication context, device posture, behavioral signals, and environmental factors all play a role, but the important detail is not where they come from; it’s how they are interpreted.

A risk score, for example, is only useful in the context of a decision if its meaning is clear. Does a high score indicate that access should be denied, or that additional verification is required? Is the score comparable across different contexts, or is it calibrated differently for different applications? These are questions about semantics, not implementation.
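One way to make those semantics explicit—before touching any system—is to write them down as a decision-level vocabulary. The bands and names below are invented for illustration; what matters is that the meaning is defined once, for this decision, rather than re-derived at every consumption point.

```python
from enum import Enum

# Hypothetical decision-level semantics for one decision. Bands and
# action names are invented for illustration.

class RiskMeaning(Enum):
    ACCEPTABLE = "proceed"
    ELEVATED = "require_additional_verification"
    UNACCEPTABLE = "deny"

def interpret_risk(score: float) -> RiskMeaning:
    """Semantics for the 'high-value transaction' decision. Other
    decisions may legitimately calibrate differently, but each
    calibration should be explicit and named."""
    if score >= 0.8:
        return RiskMeaning.UNACCEPTABLE
    if score >= 0.5:
        return RiskMeaning.ELEVATED
    return RiskMeaning.ACCEPTABLE
```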

Only after those semantics are established does it make sense to map the inputs back to the systems that provide them. At that point, the architecture can be evaluated in terms of how well it supports the decision, rather than how well it implements individual capabilities.

Treating decisions as something you can model

One practical implication of this shift is recognizing that decisions themselves can be modeled, captured, and reasoned about.

In many environments, the closest approximation to this is an audit trail. Logs capture what happened at each step, and with enough effort, those logs can be used to reconstruct the decision. The reconstruction, however, is not the same as having a coherent representation of the decision.

A more deliberate approach treats the decision as an artifact. Not just the final outcome, but the structure of the decision: the inputs that were considered, the logic that was applied, and the rationale for the outcome. This does not require centralizing all decision-making in a single system. It requires making the decision visible in a consistent way, even when it is produced by multiple systems.
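As a sketch of what such an artifact might contain—structure and field names are my own invention, not a standard—consider something like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A hypothetical shape for a decision artifact: not the outcome alone,
# but the inputs considered, the logic applied, and the rationale.

@dataclass
class DecisionRecord:
    decision_id: str
    decision_type: str       # e.g. "access.sensitive-system"
    outcome: str             # "allow" | "deny" | "step_up"
    inputs: dict             # signals as interpreted, with their sources
    policy_refs: list[str]   # which rules and versions were applied
    rationale: str           # why this outcome, in plain terms
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = DecisionRecord(
    decision_id="d-20240501-0042",
    decision_type="access.sensitive-system",
    outcome="step_up",
    inputs={"risk_score": {"value": 0.62, "source": "risk-svc",
                           "interpreted_as": "elevated"},
            "device_posture": {"value": "compliant", "source": "mdm"}},
    policy_refs=["access-policy@v14", "travel-exception@v3"],
    rationale="Elevated risk from unusual geography; device compliant, "
              "so step-up rather than deny.",
)
```

The format matters far less than the discipline: every system that contributes to the decision writes into one coherent record, rather than into its own log.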

That visibility changes how the system can be operated. It becomes possible to explain outcomes without stitching together multiple narratives. It becomes easier to compare decisions across contexts and identify where they diverge. It becomes clearer how changes to inputs or policies will affect outcomes.

Transaction tokens

There are early signs that parts of the industry are starting to move in this direction, even if they are not always described in these terms.

Take the work happening in the Internet Engineering Task Force around transaction tokens. The draft on OAuth Transaction Tokens is, at its core, an attempt to bind a specific decision context to a cryptographically verifiable artifact. Instead of relying solely on loosely coupled signals and policies evaluated at runtime, the system produces something that represents the transaction itself—what is being approved, under what conditions, and with what level of assurance.

That does not solve the broader problem outlined here. The decision is still distributed. The inputs still come from multiple systems, and their interpretation still matters. But it does introduce a useful constraint: the outcome of that process can be captured in a way that is portable, inspectable, and tied to a specific moment in time.
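For a rough sense of the shape, here is an illustrative payload. The claim names are approximations of what the draft describes and may not match the current revision exactly; treat this as a sketch of the idea, not the specification.

```python
# Illustrative only: a rough shape for a transaction-token payload.
# Claim names are approximations; consult the current IETF draft
# (draft-ietf-oauth-transaction-tokens) for the authoritative set.

txn_token_payload = {
    "iss": "https://txn-service.example",   # who minted the token
    "iat": 1714560000,
    "exp": 1714560300,                      # short-lived by design
    "aud": "internal-workloads.example",
    "txn": "97053963-771d-49cc-a4e3-20aad399c312",  # this transaction
    "sub": "user-1234",                     # on whose behalf
    "purp": "payment.initiate",             # what is being approved
    "rctx": {"ip": "203.0.113.7",           # context at decision time
             "auth_level": "mfa"},
}
```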

In other words, it starts to treat the decision as something more concrete than a side effect of system interaction.

AI makes the cracks visible

None of this is new. Identity systems have been assembling decisions across components for years. What is changing is how visible the consequences have become.

AI-driven systems depend on decisions that are reproducible and explainable within a defined context. That is not a philosophical preference; it is a practical requirement. When a model produces an output, or when an agent takes an action, there needs to be a clear line back to why that outcome was considered acceptable.

If similar inputs are handled differently across systems without clear boundaries—depending on which application is involved, which policy engine evaluated the request, or which signals were available at the time—then the system becomes difficult to reason about or validate. What looks like acceptable variation in a human-driven process becomes a source of instability when decisions are automated or delegated.

There is also the question of attribution. When an AI system takes an action on behalf of a user, the decision is no longer just about access. It becomes a question of who—or what—was responsible for the outcome, and under what conditions that responsibility was granted. That requires a tighter binding between identity, context, and decision than most current systems provide.
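To illustrate what that tighter binding might look like, here is a hypothetical extension of the decision-artifact idea from earlier—every field is invented, not drawn from any standard—recording the actor, the delegator, and the conditions of the grant.

```python
# Hypothetical attribution block for a decision record: when an agent
# acts on a user's behalf, bind actor, delegator, and grant conditions
# into the record itself. Field names are invented.

attribution = {
    "actor": "agent:expense-bot-7",          # what took the action
    "on_behalf_of": "user-1234",             # who delegated it
    "delegation": {
        "granted_at": "2024-05-01T09:12:00Z",
        "scope": ["expenses.submit"],        # what it may do
        "constraints": {"max_amount_usd": 500},
    },
}
```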

Even when AI is used more narrowly—for example, in risk scoring or anomaly detection—it introduces another layer of interpretation into an already fragmented decision path. The output of a model is treated as a signal, but its meaning depends on how it is integrated. If that integration is inconsistent, the variability of the model compounds the variability of the system.

AI is not the source of the problem. It makes the existing gaps harder to ignore.

Accepting that distribution is not going away

None of this suggests that identity architectures should be simplified into a single decision engine. In most enterprise environments, that is neither realistic nor desirable. Decisions will continue to be distributed, shaped by performance constraints, organizational boundaries, regulatory requirements, and legacy systems.

The issue is not the distribution. It is the lack of a shared model across distributed components.

When distribution is treated as an implementation detail, decisions become emergent and difficult to reason about across system boundaries. When it is treated as a design constraint, it becomes possible to define how different parts of the system contribute to a coherent outcome.

That definition does not eliminate complexity. It does make the complexity more tractable.

A practical way to start

For teams that want to explore this shift, the starting point does not need to be a large architectural overhaul. It can begin with a single, critical decision path.

Take one decision that matters to the business—access to a sensitive system, approval of a high-value transaction, or a step-up authentication trigger—and map how it is currently produced. Identify the inputs, the systems that provide them, and the points at which interpretation occurs. Pay particular attention to where assumptions differ, where defaults are applied, and where additional logic is introduced outside of central policy.
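The output of that mapping does not need to be fancy. A sketch of what a first pass might look like—every name and detail here is invented—could be as simple as a structured document:

```python
# A hypothetical first pass at mapping one decision path. The point is
# not the format but making the interpretation points explicit.

decision_map = {
    "decision": "approve high-value transaction",
    "inputs": [
        {"signal": "risk_score", "source": "risk-svc",
         "interpreted_by": ["policy-engine", "payments-app"],
         "note": "payments-app applies its own threshold (0.6 vs 0.7)"},
        {"signal": "auth_strength", "source": "idp",
         "interpreted_by": ["policy-engine"],
         "note": "treated as sufficient only for amounts under $10k"},
    ],
    "defaults": [
        {"where": "risk-svc timeout",
         "behavior": "fail open at login, fail closed at payment"},
    ],
    "out_of_band_logic": ["payments-app exception list",
                          "support-team temporary overrides"],
}
```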

That exercise often reveals more fragmentation than expected. It also provides a concrete basis for improvement, grounded in the reality of the current system rather than an idealized model.

What changes when you take this seriously

Shifting to a decision-centric view does not make identity systems simpler. If anything, it makes their complexity more explicit. The benefit is that the complexity becomes something you can reason about directly.

Instead of asking whether a particular system is working correctly, the focus moves to whether the decision is being made correctly. Instead of debating which component should be responsible for a behavior, the discussion centers on how that behavior affects the outcome.

Over time, this leads to a different kind of alignment. Teams that own different parts of the architecture begin to share a common understanding of what the system is trying to achieve. Changes can be evaluated in terms of their impact on decisions, rather than their impact on individual components.

It also exposes gaps that are otherwise easy to ignore. Inconsistent semantics, hidden decision points, and implicit failure handling become visible as part of the decision model, rather than as isolated quirks of individual systems.

The architecture does not become perfect. It does, however, become more honest.

📩 If you’d like to be notified of new posts rather than hoping to catch them on social media, I have an option for you! Subscribe to get a notification when new posts go live. No spam, just announcements of new posts. Subscribe here.


Transcript

Welcome back. In a previous discussion, we explored the AI vendor landscape and uncovered something unexpected.

Rather than simply analyzing products, the real insight pointed toward decision-making itself.

Now, let’s take that idea one step further.

While identity platforms, risk engines, and policy tools are often presented as separate systems, they are all contributing to a shared outcome. And importantly, no single system is actually making the full decision.


The Myth of the Single Decision Maker

In everyday conversations, teams often describe systems as if they act independently:

  • The identity provider allows the login
  • The fraud engine blocks the transaction
  • The policy engine requires step-up authentication
  • The device platform denies access

This shorthand is convenient. However, it hides an important truth.

Each system is:

  • Evaluating a narrow set of conditions
  • Applying predefined logic
  • Producing a specific output

But none of them, on their own, determines the full business outcome.

Instead, the real decision emerges from multiple systems working together.


A Federated Decision Process

When you look closely, enterprise identity and security environments operate more like a federated decision-making process.

This means:

  • Multiple systems contribute inputs
  • Different teams manage different components
  • Assumptions vary across tools

As a result, the actual decision often:

  • Lives nowhere in particular
  • Spans multiple boundaries
  • Is difficult to fully explain

And that should raise some concerns.


The Illusion of Clean Architecture

Vendor diagrams and standards documentation often present a clean, organized view:

  • Directory services in one place
  • Authentication clearly defined
  • Policy engines centrally located
  • Risk scoring neatly separated

It all looks:

  • Modular
  • Logical
  • Well-controlled

However, real-world environments are far messier.

In practice:

  • One system may come from an acquisition
  • Another was deployed after an audit
  • A rules engine was customized years ago
  • An application built its own authorization logic
  • Integrations rely on scripts—or worse, assumptions

Despite this, everything can still appear functional from the outside.


When Outcomes Don’t Match Expectations

Let’s consider a simple scenario.

A user logs in from a managed device while traveling.

Different systems evaluate different aspects:

  • Identity provider confirms valid credentials
  • Device platform reports a healthy endpoint
  • Risk engine flags unusual geography
  • Policy engine checks travel rules
  • Application applies exception logic
  • Support systems may include temporary overrides

Each component works as designed.

However, the final outcome may still be:

  • Inconsistent
  • Unexpected
  • Difficult to explain

Why?

Because each system interprets reality differently.


Deterministic Systems vs Distributed Decisions

Most enterprise systems are designed to be deterministic:

  • Same inputs should produce the same outputs
  • Behavior should be predictable
  • Auditing should be reliable

This is essential for:

  • Governance
  • Compliance
  • Operational trust

However, problems arise when:

  • Decisions span multiple systems
  • Systems interpret shared data differently
  • Meaning is not consistently aligned

This creates what is essentially a semantic problem disguised as an integration issue.


Why Troubleshooting Falls Short

When something goes wrong, organizations rely on logs to reconstruct events.

This approach helps—but it has limitations.

Typically, teams end up with:

  • Fragmented logs
  • Multiple dashboards
  • Partial explanations

Even if each system explains its behavior, you may still lack clarity on:

  • Which inputs mattered most
  • What logic was applied
  • Why one signal overrode another
  • How the final decision was reached

In other words, you get pieces of the story—not the full picture.


AI Raises the Stakes

These challenges existed before AI.

Previously, humans compensated by:

  • Investigating anomalies
  • Applying judgment
  • Granting exceptions
  • Fixing inconsistencies manually

Now, AI changes the equation.

Modern systems may:

  • Recommend actions
  • Score users
  • Automate responses
  • Operate autonomously

This requires greater clarity, not less.

Organizations must now define:

  • What AI outputs mean
  • How much authority they have
  • Which systems can override them
  • Who is accountable for mistakes

Without this, AI simply accelerates confusion.


A Better Starting Point

Instead of focusing on tools or platforms, start with a single decision.

For example:

  • Access to payroll systems
  • Approval of a financial transaction
  • Administrative access to production

Then ask:

  • What information is required?
  • Which systems provide that information?
  • Where is meaning interpreted or transformed?
  • Where do defaults and exceptions apply?
  • Who owns the outcome?
  • How is the decision rationale captured?

These questions reveal far more than any product demo.


From Systems to Decision Artifacts

Looking ahead, there is a shift toward treating decisions as discrete artifacts.

Emerging approaches include:

  • Cryptographically bound approvals
  • Transaction-specific authorization records
  • Context-rich attestations

These methods aim to:

  • Make decisions visible
  • Enable verification
  • Improve accountability

Importantly, they treat decisions as something concrete—not just a byproduct of system interactions.


Designing for Distributed Decisions

Distributed architectures are not going away.

Enterprises will remain:

  • Complex
  • Heterogeneous
  • Multi-system by design

Therefore, the focus should shift toward designing coherent decision processes.

This includes:

  • Aligning meaning across systems
  • Defining clear ownership
  • Establishing consistent logic
  • Improving cross-team collaboration

Because without intentional design, decisions become accidental.


A Shift in Thinking

Adopting a decision-centric approach changes how teams work together.

Instead of asking:

  • Which product is responsible?

Teams begin asking:

  • Is the outcome correct?
  • Is the signal meaningful?
  • Is the decision defensible?

Success is no longer measured by:

  • System uptime
  • Logging completeness

But by:

  • Consistency
  • Accuracy
  • Trustworthiness of decisions

A Simple Exercise

To better understand your own environment, try this:

  • Select one sensitive decision path
  • Ask three different teams to explain it

For example:

  • Security
  • Identity
  • Application owners

Then compare the answers.

You may find:

  • Conflicting explanations
  • Vague responses
  • Or uncertainty

Each outcome reveals something valuable.


Final Thoughts

Identity systems do not make decisions on their own.

Instead, decisions emerge from:

  • The entire IT environment
  • Multiple interacting systems
  • Shared—but not always aligned—logic

If no one has explicitly designed how decisions are made, then the system you have is likely unintentional.

And that carries risk.


Conclusion

The next step is simple.

Pick one important decision in your organization and trace how it is actually made—not how diagrams suggest it should work.

Because behind every decision, there is a story.

And understanding that story is the first step toward building systems that are not only functional, but trustworthy.

Heather Flanagan

Principal, Spherical Cow Consulting
Founder, The Writer's Comfort Zone
Translator of Geek to Human
