What the AI Vendor Landscape Reveals About Fragmented Identity Systems

“It started as a fairly contained, geeky exercise. AI has been this all-consuming thing for the last few years.”
 
Every vendor at every conference had some kind of AI story, real or imagined. Every pitch referenced automation, intelligence, or decision-making. The term was doing a lot of work, and not always in a way that clarified what the technology actually did.
 
So I decided to see what would happen if I took a step back and approached it more methodically, looking not just at the AI label but at how each vendor described their solution and where that problem sits in the broader identity and security stack. I combined information from RSAC and Identiverse 2026.
 
That whole exercise turned out to be more interesting to me than expected. I saw definite layers where vendors tended to cluster around specific problems. I did NOT see much in the way of orchestration between the layers.
But let’s start with how I got there from here.
 
 

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

 

Starting with the problem, not the label

The first pass was deliberately simple. RSAC even let me set an “AI” tag to narrow the list of vendors to review during the event (though that tag seems to be missing now). For each vendor, I looked for a few basic signals:
  • What inputs does their product seem to rely on?
  • What output does it produce?
  • Where does that output get used?
This was less about evaluating product quality and more about locating function. Once you strip away the terminology, most systems are easier to place than their positioning suggests.
 
Some vendors were clearly focused on establishing or managing identity—human or non-human. Others were ingesting large volumes of behavioral or environmental data and turning that into risk or context signals. A third group was concerned with policy: defining what should or shouldn’t be allowed under certain conditions. And then there were systems responsible for enforcement or execution—actually carrying out decisions, whether that meant granting access, triggering a workflow, or blocking an action.
 
None of this is new on its own. These categories have existed in one form or another for years.
What was different was how consistently vendors aligned to one of these roles, even when they described themselves in much broader terms.
 

Patterns emerge at the boundaries

As more vendors were mapped this way, I saw a pattern. The ecosystem wasn’t organizing itself around products or even technologies, which makes sense. It was organizing itself around functions within a larger system.
You could describe that system in a few different ways, but one framing held up across most cases:
  • Identity: who or what is acting
  • Signals: what is happening or being observed
  • Policy: what should be allowed
  • Enforcement: how that decision is applied
  • Execution: what actually occurs as a result
Individually, each category made sense. Most practitioners would recognize them, even if they use slightly different terminology in their own environments.
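The five roles above can be sketched as stages in a single decision pipeline. This is a minimal illustration of the framing, not any vendor's architecture; all function names and field names are invented for the example.

```python
# Each stage enriches a shared context and hands it to the next layer.
# Names and thresholds are illustrative only.
def identity(ctx):    return {**ctx, "subject": "user:alice"}          # who is acting
def signals(ctx):     return {**ctx, "risk": 0.2}                      # what is observed
def policy(ctx):      return {**ctx, "decision": "allow" if ctx["risk"] < 0.5 else "deny"}
def enforcement(ctx): return {**ctx, "token_issued": ctx["decision"] == "allow"}
def execution(ctx):   return ctx                                       # what actually occurs

ctx = {"resource": "payroll-app"}
for stage in (identity, signals, policy, enforcement, execution):
    ctx = stage(ctx)

print(ctx["decision"])  # allow
```

Even in this toy version, note that no single stage sees the whole picture; the outcome only exists once every layer has run.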
 
What was less clear, at least from what I was reading, was how these pieces were expected to work together.
Vendors tended to describe their ‘solutions’ in isolation, occasionally referencing integrations or partner ecosystems, but rarely articulating how decisions flowed across the entire system.
 
There was no shared model for how identity informed signals, how signals influenced policy, or how policy translated into consistent enforcement and execution. I’ll get to the question of using standards in a separate post (tl;dr: it’s not pretty).
 
And yet, in practice, according to conversations at The Identity Salon, that is exactly what organizations are relying on.

 

The implicit system behind the architecture

At this point, the exercise stopped being about vendor positioning and became more about the implications of the system. If each of these components is operating independently—often from different vendors, sometimes from different generations of infrastructure—how are decisions actually being made in deployed environments?
 
An access decision, for example, might depend on an identity provider, a device posture check, a behavioral risk score, and policies defined in a separate system. The enforcement point may sit somewhere else entirely. Each of these components contributes something necessary, but none, on its own, represents the decision.
 
The decision is assembled, and that assembly process is rarely described as a first-class concern. It’s treated as an implementation detail, even though it is where inconsistencies, gaps, and unintended behaviors tend to surface.
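To make the "assembled decision" concrete, here is a hedged sketch of the glue logic that typically lives between components. The inputs stand in for an identity provider, a device posture check, a behavioral risk engine, and a policy store; none of these names reflect a real product API.

```python
from dataclasses import dataclass

# Hypothetical outputs from four independently operated components.
@dataclass
class DecisionInputs:
    identity_verified: bool   # from the identity provider
    device_compliant: bool    # from the device posture check
    risk_score: float         # from a behavioral analytics engine, 0.0-1.0
    policy_max_risk: float    # threshold defined in a separate policy system

def assemble_decision(inputs: DecisionInputs) -> str:
    """No single component owns this logic; it exists only in the glue code."""
    if not inputs.identity_verified:
        return "deny"
    if not inputs.device_compliant:
        return "step-up"      # require stronger authentication first
    if inputs.risk_score > inputs.policy_max_risk:
        return "deny"
    return "allow"

print(assemble_decision(DecisionInputs(True, True, 0.3, 0.7)))  # allow
```

The ordering of those checks, the meaning of "step-up," and what happens on a tie are all implementation details of the glue, which is exactly where inconsistencies tend to hide.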
 
Looking back at the vendor landscape, then, the earlier pattern becomes more significant. What initially appeared to be a loose clustering of capabilities starts to look more like a fragmented implementation of a single, distributed function.

 

A shift in how to think about identity systems

If identity, signals, policy, and enforcement are all contributing to a common outcome, then the system they form is not just an identity system, or a policy system, or a detection system. It is a decision system—one that happens to be distributed across multiple components, teams, and often vendors.
 
That may sound like an unimportant distinction, but I think it changes where you look for problems.
 
Instead of focusing only on whether each component is functioning correctly—which, don’t get me wrong, is both difficult and important—you start to ask whether the system produces decisions that are consistent, explainable, and aligned with intent. You look more closely at how data moves between components, how assumptions are translated (or lost) across boundaries, and how conflicting inputs are resolved.
 
Those questions don’t have simple answers, and they’re not ones most vendors are incentivized to address directly.

 

Why start here

My initial goal was to make sense of a noisy vendor landscape. What I got instead was a different way of looking at the systems many organizations already have in place. The components are familiar. The interactions between them are not always well understood.
 
That gap is where the more interesting questions sit, which makes me all sorts of excited to dig in.
 
In the next several posts, I’ll build on this by looking more closely at how decisions are actually constructed across these systems, where the seams tend to break down, and what that means for both standards development and real-world deployments.
 
📩 If you’d rather receive an email than hope you catch the social media announcement when a new post is live, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

Introduction

Welcome to another edition of the Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting. In this episode, we explore a surprisingly revealing journey into the AI vendor landscape—and what it uncovers about modern identity systems.

At first glance, this topic may seem like yet another take on AI hype. However, as you’ll see, the real story runs deeper. It’s not just about artificial intelligence—it’s about how decisions are made across fragmented systems.


A Different Way to Look at the AI Market

Over the past few years, AI has become impossible to ignore. It shows up everywhere:

  • Conference presentations
  • Vendor pitch decks
  • Product rebrands
  • Industry panels

Tools that were once labeled as:

  • Analytics platforms → now “AI-driven analytics”
  • Workflow tools → now “intelligent orchestration engines”
  • Detection systems → now “autonomous decision systems”

Sometimes, these changes reflect real innovation. Other times, they are simply marketing.

So instead of asking “Is this really AI?”, more useful questions emerged:

  • What function does this product serve?
  • What problem is it trying to solve?

This shift in perspective turns out to be far more insightful.


Breaking Down Vendor Functionality

To better understand the landscape, each product was evaluated using three simple questions:

  • What inputs does it rely on?
  • What outputs does it produce?
  • Where are those outputs used?

Once you strip away the buzzwords, patterns begin to emerge.

Most systems rely on inputs such as:

  • Identity data
  • Behavioral signals
  • Device information
  • Policy rules
  • Human approvals

And they typically produce outputs like:

  • Risk scores
  • Allow or deny decisions
  • Alerts
  • Tokens
  • Workflow triggers

From there, you can determine where each tool fits within a broader ecosystem.


The Hidden Structure of Identity Systems

As vendors were mapped based on behavior, a clear layered structure appeared. Most tools fall into one of the following roles:

Identity Layer

Focuses on defining who or what is involved:

  • Users
  • Devices
  • Workloads
  • Service accounts

Signal Layer

Answers the question: What is happening right now?

  • Login anomalies
  • Device changes
  • Behavioral deviations

Policy Layer

Determines: What should happen next?

  • Access decisions
  • Authentication requirements
  • Risk-based controls

Enforcement Layer

Executes decisions:

  • Blocking sessions
  • Granting tokens
  • Triggering step-up authentication

Execution Layer

Handles outcomes:

  • Completing transactions
  • Triggering workflows
  • Moving data

At first glance, this looks like standard architecture. However, the reality is far more complex.


Why Fragmentation Matters

Although these layers appear neatly organized, they are deeply interconnected:

  • Identity influences signals
  • Signals inform policy
  • Policy drives enforcement
  • Enforcement shapes execution
  • Execution generates new signals

In other words, these tools don’t operate independently—they form a distributed decision system.

This distinction is critical.

Because when decisions are distributed:

  • No single tool owns the outcome
  • Quality depends on integration, not just performance
  • Failures can occur at the seams between systems

And that’s where things start to break down.


The Illusion of Order

A typical enterprise access flow might look clean and logical:

  • A user logs in
  • Identity is validated
  • Device posture is checked
  • Risk is assessed
  • Policy is applied
  • Access is granted or denied

Historically, these systems were deterministic, meaning:

  • Same inputs → same outputs
  • Decisions are predictable
  • Auditing and governance are possible

Even risk-based systems followed controlled logic.
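The deterministic property described above can be shown in a few lines. This is a sketch under invented rules and thresholds: a policy check written as a pure function of its inputs, so replaying a logged request always reproduces the decision, which is what makes auditing possible.

```python
# A deterministic, risk-aware policy check: a pure function of its inputs.
# Roles, resources, and thresholds are invented for illustration.
def policy_decision(role: str, resource: str, risk: float) -> bool:
    # Maximum tolerated risk per (role, resource) pair.
    rules = {("admin", "db"): 0.8, ("analyst", "db"): 0.4}
    threshold = rules.get((role, resource), -1.0)  # unknown pairs always deny
    return risk <= threshold

# Same inputs, same output: replaying an audit log reproduces the decision.
assert policy_decision("analyst", "db", 0.3) == policy_decision("analyst", "db", 0.3)
print(policy_decision("admin", "db", 0.5))   # True
```

Even the "risk-based" part here is controlled logic: the risk score may vary, but the mapping from score to decision is fixed and inspectable.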

However, behind the scenes:

  • Components come from different vendors
  • Systems are deployed at different times
  • Teams manage separate pieces
  • Data is interpreted inconsistently

As a result, decisions are often assembled through:

  • Integrations
  • Middleware
  • Scripts
  • Workarounds
  • Institutional knowledge

It works—but it’s fragile.


Enter AI: A New Layer of Complexity

Now, AI enters the picture.

Modern environments may include:

  • AI models summarizing alerts
  • Systems scoring behavioral anomalies
  • Tools recommending policy changes
  • Automation engines executing responses
  • Models classifying users or workloads

These capabilities can deliver real value. However, they also introduce a fundamental shift.

Instead of deterministic outputs, we now see probabilistic results, such as:

  • “Model confidence suggests elevated risk”
  • “Behavior indicates probable misuse”
  • “Action resembles prior abuse”

This creates a mismatch.

Because most organizations still rely on:

  • Deterministic controls
  • Clear audit trails
  • Predictable outcomes

The result? Increased uncertainty.
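The mismatch can be made concrete with a small sketch: a probabilistic score forced through a deterministic control. The model and threshold below are invented stand-ins, not a real product's behavior.

```python
# Sketch of the mismatch: a probabilistic model score feeding a
# deterministic enforcement point. All values are illustrative.
def model_risk(features: dict) -> float:
    # Stand-in for an ML model; real models return scores, not decisions.
    return 0.62 if features.get("new_device") else 0.12

def enforce(confidence: float, threshold: float = 0.6) -> str:
    # The control emits only "allow"/"deny"; the nuance behind the score
    # (0.62 vs 0.58) never reaches the audit trail.
    return "deny" if confidence >= threshold else "allow"

print(enforce(model_risk({"new_device": True})))   # deny
print(enforce(model_risk({"new_device": False})))  # allow
```

A score of 0.62 and a score of 0.99 produce the identical log entry here, which is one way probabilistic inputs quietly erode the predictability organizations assume they still have.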


The Risk of “Magical Thinking”

It’s important to be clear—this is not an anti-AI argument.

AI can:

  • Improve detection
  • Handle scale
  • Reduce manual workload

However, it is not a magic solution.

If anything, AI can:

  • Amplify fragmentation
  • Obscure decision logic
  • Introduce inconsistency

In poorly structured environments, AI doesn’t create clarity—it accelerates confusion.


Rethinking Identity as a Decision System

This research leads to a critical realization:

Organizations are not just running identity systems or security tools.

They are operating a decision system that determines:

  • Who can act
  • What they can do
  • When they can do it
  • Under what conditions
  • With what level of trust

This was always true.

What’s changed is that now:

  • More decisions are inferred, not explicitly defined
  • More outputs are opaque
  • More logic is difficult to explain

That shift demands new thinking.


Better Questions to Ask

Instead of asking whether a product “has AI,” focus on deeper questions:

  • Does the system produce consistent outcomes?
  • Can decisions be explained after the fact?
  • Are results reproducible?
  • Do teams understand differences in outcomes?
  • Can policy intent survive across multiple tools?
  • Can automation drift be detected?
  • Can humans safely override decisions?
  • Are errors correctable and auditable?

These are not easy questions—but they are essential.


The Reality Behind Vendor Claims

Most vendor messaging avoids these complexities.

Instead, it emphasizes:

  • “Unified platforms”
  • “End-to-end solutions”
  • “AI-powered intelligence”

In reality:

  • Some platforms reduce complexity
  • Others simply move it elsewhere
  • Most enterprises remain hybrid environments

Because:

  • Legacy systems persist
  • Acquisitions happen
  • Regulations vary

Fragmentation doesn’t disappear—it evolves.


Why This Matters Now

AI doesn’t create the fragmentation problem.

It exposes it.

And in many cases, it makes it harder to manage.

This creates an important opportunity:

  • To rethink how decisions are made
  • To design better system interactions
  • To improve governance across layers

Because ultimately:

If you cannot explain how decisions are made today, adding AI tomorrow won’t fix that.


Final Thoughts

As you evaluate AI in identity or security, start with one simple question:

What decision is this system actually helping you make?

If that answer isn’t clear, pause.

Because the real risk isn’t adopting AI—it’s adopting it without understanding the system it operates within.


Looking Ahead

This topic goes far beyond a single discussion. Future explorations will dive into:

  • How decisions are assembled across fragmented systems
  • Where integration points fail
  • What standards can (and cannot) solve
  • What happens when systems disagree

Because in the end:

The system that wins a conflict defines your true architecture—not the diagram.


Conclusion

If this helped clarify the landscape—or at least made it more interesting—consider sharing it with a colleague.

Stay curious, stay engaged, and keep asking better questions about how decisions are really made.

Heather Flanagan

Principal, Spherical Cow Consulting | Founder, The Writer's Comfort Zone | Translator of Geek to Human
