The AI System That Never Was

“I have an embarrassing backlog of reading about what’s been happening with AI over the last few years. Going through it chronologically turned out to be unexpectedly useful.”

One pattern stood out more than any single policy shift or technical breakthrough: the use and then decline of the phrase “AI system.”

A quick caveat before I go further. I didn’t set out to research the term “AI system.” This pattern emerged while I was reading broadly across policy, standards, and implementation discussions over several months. My source set is diverse, but not exhaustive, which means what follows is an observation from the field, not a claim of statistical completeness.

With that said, if you’re responsible for identity architecture, platform risk, or standards participation inside a real organisation, you’ve probably encountered this already.

You’re being asked to inventory “AI systems,” assign owners, and document risk — but what you actually operate are chains of models, tools, APIs, agents, and delegated workflows that cross teams, vendors, and sometimes jurisdictions.

This post is for the people trying to make that mismatch workable. It’s about why accountability, identity, and risk management are getting harder even as AI tooling improves.

You can Subscribe and Listen to the Podcast on Apple Podcasts, or wherever you listen to Podcasts.

And be sure to leave me a Rating and Review!

Defining AI Systems

The phrase “AI system” didn’t really originate in engineering culture. It came from governance. It emerged in the late 2010s as policymakers searched for a neutral abstraction that could hold people, processes, software, data, and organizational responsibility without naming specific technologies. Model was too narrow. Algorithm was too technical. Application was too product-specific. So they reached for system.

You can see that move clearly in early policy work such as the OECD AI Principles and early drafts of the EU AI Act.

In that world, the abstraction made sense. An AI system was something you could point to. It had a name, a vendor, possibly even a deployment diagram. You could imagine building a compliance checklist around it.

But engineers never really adopted the term. They talked about models, pipelines, agents, tools, workflows, and platforms. They shipped stacks, not “systems.” As often happens, the people focused on governance and regulation were not talking about the technology in the same way as the people focused on standardizing and operationalizing it.

That mismatch between the conceptual unit of governance and the operational reality of deployment is now landing on the desks of identity architects, security leads, and standards contributors who are expected to make it enforceable.

From bounded products to fluid systems

The OECD’s earlier work on defining “AI systems” still provides a useful baseline, even if it now feels historically distant:

  • AI systems were framed as having explicit objectives.
  • They were assumed to operate within stable boundaries.
  • Responsibility could be traced through a developer → deployer → user chain.

You can see that framing in the original definitional updates here.

That model worked when AI was mostly a feature embedded inside a product. That still happens in some cases today, but in many more, deployment has gone well beyond it.

A single user interaction may trigger all of the following (a rough sketch in code follows the list):

  • A language model to reason about intent
  • A tool-calling framework to select APIs
  • A workflow engine to delegate steps to other agents
  • External services to execute actions across vendors and jurisdictions
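
To make the mismatch concrete, here is a minimal sketch of that chain in code. Everything in it is hypothetical: the component names, the vendors, and the trace structure are stand-ins I made up for illustration, not any particular framework’s API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one user request fanning out across a chain of
# capabilities. None of these names refer to a real framework or vendor.

@dataclass
class Step:
    actor: str        # which component acted (model, tool, agent, service)
    action: str       # what it did
    operated_by: str  # who runs it (team, vendor, jurisdiction)

@dataclass
class Interaction:
    user_request: str
    steps: list[Step] = field(default_factory=list)

def handle(user_request: str) -> Interaction:
    trace = Interaction(user_request)

    # 1. A language model interprets intent (often a third-party hosted model).
    trace.steps.append(Step("intent-model", "classify intent: book_travel", "vendor-A"))

    # 2. A tool-calling layer picks which APIs to use.
    trace.steps.append(Step("tool-router", "select tools: flights_api, calendar_api", "internal platform team"))

    # 3. A workflow engine delegates sub-tasks to other agents.
    trace.steps.append(Step("workflow-engine", "delegate to booking-agent, payments-agent", "internal workflow team"))

    # 4. External services execute real-world actions.
    trace.steps.append(Step("flights_api", "hold seat, charge card", "vendor-B, another jurisdiction"))

    return trace

if __name__ == "__main__":
    for step in handle("Book me a flight to Berlin next Tuesday").steps:
        print(f"{step.actor:16} | {step.action:42} | {step.operated_by}")
```

Notice that the closest thing to an “AI system” in this sketch is the trace itself: a record of who did what, under whose operation. That choreography is what regulators are implicitly trying to govern.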

At no point is there a single object that cleanly qualifies as the AI system. 

I expect engineers and technology architects find this entirely unsurprising, something of a “well, yeah, of course.” But people focused on governance and regulation seem to be missing that complexity, and their expectations are misaligned as a result.

The global divergence nobody talks about

Governments are no longer trying to define “AI system.” They’re drawing boundaries around behaviour, capability, and acceptable use, often without ever naming what, exactly, they are regulating. My suspicion is that many are still assuming the definitions from that early governance work, even as their policies drift far beyond it.

Recent examples from CSIS’s PacTech Pulse newsletter make this visible in practice. A few that stood out:

  • Australia declined to introduce copyright exemptions for AI training. This doesn’t just affect public research institutions. It reshapes how consumer platforms train recommendation systems and how enterprises build internal copilots.
  • Bangladesh published an AI Readiness Assessment. Here, AI is framed as national infrastructure capacity — a signal to multinational vendors and service providers about how deployments will be evaluated in local markets.
  • China introduced ethical review requirements into patent processes. This moves AI governance into intellectual property law, changing the incentives for enterprise R&D and product commercialization.

Each of these policies redefines the scope of AI accountability, but they do it without a shared object to anchor that responsibility. None of them tries to answer the question “what is an AI system?”

And yet all of them answer it in practice.

Rather than defining terms, they define what is in scope, what is out of bounds, and who is responsible, through consequences that land directly on consumer platforms, enterprise software, and the identity systems that glue them together.

That’s why the language still matters even when it isn’t used. The term is fading, but the expectations attached to it are being rewritten into law, trade policy, education, and economic development, and rewritten differently in every jurisdiction.

The missing chapter: standards are catching up 

The standards community is only now beginning to articulate the problem clearly and to design around it. This is very much a work in progress, and discussions occasionally become heated.

For years, standards work inherited the same language as policy: AI systems, high-risk systems, system lifecycle management. Those terms were good enough while models were embedded in products and objectives were explicit. Agentic workflows break that illusion.

A concrete example is the IETF WebBotAuth working group’s current use-case draft, which explores how bots and agents authenticate and act across domains, explicitly modelling delegation, identity, and authority boundaries rather than assuming a single, bounded system.
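
The draft is about use cases rather than code, but the shape of the problem it models can be sketched. Below is a deliberately simplified, hypothetical check that a receiving service might run when an agent calls it from another domain. The header names and the shared-secret scheme are my own illustrative assumptions; the actual WebBotAuth work builds on asymmetric HTTP message signatures, which are more involved than this.

```python
import hashlib
import hmac

# Hypothetical sketch of the question a receiving service has to answer when
# an automated agent calls it: can this caller's identity be proven at all?
# Header names and the HMAC scheme are illustrative, not from the IETF drafts.

KNOWN_AGENT_KEYS = {
    # agent identifier -> key material shared with (or published by) its operator
    "research-crawler.example": b"not-a-real-key",
}

def identify_agent(headers: dict[str, str], body: bytes) -> str | None:
    """Return a verified agent identity, or None if the claim cannot be proven."""
    claimed = headers.get("X-Agent-Id")              # hypothetical header
    provided_sig = headers.get("X-Agent-Signature")  # hypothetical header
    if not claimed or not provided_sig:
        return None  # anonymous traffic: treat as an unattributed client
    key = KNOWN_AGENT_KEYS.get(claimed)
    if key is None:
        return None  # claims an identity we have no trust relationship with
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, provided_sig):
        return None  # the claim does not verify: identity unproven
    return claimed   # we now know which agent, run by which operator, acted
```

Even a toy check like this surfaces the questions the use-case draft is modelling: which agent identities do you have a trust relationship with, who operates them, and what do you do when the answer is “nobody we know”?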

This work tries to create language that bridges technology and policy so that we’re all building towards the same thing. If standards language aims for anything, it is precision about what it is attempting to accomplish.

Where governance language starts to wobble

So, this is where the story stops being about terminology and starts being about accountability.

When one country draws AI boundaries through copyright law, another through readiness metrics, and another through patent ethics, “AI system” stops behaving like a technical term. It becomes a policy placeholder, a word that points to responsibility without clearly naming the thing that carries it.

The term still appears in frameworks, white papers, and standards drafts. But it increasingly describes something nobody in production can inventory, because engineers were never building “AI systems” in the first place; they were building chains of capability.

Most governance models still assume that:

  • You can list your AI systems.
  • You can assign each one an owner.
  • You can scope risk by drawing architectural boundaries.

In practice, teams are deploying chains of capability, not standalone systems. Accountability blurs not because people are careless, but because the mental model they’re working from no longer matches what’s being built.

Digital identity: when the system no longer has a single “user”

Nowhere does this mismatch between governance abstraction and engineering reality land harder than in digital identity.

Most identity frameworks still rely on a simple triangle:

  • A human user
  • A service
  • A system acting on the user’s behalf

Agentic AI breaks that geometry almost immediately. We now see:

  • Agents initiating actions without direct human input
  • Delegation chains where authority is inferred rather than explicitly granted
  • Systems operating across multiple services while carrying partial, context-dependent identity

At that point, asking “who is the user?” stops being helpful. The better question becomes: who is accountable for this action? (Yay for the delegation challenge. It’s a thing.)

This isn’t just a security problem. It’s a language problem that turns into a governance problem.

When an AI agent books a flight, approves a workflow, or triggers a financial transaction, existing identity models struggle to express:

  • Scope of authority
  • Duration of delegation
  • Revocation paths when behaviour changes

Identity was designed to answer “who are you?” Agentic systems require us to answer “on whose behalf are you acting, under what conditions, and for how long?”
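
One hedged way to picture that shift is a delegation grant as a first-class record, carrying exactly the things the list above says identity models struggle to express: scope of authority, duration, and a revocation path. The structure below is my own sketch, not any standard’s schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Sketch of a delegation grant as a first-class identity record.
# The shape is illustrative only, not taken from any standard.

@dataclass
class DelegationGrant:
    principal: str          # on whose behalf the agent acts
    agent: str              # the agent receiving authority
    scope: frozenset[str]   # what it may do ("book_flight", not "spend money")
    not_after: datetime     # how long the delegated authority lasts
    revoked: bool = False   # flipped when behaviour changes or trust is withdrawn

    def permits(self, action: str, at: datetime) -> bool:
        return (not self.revoked) and at < self.not_after and action in self.scope

grant = DelegationGrant(
    principal="alice@example.com",
    agent="travel-agent-7",
    scope=frozenset({"search_flights", "book_flight"}),
    not_after=datetime.now(timezone.utc) + timedelta(hours=2),
)

now = datetime.now(timezone.utc)
assert grant.permits("book_flight", now)
assert not grant.permits("approve_invoice", now)  # outside the delegated scope

grant.revoked = True                              # the revocation path
assert not grant.permits("book_flight", now)
```

“Who are you?” alone cannot answer any of those three questions; a record like this at least gives them somewhere to live.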

As AI dissolves into workflows, identity stops being a bolt-on control and becomes the connective tissue that makes accountability possible at all.

What this makes possible

The collapse of a single, tidy definition of “AI system” could be seen as an invitation to govern the world as it actually exists. Crazy talk, I know.

When we stop trying to draw boxes around systems, we can start asking better questions:

  • What behaviours create risk, regardless of architecture?
  • How should delegation be expressed and constrained?
  • What does accountability look like across a chain of automated actions?

This opens space for:

  • Governance models that focus on outcomes, not components
  • Risk frameworks tied to capability and use, not model class
  • Standards that describe relationships and authority, not just software boundaries

In other words, the end of the neat “AI system” may be the beginning of governance that finally matches production reality.

A constructive call to action

Working through that pile of articles, policy drafts, standards notes, and newsletters wasn’t about keeping up. OK, it was a little bit. But it was fun to do because consuming all that material in one big chunk helped me see patterns that only become visible when I step back: the abstraction we’ve been building regulations and governance around no longer matches reality closely enough to carry the weight we put on it.

So maybe the question is not:

What is an AI system?

It’s:

What kinds of relationships are we willing to govern, and how?

If you work anywhere near digital identity, platform risk, or standards development, this is where your attention is most valuable:

  • Track how different regions are operationalising AI boundaries in real policy, not just in definitions.
  • Notice where governance language drifts away from what teams are actually building.
  • Engage in standards work that centres behaviour, delegation, and accountability, rather than trying to preserve a shrinking box called “the AI system.”

Let’s line up the pieces so that regulation, governance, and standards support each other the way they need to, especially for the world of AI.

📩 If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. Subscribe here.

Transcript

From Headlines to Patterns [00:00:30]


After spending months immersed in AI governance materials, something shifts.

At first, the focus is on absorbing information. However, once the headlines fade, patterns begin to emerge. That moment—when repetition reveals structure—is what led to this episode’s central insight.

Importantly, this is not a statistically complete survey. Instead, it reflects sustained engagement across:

  • Policy briefs
  • Standards work
  • Regional newsletters
  • Meeting summaries
  • Multi-year governance efforts

The value lies not in completeness, but in pattern recognition.


The Quiet Disappearance of the “AI System” [00:01:35]


At the center of this episode is a deceptively simple phrase: AI system.

Once foundational in AI governance conversations, the term is now quietly fading—moving out of governance language and into standards debates.

Most people never noticed the term in the first place. Fewer will notice it disappearing.

Yet for anyone tasked with:

  • Inventorying AI systems
  • Assigning system owners
  • Documenting AI risk

…the problem has already become tangible.


Why “AI System” Never Matched Reality [00:02:30]


In practice, organizations don’t operate singular AI systems.

Instead, they manage interconnected chains of:

  • Models
  • Tools
  • APIs
  • Agents
  • Workflow engines

These components often span teams, vendors, and even national borders.

And yet, the governance question remains oddly simplistic:
“What AI systems do you run?”

There is no single object to point to—only choreography.


A Governance Term, Not an Engineering One [00:03:20]


The phrase AI system did not originate in engineering culture.

Instead, it emerged from governance efforts in the late 2010s, when policymakers needed a technology-agnostic way to talk about AI.

Other terms felt insufficient:

  • Model was too narrow
  • Algorithm too technical
  • Application too product-specific

“System” was broad enough to include people, processes, software, data, and accountability.

This framing worked well for early policy initiatives such as:

  • OECD AI Principles
  • Early drafts of the EU AI Act

At the time, it seemed possible to build compliance checklists around this abstraction.


When Production Reality Broke the Abstraction [00:04:45]


Engineering teams never truly adopted the term AI system.

They built and shipped:

  • Pipelines
  • Platforms
  • Agents
  • Interoperable stacks

For a while, this disconnect didn’t matter. However, modern production environments have exposed the gap.

Today, a single interaction may involve:

  • Intent reasoning by one model
  • Tool-calling frameworks selecting APIs
  • Workflow engines delegating tasks
  • Third-party services executing actions

At no point does a single, governable “system” exist.


The Governance Failure Beneath the Paperwork [00:05:45]


This mismatch isn’t just inconvenient—it’s a governance failure.

Why? Because if you can’t clearly name what you’re governing, you can’t:

  • Define success
  • Measure outcomes
  • Enforce accountability

Identity architects, security leads, and standards contributors are being asked to enforce controls over something that does not exist as a discrete object.

That tension is now unavoidable.


How Governments Are Working Around the Problem [00:06:40]


Rather than defining AI system directly, governments are increasingly governing by consequence.

Examples include:

  • Australia declining copyright exemptions for AI training
  • Bangladesh framing AI as national infrastructure capacity
  • China introducing ethical reviews into patent processes

Each approach avoids defining the system itself. Instead, they decide:

  • What is allowed
  • Who is responsible
  • Where accountability lands

As a result, the concept of an AI system becomes implicit—an uncomfortable and risky place for governance.


Why Standards Bodies Can’t Avoid the Definition [00:07:45]


Standards organizations don’t have the same luxury.

For interoperability to exist, terms must be defined.

Historically, standards inherited policy language, relying on phrases like:

  • AI system
  • High-risk system
  • System lifecycle management

Agentic AI workflows have shattered those assumptions.

A notable example is the IETF WebBotAuth Working Group, which is explicitly defining agents, bots, and authentication use cases to establish shared meaning.

Without definition, interoperability is impossible.


The Coming Identity Crisis in Agentic AI [00:08:30]


Digital identity frameworks are particularly vulnerable.

They were built around a model of:

  • A human user
  • A service
  • A system acting on the user’s behalf

Agentic AI disrupts this entirely.

Now we see:

  • Agents initiating actions autonomously
  • Delegation chains inferred, not granted
  • Identities shifting across services

The core question is no longer “Who clicked the button?”
It is “Who is accountable, and under what mandate?”


Language as the Foundation of Governance [00:09:00]


The issue is not that AI system is a foolish term.

The real problem is that it never became a shared object of understanding between governance and engineering.

Now, different communities are defining scope, responsibility, and accountability in incompatible ways.

That fragmentation will make governance, standards, and deployment significantly harder to align.

If we cannot clearly name what we are governing, we cannot govern it well.


Closing Thoughts and Next Steps [00:09:23]


As always, this episode is ultimately about language—and why precision matters.

If we want better AI governance, we must bring more rigor to how we describe:

  • Architectures
  • Responsibilities
  • Systems involved

Even when those “systems” are distributed, fluid, and uncomfortable to define.

Thank you for listening. Stay curious, stay engaged, and join the conversation next week.

Heather Flanagan

Principal, Spherical Cow Consulting
Founder, The Writer's Comfort Zone
Translator of Geek to Human
