The Regulator’s Dilemma
“In my last post, I wrote that resilience demands choice and that some dependencies matter more than others, and pretending otherwise spreads resources too thin. I joked that I was glad I didn’t have to make those decisions, but of course, I do.”
We all do, in our own way. Every time we vote, renew a passport, or trust a digital service, we’re participating in a system of priorities. Someone, somewhere, has decided what counts as critical and what doesn’t.
That realization sent me down another research rabbit hole about who decides what counts as critical, and what happens once something is labeled that way. How does accountability become infrastructure, and how do those rules, once written for slower systems, now shape the limits of resilience?
Critical infrastructure sounds like a technical category, but it’s really a political one. It represents a social contract between governments, markets, and citizens; a way of saying, “these systems matter enough to regulate.” The problem is that the rules we inherited for defining and managing critical infrastructure were written for a different kind of world. They were designed for slower systems, clearer ownership, and risks that respected national borders.
You can subscribe and listen to the podcast on Apple Podcasts, or wherever you listen to podcasts.
And be sure to leave me a rating and review!
Generational change
This brings me to something a friend and longtime identity thinker, Ian Glazer, has been discussing lately: Generational Change: Building Modern IAM Today (great talk at Authenticate 2025). His premise is simple: every major shift in how we manage identity begins with a crisis of accountability.
For our generation, at least in the U.S., that story starts twenty-five years ago with Enron. When the company collapsed under the weight of its own fraud, it triggered the Sarbanes-Oxley Act (SOX), a sweeping effort to rebuild public trust through enhanced oversight and auditing. SOX didn’t just transform corporate governance; it rewired the architecture of digital systems.
To prove they weren’t lying to their shareholders, companies had to prove who had access to financial systems, when they had it, and why. That single regulatory demand gave birth to the modern identity and access management (IAM) industry. User provisioning, quarterly access reviews, and separation-of-duties rules were not technical innovations for their own sake. They were compliance artifacts.
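To make the flavor of those compliance artifacts concrete, here's a minimal sketch of a separation-of-duties check, the kind of rule a quarterly access review automates. The entitlement names and toxic pairs are invented for illustration, not drawn from any real system:

```python
# Hypothetical separation-of-duties (SoD) check, the kind of control
# SOX-era access reviews formalized. Entitlement names are illustrative.

# Pairs of entitlements one person should never hold at the same time.
TOXIC_COMBINATIONS = [
    ("create_vendor", "approve_payment"),
    ("submit_journal_entry", "approve_journal_entry"),
]

def sod_violations(user_entitlements):
    """Return the toxic entitlement pairs a single user holds."""
    held = set(user_entitlements)
    return [pair for pair in TOXIC_COMBINATIONS if held.issuperset(pair)]

# A quarterly review would run this over every account and flag offenders.
alice = ["create_vendor", "approve_payment", "view_reports"]
bob = ["view_reports"]

assert sod_violations(alice) == [("create_vendor", "approve_payment")]
assert sod_violations(bob) == []
```

The logic is trivially easy to run; the hard part was always the surrounding ritual of collecting the entitlement data, reviewing it, and filing the evidence.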
Accountability as a system requirement.
It worked, mostly. However, it also froze an entire generation of identity practices in a compliance mindset designed for static environments—the kind of world where servers sat in data centers, applications were monolithic, and auditors could literally count user accounts.
That world no longer exists. I’m not sure it ever really did, but it came close enough for compliance purposes.
Today’s infrastructure is a living network of APIs, ephemeral containers, and machine-to-machine connections. Permissions change constantly; roles are inferred rather than assigned. The SOX model of accountability—document-based, periodic, human-verified—cannot keep up with the speed and fluidity of digital operations.
Yet we still design our controls as if that old world were intact. Every new regulation borrows the same logic: prove compliance through evidence after the fact. In an API economy, that’s like measuring river depth with a snapshot.
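Here's a toy illustration of why the snapshot fails. The event names and the "sensitive entitlement" policy are invented; the point is only that a short-lived grant is invisible to a point-in-time audit but obvious to anything watching the event stream:

```python
# Toy contrast: point-in-time audit vs. continuous evaluation.
# Event names and the policy below are invented for illustration.
from datetime import datetime

# A stream of (timestamp, principal, action) grant/revoke events.
events = [
    (datetime(2025, 1, 1, 0, 0), "svc-deploy", "grant:prod_db_admin"),
    (datetime(2025, 1, 1, 0, 5), "svc-deploy", "revoke:prod_db_admin"),
]

def snapshot_state(events, at):
    """Entitlements held at one moment: the auditor's quarter-end photo."""
    state = set()
    for ts, user, action in sorted(events):
        if ts > at:
            break
        verb, entitlement = action.split(":")
        if verb == "grant":
            state.add((user, entitlement))
        else:
            state.discard((user, entitlement))
    return state

def continuous_findings(events, sensitive=("prod_db_admin",)):
    """Flag every sensitive grant as it happens, however brief."""
    return [(ts, user) for ts, user, action in events
            if action.startswith("grant:")
            and action.split(":")[1] in sensitive]

quarter_end = datetime(2025, 3, 31)
assert snapshot_state(events, quarter_end) == set()   # audit sees nothing
assert len(continuous_findings(events)) == 1          # the stream saw the grant
```

The five-minute admin grant never shows up in the quarter-end snapshot, which is exactly the gap between evidence-after-the-fact and the systems it claims to describe.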
This is the essence of what it now means to be a regulated industry: to operate in a world where compliance frameworks lag behind reality, and where the very act of proving control can undermine the flexibility that keeps systems running.
The challenge ahead is to re-imagine accountability for systems that no human can fully audit anymore.
Who decides what’s critical
The SOX era showed us what happens when accountability becomes infrastructure. Once something is declared essential to the public good, the expectations around it change. Auditors appear. Processes multiply. Documentation becomes proof of virtue. The thing itself—energy, banking, cloud, identity—may not inherently change, but the burden of accountability does, and how that accountability is handled influences how much innovation is allowed to happen.
That’s the quiet tension at the heart of every critical infrastructure discussion: as systems become more indispensable, the scrutiny around them intensifies. The stakes rise, and so do the checklists.
When oversight can’t keep up
At the top of that pyramid sits government. Regulation is, in theory, how society expresses its collective expectations—how we agree that safety and reliability matter more than speed or convenience. But in practice, the model of oversight we keep reusing comes from a slower era: a world where an inspector could show up with a clipboard, verify that the valves were turned the right way, and sign off.
In digital infrastructure, that model doesn’t scale. Governments can loom over the industry’s figurative shoulder, but they can’t keep up with its velocity. The old rituals of control—compliance reports, annual audits, quarterly attestations—look increasingly ceremonial when infrastructure changes by the second.
And yet, the instinct to regulate through prescription persists. When governments define something as critical, they tend to follow with a detailed checklist of how to do the job, codifying procedures in the name of safety. It’s a natural response to risk, but one that struggles to survive contact with continuous deployment pipelines and automated policy engines.
So maybe the harder question isn’t what counts as critical, but how much definition it requires. Can we acknowledge essentiality without turning it into bureaucracy? Can we create accountability without demanding omniscience? Governments face another familiar paradox here: the need to be specific about outcomes without prescribing the tools used to achieve them.
If the first generation of regulation hard-coded accountability into organizations, the next will need to hard-code trust into systems—without pretending that trust can be reduced to a form.
Accountability without omniscience
Declaring something “critical” has always carried the weight of oversight. The assumption is that governments, or their proxies, can both understand and manage the risk. But as infrastructure becomes increasingly digital and interconnected, that assumption begins to fail.
The OECD’s Government at a Glance 2025 report states that most countries now recognize that infrastructure resilience demands a whole-of-government, system-wide approach—one that acknowledges interdependencies, information-sharing, and trust as policy instruments in their own right. Yet the governance structures built for power grids and pipelines aren’t well-suited to cloud platforms, APIs, or digital identity systems. The more critical these become, the less feasible it is for any single authority to monitor and manage every component.
That’s the paradox of modern accountability: the more connected systems get, the harder it is to define responsibility in a way that scales. The critical infrastructure lab and the Research Network on Internet Governance (REDE), funded by the Internet Society, made the case that sovereignty and resilience now depend less on control and more on coordination—on being able to share risk data and dependencies transparently across borders and sectors. In principle, it’s the right move. In practice, it’s a trust exercise that few institutions are prepared for.
The idea of distributed accountability sounds progressive, but it also has a familiar flaw. When everyone is accountable, no one is accountable. The result is a kind of modern Bystander Effect: every actor assumes someone else will notice, intervene, or fix the problem. The chain of command dissolves into a web of good intentions.
This is the point where governance runs into the limits of imagination. Most people—and most regulators—can picture centralized oversight. They can picture privatized responsibility. But a shared model of accountability, one that is collaborative without being amorphous, is much harder to design. And yet that’s exactly what digital infrastructure demands.
We don’t need omniscience. But we do need visibility, and a clear sense of who moves first when something goes wrong.
When visibility becomes control
The inability to imagine shared accountability has consequences. When coordination feels uncertain, governments reach for the tools they already understand: classification, jurisdiction, and control.
It’s an understandable impulse. No one wants to be caught watching a crisis unfold with no clear authority to act. So when infrastructure becomes essential, the default response is to anchor it to sovereignty—to say, “this part of the network belongs to us.” Visibility becomes control.
But this is also where the governance model for critical infrastructure collides with the architecture of the Internet. Digital systems don’t map neatly onto national boundaries, and yet the instinct to assert jurisdiction persists.
The European Union’s NIS2 and CER directives, the United States’ NSM-22, India’s CERT-In, and a growing list of regional cybersecurity laws all share the same structure: protect the systems that matter most within the territory you can regulate.
Each framework makes sense in isolation. Together, they create a patchwork of compliance zones that overlap but rarely align. The more governments move to secure their slice of the Internet, the more the global system fragments. Resilience becomes something you defend domestically rather than something you coordinate internationally.
There are various ways to interpret this, as scholars like Niels ten Oever and others exploring “infrastructural power” have noted. I’ll refer to it as the politics of dependencies—states now manage risk not only by building capacity, but by redefining what (and whom) they depend on. It’s a rational strategy in an interdependent world, but it comes with trade-offs. Limiting dependency also limits collaboration. A jurisdiction that can’t tolerate external risk soon finds itself isolated from shared innovation and shared recovery.
This is what makes the regulator’s dilemma so difficult. The very act of governing risk can create new vulnerabilities. The more states assert control over digital infrastructure, the more brittle global interdependence becomes. And yet, doing nothing isn’t an option either.
Doing something (without breaking everything)
If doing nothing isn’t an option, what does doing something look like?
The compliance model
The easiest path is the one we know: expand the existing machinery of audits, attestations, and oversight. It offers the comfort of familiar processes: defined responsibilities, measurable outcomes, and the illusion of control.
Unfortunately, as discussed, the compliance model is self-limiting. Checklists don’t scale well when the systems they’re meant to protect evolve faster than the paperwork. It works locally but drags globally, slowing innovation in the name of assurance.
The sovereignty model
The second path is already well underway. Nations reassert control by treating digital infrastructure as a means of asserting sovereignty. Clouds become domestic. Data stays home. Dependencies are pruned in the name of national security.
This approach can strengthen internal resilience, but only within its own borders. The cost of the sovereignty model is interoperability. The more countries pursue sovereign safety, the more brittle cross-border systems become, and the more the Internet looks like a federation of incompatible jurisdictions.
The coordination model (and its limits)
The third path—shared coordination—remains the ideal of any globalist like me, but it’s also the least likely.
True collaboration demands transparency, and transparency means exposing dependencies that are strategically or commercially sensitive. In a world leaning toward self-protection, that kind of openness is rare.
So coordination won’t vanish, but it will devolve, shifting from global alignment to regional or sector-based pacts where trust is built within smaller, semi-compatible networks. That’s not the open Internet we once imagined, but it may be the one we have to learn to live with.
Each of these paths has trade-offs.
Compliance centralizes process. Sovereignty centralizes power. Coordination, when it happens, centralizes trust, and that is becoming a scarce resource. The challenge now is not to prevent fragmentation, but to make it survivable.
The next generation of accountability
Ian Glazer’s idea of Generational Change in Identity has changed how I see the evolution of regulation and infrastructure. Every generation inherits a crisis it didn’t design and a set of controls that no longer fit.
For ours, that crisis isn’t fraud or corporate malfeasance. It’s fragility; the uneasy recognition that we’ve built systems so interdependent that no one can fully explain how they work, let alone govern them coherently.
If the last generation of regulation was born from the failure of a few companies, the next will emerge from the failure of shared systems. We need to start thinking about what kind of accountability we design in response. Do we double down on compliance and centralization (something that would make me Very Sad), or accept that resilience must now be negotiated—sometimes awkwardly, sometimes locally—among the people and institutions who can still see each other across the network?
We may not get a global framework this time. We may get overlapping, regionally aligned regimes that reflect the trade-offs each society is willing to make between openness, autonomy, and control. That’s not necessarily a failure. It’s the kind of adaptation that complex systems make when they outgrow the rules that shaped them.
If the last generation of accountability was about proving control, the next must be about sharing it: building systems where visibility replaces omniscience, and cooperation replaces the illusion of total oversight.
That’s the generational change we’re living through: the slow shift from auditing the past to governing the very immediate present. And if we’re good, we’ll learn to design for accountability the way we once designed for uptime.
📩 If you’d rather track the blog than the podcast, I have an option for you! Subscribe to get a notification when new blog posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
Introduction
[00:00:30] Welcome back.
In my last post, The Paradox of Protection, I argued that resilience requires choice — that some dependencies matter far more than others. And pretending otherwise only spreads resources too thin.
I also joked — particularly in the written post — that I was very glad I personally did not have to make those decisions.
But of course, the truth is that we all do.
Every time we vote, renew a passport, or trust a digital service, we participate in a system of priorities. Someone, somewhere, has already decided what counts as critical and what doesn’t.
That realization sent me down a research rabbit hole:
Who decides what’s critical — and what happens once something earns that label?
Because the moment a system becomes critical, it doesn’t just get protection.
It gets rules, institutions, oversight, and redefined accountability.
The Regulator’s Dilemma
[00:01:24] Accountability becomes surprisingly rigid.
[00:01:28] Today, let’s talk about the regulator’s dilemma — and how the very act of governing risk creates new vulnerabilities.
A few weeks ago, my friend and longtime identity thinker Ian Glazer gave a talk at Authenticate 2025 called Generational Change: Building Modern IAM Today. One idea from his opening really stuck with me:
Every major shift in how we manage identity begins with a crisis of accountability.
[00:02:01] For our generation — the last 25 years — that story begins with Enron.
Its collapse triggered the Sarbanes–Oxley Act (SOX), a sweeping effort to rebuild trust through oversight and auditing. And SOX didn’t just transform corporate governance — it rewired digital architecture.
To prove honesty to shareholders, companies had to prove:
- Who had access
- When they had it
- And why
That requirement gave birth to modern identity and access management (IAM).
Key IAM practices that emerged:
- User provisioning
- Quarterly access reviews
- Separation of duties
- Access certification
Not because they were fun technical innovations, but because compliance required them.
And to be fair, it worked. Mostly.
But it also froze an entire generation of identity practices into a compliance mindset built for:
- Static servers
- Data centers
- Monolithic applications
- Human auditors counting accounts
A world that no longer exists.
The Accountability Lag
[00:03:21] Today’s infrastructure is a living network — APIs, ephemeral containers, and machine-to-machine connections.
[00:03:28] Permissions shift constantly.
Roles are inferred.
Identity is dynamic.
[00:03:33] And yet we still audit like it’s 2005.
The SOX model — document-based, periodic, human-verified — cannot keep up with the speed of cloud and automation.
Ironically, the more we try to prove control… the more we slow down the systems we’re trying to protect.
Compliance starts to undermine resilience.
It’s like measuring a rushing river using a still photograph.
You can capture the moment — but not the motion.
Once something becomes critical, auditors appear, processes multiply, and documentation becomes proof of virtue. Not capability.
The Weight of Critical Infrastructure
[00:04:42] As systems become indispensable, scrutiny intensifies.
[00:04:47] Stakes rise.
[00:04:50] Checklists expand.
At the top of all this sits government regulation — society’s way of expressing collective expectations around safety and reliability.
In theory, it’s how we prioritize the public good.
In practice, oversight is modeled on a world where inspectors carried clipboards and verified valves.
[00:05:21] That model doesn’t scale to digital infrastructure.
Government can loom over an industry’s shoulder — but it cannot see fast enough or deep enough to match the pace of automation.
[00:05:31] Annual audits and quarterly attestations become ceremonial when infrastructure shifts every second.
And yet the instinct to regulate through prescriptive checklists persists.
But prescriptive rules do not survive contact with:
- Continuous deployment
- API-driven systems
- Automated policy engines
So the real question becomes:
[00:06:01] What truly counts as critical — and how much definition is necessary?
Can we acknowledge essentiality without creating bureaucratic drag?
Can we create accountability without pretending omniscience?
When Accountability Becomes Trust
[00:06:25] If the last generation of regulation hard-coded accountability into organizations…
[00:06:30] The next will have to hard-code trust into systems.
But not the kind of trust that can be reduced to a form or a checklist.
[00:06:39] Declaring something critical always invites oversight — and assumes governments can understand and manage the risk.
Increasingly, they can’t.
The OECD’s Government at a Glance 2025 notes that resilience now demands a whole-of-government approach, treating information sharing and trust as policy instruments.
Yet our governance frameworks were built for:
- Power grids
- Pipelines
- Physical infrastructure
Not cloud platforms, APIs, or digital identity.
[00:07:15] The more critical digital systems become, the less feasible it is for any single authority to monitor them.
It’s the paradox of modern accountability.
[00:07:23] More connectivity = harder definitions of responsibility.
Research from the Critical Infrastructure Lab and the Internet Society-funded Research Network on Internet Governance (REDE) highlights this shift:
Resilience now depends less on control and more on coordination.
But coordination has a flaw:
When everyone is accountable, no one is accountable.
Governance as a Trust Exercise
[00:08:03] When every actor assumes someone else will intervene, the chain of command dissolves.
[00:08:10] Governance then becomes imagination:
Regulators can picture centralization.
They can picture privatization.
But shared accountability — structured collaboration — is harder to design.
Yet digital infrastructure demands exactly that.
We don’t need omniscience.
We need visibility and clarity about who moves first when something goes wrong.
When coordination feels uncertain, governments default to familiar tools:
- Classification
- Jurisdiction
- Control
Because no one wants to watch a crisis unfold without clear authority to act.
Sovereignty and Fragmentation
[00:08:50] When infrastructure becomes essential, governments anchor to sovereignty:
“This part of the network is ours.”
But digital systems ignore borders.
Still, the instinct persists.
The result is a wave of regional cybersecurity laws:
- EU NIS2
- U.S. NSM-22
- India’s CERT rules
- Regional data residency mandates
Each makes sense alone.
Together, they form a patchwork of compliance zones that rarely align.
The more governments secure their slice of the Internet, the more the global system fragments.
Resilience becomes domestic instead of international.
The Three Paths of Modern Governance
[00:09:55] When doing nothing isn’t an option, what does doing something look like?
There are three paths:
Compliance
Comfortable, measurable, familiar — but self-limiting.
Checklists don’t scale.
Paperwork slows innovation.
Sovereignty
Domestic clouds. Localized data.
Strengthens internal resilience but fractures interoperability.
Coordination
Shared governance, mutual visibility, collective risk management.
Globally the best path — but increasingly rare because it requires uncomfortable transparency.
And transparency exposes dependencies that many institutions don’t want exposed.
So coordination shrinks:
- From global to regional
- From universal to sectoral
- From open to semi-compatible
[00:11:29] Not the Internet we imagined.
But possibly the one we have to live with.
The Generational Shift Ahead
Each governance path centralizes something:
- Compliance → process
- Sovereignty → power
- Coordination → trust
And trust is scarce.
This brings us back to Internet fragmentation:
We cannot prevent fragmentation, but we can make it survivable.
Identity governance is a generational story.
Each generation inherits a crisis it didn’t design and controls that don’t fit anymore.
The crisis today isn’t fraud.
It’s fragility — and our recognition that our systems are too interdependent to fully understand.
[00:12:32] The next regulatory wave will emerge from failures in shared systems.
We must choose what kind of accountability to design:
- Double down on compliance and centralization?
- Or negotiate resilience — sometimes awkwardly, sometimes locally — with the people who can still see each other across the network?
Likely we’ll see:
- Overlapping
- Regionally aligned
- Sector-specific
Regimes that reflect societal trade-offs between openness, autonomy, and control.
This isn’t failure.
It’s adaptation.
From Proving Control to Sharing It
If the last generation of accountability was about proving control…
The next must be about sharing it.
Building systems where:
- Visibility replaces omniscience
- Cooperation replaces total oversight
- Resilience is negotiated, not dictated
This is the shift from auditing the past to governing the present.
[00:13:41] And if we’re good — if we learn from trade-offs without repeating them — we might design accountability the way we once designed for uptime.
We’ll see how it goes.
Closing Thoughts
There’s more in the written blog, including research links that informed this episode.
If you’d like to reflect or push back, I’d love to continue the conversation.
[00:14:07] Have a great rest of your day.
Outro
If this episode helped make things clearer — or at least more interesting — share it with a friend or colleague.
Connect with me on LinkedIn: @hlflanagan
And if you enjoyed the show, please subscribe and leave a rating and review on your favorite podcast platform.
You’ll find the full written post at sphericalcowconsulting.com.
Stay curious. Stay engaged. Let’s keep these conversations going.
