Not Just a Technical Problem: Why Fighting Disinformation Needs Resilient Infrastructure
“Disinformation. Misinformation. Malinformation. These terms get used interchangeably, but they’re not the same thing.”
That distinction matters when designing resilient infrastructure that supports trust.
- Misinformation is false or misleading information shared without intent to deceive.
- Disinformation is deliberately deceptive content, often politically or financially motivated.
- Malinformation is factual information used out of context to cause harm.
Most of our efforts to address these problems focus on content: fact-checking, moderation, and takedown requests. Those activities are important. But after sitting through multiple sessions at WSIS+20 last month, I came away thinking about the architectures that enable or undermine digital trust in the first place. (Did you see my post last week on lessons from WSIS+20?)
Remember, trust doesn’t start with content. It actually starts with infrastructure.
The people in those WSIS+20 rooms weren’t talking about disinformation in the abstract. They were talking about humanitarian workers in the field, where timely, accurate, and secure information can be a matter of life and death. They talked about public health campaigns, peacekeeping missions, and journalists trying to survive in an environment where lies move faster than truth. And in almost every session, it became clear that the technical underpinnings of the Internet—especially in crisis and conflict settings—are being overlooked.
You can subscribe and listen to the podcast on Apple Podcasts, or wherever you get your podcasts.
And be sure to leave me a rating and review!
Identity is part of the equation
While identity wasn’t explicitly discussed in these sessions, it’s a critical part of establishing authenticity, which in turn helps build trust. Identity and access management (IAM) systems can’t prevent disinformation, but they can help validate source integrity and support accountability.
- Verified senders can be identified without compromising privacy.
- Digital credentials can establish provenance for content or data. (Shout out to the C2PA work here!)
- Attribute-based access can help ensure information reaches the right people in the right roles.
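To make that last point concrete, attribute-based access control is just a policy evaluated against the attributes a verified credential asserts about someone. Here is a minimal sketch; the roles, attributes, and policy are all invented for illustration, not taken from any real deployment:

```python
# Sketch: attribute-based access control (ABAC) for crisis communications.
# The roles, attribute names, and policy below are invented for illustration.

def can_receive(alert_level: str, attributes: dict) -> bool:
    """Decide whether a recipient's verified attributes permit delivery."""
    if alert_level == "public":
        return True
    if alert_level == "field-sensitive":
        # Only verified humanitarian responders deployed in-region.
        return (attributes.get("role") == "field-responder"
                and attributes.get("region_verified", False))
    return False

# In practice these attributes would come from a verified digital
# credential, not from self-assertion.
responder = {"role": "field-responder", "region_verified": True}
journalist = {"role": "journalist", "region_verified": False}

print(can_receive("field-sensitive", responder))   # True
print(can_receive("field-sensitive", journalist))  # False
```

The point isn’t the specific policy; it’s that delivery decisions hinge on attributes someone has verified, which is exactly where IAM earns its keep.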
I’m not promoting centralized control or surveillance. What I want is to build confidence in the systems we rely on to make decisions, especially in high-stakes environments.
Disinformation and infrastructure resilience
Something I thought about as I settled down with my notes after the event, though it wasn’t phrased quite this way during any of the sessions: when infrastructure fails, it doesn’t just disrupt services; it disrupts the foundation of trust that identity and information systems rely on. Several sessions at WSIS+20 focused on resilient digital infrastructure, especially in the context of sustainability and the UN’s 2030 Agenda. Speakers from IEEE, CERN, and disaster risk reduction agencies reminded us that resilience is more than just a technical property; it’s what enables everything. Disinformation thrives when infrastructure fails. That includes failures of availability, integrity, and interoperability. When identity systems falter, the ability to authenticate sources, validate messages, and maintain digital trust during crisis response suffers, too.
- Digital infrastructure often isn’t designed to serve people in remote or underserved areas.
- Technical standards don’t always account for multilingual or multi-platform accessibility.
- Short-term, market-driven decisions prioritize scalability over long-term resilience.
Standards developers and IAM professionals know this at a technical level. Heck, I wrote about this a few weeks ago in a post on resilience in standards. But what’s often missed is how infrastructure failure becomes a governance issue. When people lose trust in digital systems, they distrust more than just the failed platform. They also start to distrust institutions and even each other.
Resilience isn’t for other people
IAM systems face similar challenges: do we build for edge cases, or optimize for the majority? Whose threat model are we prioritizing? How do we balance user experience with verifiability?
To complicate matters further, technology designed to protect can also exclude.
- Overly strict verification requirements can lock out vulnerable populations.
- Misapplied protections can be used to suppress journalism or advocacy.
- “Safety” features can become surveillance tools in the wrong hands.
Even well-intentioned systems can marginalize people when their design doesn’t include a wide range of needs and experiences.
If we want to fight disinformation at scale, we need to stop thinking of it as just a content problem. It’s an infrastructure problem. And digital identity experts and standards architects have a role to play.
Closing the loop: From resilience back to disinformation
The sections above touched on how resilient, inclusive infrastructure supports digital trust. But let’s not lose sight of the central theme: disinformation. It spreads most easily where infrastructure is brittle, trust is low, and identity signals are weak or absent. That’s why the work of IAM professionals and standards developers matters—not just for security or compliance, but for defending the conditions in which truth can survive.
So, what can identity professionals do?
I love it when a plan comes together, and the plan here is to fight disinformation by improving the resilience of our systems.
- Treat resilience as a design goal: Build IAM systems that account for low-connectivity, low-trust environments.
- Make authenticity an architectural concern: Support verifiable claims, provenance metadata, and strong-but-private identifiers.
- Engage in governance conversations: Push for feedback loops between standards bodies, policymakers, and civil society. Ask who is being served and who isn’t.
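On the second item above, "provenance metadata" can be as simple as a signed claim binding an author to a content hash. Real provenance systems (C2PA manifests, for instance) use public-key signatures and certificate chains; the stdlib-only HMAC version below only illustrates the shape of the idea, and the key and claim fields are invented for the sketch:

```python
# Sketch: attaching verifiable provenance metadata to a message.
# Real systems use public-key signatures; this shared-key HMAC version
# is a stdlib-only stand-in to show the structure.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"  # assumption for the sketch

def publish(content: bytes, author: str) -> dict:
    """Create a provenance record: an authored claim plus a tag over it."""
    digest = hashlib.sha256(content).hexdigest()
    claim = {"author": author, "sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, "sha256").hexdigest()
    return {"claim": claim, "tag": tag}

def verify(content: bytes, record: dict) -> bool:
    """Check both the content hash and the integrity of the claim itself."""
    if hashlib.sha256(content).hexdigest() != record["claim"]["sha256"]:
        return False
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, record["tag"])

record = publish(b"Evacuation routes updated at 14:00.", "relief-org")
print(verify(b"Evacuation routes updated at 14:00.", record))  # True
print(verify(b"Evacuation routes CANCELLED.", record))         # False
```

Notice that a tampered message fails verification even though the claim itself is untouched; that binding is what makes provenance useful against manipulated content.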
And what can standards architects do?
- Define and document trust assumptions: Clearly state what the system assumes about message integrity, source authenticity, and the broader infrastructure. Make those assumptions visible and testable.
- Design for degraded conditions: Create standards that support verifiability even when connectivity is intermittent, metadata is partial, or infrastructure is compromised.
- Include threat models beyond fraud: Consider disinformation campaigns, information suppression, and adversarial use of identity signals in your threat models.
- Build consultation into the process: Include journalists, humanitarian responders, civil society groups, and policy experts in standards development. Their use cases will expand your view of what “interoperable” and “resilient” really mean.
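The first item on that list, making trust assumptions visible and testable, can be as lightweight as recording each assumption alongside an executable check that audits real messages against it. A minimal sketch follows; the specific assumptions and message fields are invented for illustration:

```python
# Sketch: trust assumptions recorded as named, executable checks.
# The specific assumptions and message fields below are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrustAssumption:
    name: str
    description: str
    check: Callable[[dict], bool]  # runs against a message/context dict

ASSUMPTIONS = [
    TrustAssumption(
        "signed-source",
        "Every message carries a sender signature.",
        lambda msg: "signature" in msg,
    ),
    TrustAssumption(
        "fresh-timestamp",
        "Messages include a timestamp for replay detection.",
        lambda msg: "timestamp" in msg,
    ),
]

def audit(msg: dict) -> list:
    """Return the names of assumptions the message violates."""
    return [a.name for a in ASSUMPTIONS if not a.check(msg)]

print(audit({"signature": "abc123", "timestamp": 1718000000}))
# []
print(audit({"body": "unsigned rumor"}))
# ['signed-source', 'fresh-timestamp']
```

Once assumptions live next to runnable checks like this, "degraded conditions" stops being abstract: you can see exactly which guarantees a given environment can and cannot meet.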
Building for trust means building for everyone
Trust isn’t just about whether users believe your system is secure. It’s about whether they believe the Internet is still a place where truth can be found and relied upon. That belief erodes when digital systems exclude marginalized, underserved, and underrepresented users, whose experiences and threat models are often left out of design decisions. And that erosion creates fertile ground for disinformation, misinformation, and malinformation to take root.
This connection wasn’t made explicitly in the WSIS+20 sessions, but it became clear to me: trust in digital systems isn’t separate from trust in public discourse. If we want to defend the truth, we have to build systems that serve the whole public, not just the easy parts of it.
If we want to fight disinformation at scale, we need to stop thinking of it as just a content problem. It’s an infrastructure problem, and identity has a role to play.
This work is messy. It spans disciplines, sectors, and priorities. But if we want trustworthy systems, we have to build them with and for the people who rely on them most. That starts with looking beyond our immediate use cases and asking harder questions about who benefits, who’s left out, and what it means to build for trust in a world where truth itself is contested.
📩 Want to stay updated when a new post comes out? I write about digital identity and related standards—because someone has to keep track of all this! Subscribe to get a notification when new blog posts and their audioblog counterparts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
Welcome to the Digital Identity Digest
[00:00:04]
Welcome to the Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting. I’m Heather Flanagan, and every week I break down interesting topics in the field of digital identity — from credentials and standards to browser weirdness and policy twists.
If you work with digital identity but don’t have time to follow every specification or hype cycle, you’re in the right place.
Let’s get into it.
What Is Disinformation, Really?
[00:00:29]
Disinformation. Misinformation. Malinformation.
They may sound similar, but these terms have crucial differences. And if we want to design digital systems that truly support trust and accountability, those differences matter.
This week, I’m sharing an unexpected takeaway from my time at WSIS+20 in Geneva. I left that event with a strong belief that disinformation isn’t just a content problem — it’s an infrastructure problem.
And that infrastructure includes identity.
Defining the Terms
[00:01:05]
Let’s start with some clear definitions — because words matter.
- Misinformation is false or misleading information shared without intent to deceive. Think: hearing a rumor and passing it along without realizing it’s untrue.
- Disinformation is intentionally deceptive, crafted and spread to influence behavior or opinion, often politically or financially.
- Malinformation is true, but used maliciously — like doxing someone or leaking sensitive context to cause harm.
Most efforts to combat these focus on content — fact-checking, takedowns, moderation policies. And that work is vital.
But what I heard in the WSIS sessions wasn’t just about policies. It was about digital infrastructure.
Real-World Impact: Why Infrastructure Matters
[00:02:00]
Here are a few stories that helped this hit home:
- Humanitarian workers struggling to communicate securely in conflict zones.
- Journalists fighting to survive and tell the truth amid algorithmic lies.
- Peacekeeping missions and public health campaigns racing to get accurate information out before disinformation spreads faster.
In all these cases, trust didn’t hinge on whether someone flagged a tweet. It depended on whether the underlying systems could support or sabotage the truth.
Technical Failures Become Governance Failures
[00:03:02]
If your network goes down, people will turn to unofficial channels.
If your logs are incomplete or timestamps unverifiable, message integrity falls apart.
If your system can’t authenticate a sender, how do you know whether or not to act?
That’s not just a technical failure — it’s a governance failure.
And when people lose trust in digital systems, the consequences ripple outward:
- Trust in platforms erodes
- Trust in institutions falters
- Trust in each other breaks down
Where Identity Comes Into Play
[00:03:45]
Interestingly, identity wasn’t a primary topic in most disinformation sessions. But it kept showing up — just at the edges.
Because when you ask:
- Who sent this message?
- Has it been tampered with?
- Is this authentic?
You’re really asking identity questions.
Identity systems can help us answer those questions without sacrificing privacy, by:
- Establishing provenance
- Enabling verified senders to be trusted faster
- Supporting credentials that show who said what, and when
- Ensuring information flows to the right people at the right time
While identity and access management alone can’t solve the disinformation crisis, they’re essential tools in restoring trust in the systems where that information travels.
Designing for True Resilience
[00:04:50]
Another recurring theme at WSIS+20 was resilience. Not just uptime and backups — real resilience.
How systems perform in messy, unpredictable, even dangerous environments.
Sessions on sustainability, infrastructure, and disaster response included speakers from IEEE and CERN, physicists, and others who manage risk daily.
One takeaway stuck with me:
“Resilience isn’t just technical — it’s a social contract.”
When resilience breaks down, we’re breaking that contract. We’re designing for the well-connected, the resourced, the mainstream — not for:
- Remote communities
- Multilingual populations
- Low-trust or high-risk environments
And identity systems? They struggle with this all the time.
Exclusion Creates Fertile Ground for Disinformation
[00:06:02]
Strict verification protects against fraud. But what if you’re a displaced person without documents?
In trying to protect, we often exclude. And where people are excluded, disinformation grows.
Because people turn to what’s available. If trustworthy systems aren’t available — or don’t work for them — they’ll turn to anything that is.
So bringing this back full circle, disinformation thrives where:
- Systems can’t verify sources
- Users don’t trust what they see
- Infrastructure fails or excludes
If your digital trust infrastructure — identity included — only works in ideal conditions, then you’ve built perfect conditions for disinformation.
Why Identity Standards Matter
[00:07:01]
Identity and access management (IAM) standards matter because they define the defaults.
They determine:
- What’s interoperable
- What can be verified
- Whether truth can be seen, heard, and trusted
So if you’re an identity professional, what can you actually do?
What Identity Professionals Can Do
[00:07:25]
Here are some tangible steps to start with:
- Treat resilience as a design goal: Consider low-connectivity and low-trust environments. Build for those, too.
- Make authenticity an architectural concern: Support verifiable claims, embed provenance, and use privacy-preserving identifiers.
- Engage in governance conversations: Don’t outsource this to policymakers. Collaborate with standards groups, civil society, policymakers, and employers.
Ask hard questions:
Who’s being served? Who’s being left out?
For Standards Architects: You Are My People
[00:08:20]
If you work on protocols, specs, or standards, here’s your to-do list:
- Define and document trust assumptions: Spell out what the system presumes about message integrity and infrastructure.
- Design for degraded conditions: Don’t assume perfect metadata or nonstop uptime.
- Think beyond fraud: Include disinformation, suppression, and misuse in your threat models.
- Build consultation into the process: Bring in journalists, emergency responders, and civil society leaders.
Their use cases will expand your understanding and improve your solutions.
Closing Thoughts: Trust as a Design Mandate
[00:09:30]
Trust isn’t just about security. It’s about whether people believe in digital systems at all.
When systems exclude people — by design or by neglect — trust erodes.
And in that erosion, disinformation thrives.
That’s what stood out to me most at WSIS+20.
If we want to fight mis-, dis-, and malinformation, we can’t just treat it as a content problem.
We must treat it as an infrastructure problem.
And identity professionals and standards architects?
We’re part of the solution.
It’s messy work. Cross-disciplinary. Politically thorny. Often frustrating.
But if we want trustworthy systems, we must build them for everyone — not just the easy users.
So keep asking:
- Who’s benefiting?
- Who’s being left out?
Make it explicit. Even if it’s uncomfortable.
What does it really mean to build for trust, in a world where truth itself is constantly contested?
Food for thought. And thank you for listening.
Final Notes
[00:10:00]
If this helped make the complex a little clearer — or at least more interesting — share it with a friend or colleague.
Connect with me on LinkedIn @hlflanagan.
And if you enjoyed the show, subscribe and leave a review on Apple Podcasts…
[00:10:16]
…or wherever you listen.
[00:10:19]
You can also find the full written post at sphericalcowconsulting.com.
Stay curious. Stay engaged.
Let’s keep these conversations going.
