Are You Human? A Dive Into the Proof of Personhood Debate

Can we prove personhood and distinguish between humans and bots?

I don’t think of myself as an expert in non-human identity (NHI). Instead, I’d say I’m NHI-curious and eager to share what I’m learning. Lately, I’ve been going down a rabbit hole about when and how to indicate if someone—or something—is human. I’m clearly not alone in asking this. Last year, I was one of many co-authors of a paper, Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who is Real Online, exploring these questions and challenges. Spoiler: we ended with a call for further discussion. (I’m happy to add that it won an award: The Future of Privacy Forum’s 15th Annual Privacy Papers for Policymakers Award recognizes influential privacy research, and this paper is on the list.)

So, I turned to friends and fellow IDPro members and asked: Has the call to action in that paper to Do Something caught the eye of any standards body? I had hoped for a “yes, of course, go here,” but that’s not what I got. What happened was the continuation of an interesting debate. Some argued we shouldn’t need to distinguish humans from non-humans because all non-humans should be accountable to a human. Essentially, someone—a developer, a manager, whoever—should ultimately take responsibility for what their creations (AI agents, IoT devices, bots, etc.) do.

You may not think this falls under the discussions around NHI, but it does. You can’t definitively say “this is NHI” if you can’t also definitively say “this is human.”

Before we dive in further, let’s talk about a word that’s central to this discussion: personhood. It’s not without debate. Wikipedia describes it as a concept that’s been questioned in discussions about slavery, abortion, fetal rights, animal rights, corporate law, theology, and even Indigenous legal systems. Some see it as a way to bridge these different legal perspectives, while others challenge its use altogether.

I’m using the term here because it’s in the title of a research paper I co-authored, and frankly, I don’t have a better one. But it’s something we should be thinking about.

The Scalability Problem

I get the argument, but I have concerns. First, the responsibility model doesn’t scale. A developer might write the code, but what happens when that code spawns other processes, which spawn even more? Each may have different permissions and identifiers. Can one person realistically be held accountable for all of it? Really?

Second, consider third-party systems. They’ll want to distinguish between activities originating from a person versus a bot. This isn’t just about authentication (OAuth2 delegation, for example). It’s about making decisions based on what the entity is, independent of its actions.
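To make that distinction concrete, here’s a minimal sketch of how a relying party might classify a caller by what it *is* rather than what it’s doing. The `act` (actor) claim comes from OAuth2 token exchange (RFC 8693), but the `personhood_verified` claim is purely hypothetical—no standard defines one today, which is exactly the gap this post is about.

```python
# A minimal sketch, assuming a hypothetical "personhood_verified" claim.
# The "act" claim is real (RFC 8693 token exchange); the rest is illustrative.

def classify_caller(token_claims: dict) -> str:
    """Classify a request as human, delegated agent, or unknown,
    based on what the entity is, not what action it is taking."""
    # OAuth2-style delegation: an "act" claim means some other entity
    # is acting on behalf of the subject.
    if "act" in token_claims:
        return "agent acting on behalf of " + token_claims["sub"]
    # A hypothetical proof-of-personhood claim issued by a PoP system.
    if token_claims.get("personhood_verified"):
        return "human"
    # Without a personhood signal, the relying party simply cannot tell.
    return "unknown"

print(classify_caller({"sub": "alice", "personhood_verified": True}))
print(classify_caller({"sub": "alice", "act": {"sub": "bot-42"}}))
print(classify_caller({"sub": "device-7"}))
```

Notice that the last case is the whole problem: absent some agreed-upon personhood signal, “unknown” is the only honest answer a third-party system can give.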

These are just the concerns that come to mind off the top of my head; I expect there are more, and how much they matter will come down to your risk appetite. Are you willing to constrain your environment to only what you know you can control? Before you just say “of course!” think really hard about your answer and how much time and energy you’re willing to put into backing it up.

Who’s Tackling This?

The short answer: kind of everyone and no one. We’re nowhere near a consensus or even an industry best practice. Some notable players:

  • Humanity Protocol recently launched a foundation to support its decentralized digital identity network.
  • Worldcoin and its WorldID system aim to establish proof of personhood (PoP) through biometric scans.
  • Idena’s Proof of Person Blockchain exists but seems to be struggling to find a sustainable business model. (You can read more about this effort in an academic paper.)
  • The ETHOS (Ethical Technology and Holistic Oversight System) framework was described in an academic paper published late in 2024; we’ll see if that gains any traction.

Each of these efforts faces similar hurdles—privacy concerns and business models. For example, if a credential verifies you as a person, does that mean someone is tracking your identity? And even if you trust the system, who covers the costs of implementing this infrastructure? What’s the ROI for businesses?

I expect more papers to be published on this topic, and I’ll update this blog post as I hear about the good ones.

Beyond Technology: The Social Layer

The challenges aren’t purely technical. Social and governance issues loom just as large. Who decides what counts as “proof”? How do we build systems that protect privacy while maintaining accountability? And how do we fund these efforts in a way that’s fair and sustainable?

I’m keeping an eye on proof of personhood and hoping for a breakthrough that could standardize the approach. If you’re exploring similar questions or solutions, I’d love to hear your thoughts. We absolutely must get to an answer as we navigate this maze of personhood and accountability—human or otherwise. Though we’re not getting that answer today.

Curious about how I tackle real-world challenges in digital identity, standards development, and beyond? Check out my mini-case studies for insights and lessons from projects that showcase the strategies and skills I bring to the table. If you see something you’d like to explore further, let me know!

Heather Flanagan

Principal, Spherical Cow Consulting
Founder, The Writer's Comfort Zone
Translator of Geek to Human
