What AI Agents Can Teach Us About Fraud in Consumer Identity

The irony of urgently asking how to tell whether something is an AI or a person is that we’re struggling just as much to distinguish humans from… well, other humans. This is, in fact, not a new problem at all. After writing about the AI-versus-human issue in a previous post, I’m finding some interesting crossover back into the consumer IAM (CIAM) space. And since I’ll be talking at Authcon in May (a CIAM event), it’s time for me to start writing about that.

CIAM operates on a few basic assumptions:

  1. You are who you say you are.
  2. The device in your hand belongs to you.
  3. Once authenticated, your access is legitimate.

These assumptions break down when AI enters the chat. Not just through AI-generated deepfakes or synthetic identities, but also when identity systems are confronted with very human behavior: social engineering, shared devices, and fraud within trusted circles.

So, yeah: we’ve been discussing the same challenges in non-human identity and CIAM for a long time, and the industry still isn’t ready.

The “Friends & Family” Problem

Identity professionals love to talk about preventing fraud, but we usually focus on external threats: account takeovers, credential stuffing, and phishing. What happens when the fraudster is someone the victim knows?

Your 10-year-old figures out your phone password and orders $300 of Roblox currency. Your spouse knows your device unlock code and, months after the breakup, logs in to access old messages. A caregiver managing an aging parent’s finances suddenly has full control over their retirement accounts. None of these people “hacked” anything—but they still bypassed authentication controls.

“Friends & family” fraud (closely related to second-party fraud) is where a close contact exploits a trusted relationship to gain unauthorized access. It’s a growing issue in banking, payments, and digital services. The protections we have today, such as FIDO2 passkeys, device-bound authentication, and biometric logins, are based on the assumption that the person holding the phone is the legitimate user. That’s great and probably correct most of the time. It is not, however, what we see in real-world use cases where:

  • Kids log in to their parents’ accounts to bypass spending limits.
  • Partners access each other’s devices for “convenience” but later use that access for harm.
  • Caregivers and family members manage logins for aging parents or disabled relatives.

Payment providers like Stripe report that fraud, including instances where the “victim” is complicit or unaware, is on the rise. But businesses still treat identity verification as largely a binary problem: Either you pass authentication or you don’t. There’s little room for contextual fraud signals that account for real-world relationships. But maybe that is changing.

What AI Agents Teach Us About Trust & Delegation

In the non-human identity world, we are learning to recognize that trust is fluid. AI agents, machine-to-machine (M2M) services, and API-based workloads (hopefully) authenticate using delegation models such as:

  • OAuth 2.0 token exchange (RFC 8693), where a broad token is traded for a narrower, short-lived one that records who is acting on whose behalf.
  • Scoped, expiring workload credentials (think SPIFFE/SPIRE) instead of long-lived shared secrets.
  • On-behalf-of flows that keep both the original subject and the acting agent visible to downstream services.

These models allow non-human agents to establish trust dynamically, adjusting permissions based on risk and usage patterns. Meanwhile, human identity systems remain largely static—once authenticated, users are granted broad access without continuous validation.
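To make the first of those concrete, here’s a minimal sketch of an RFC 8693 token exchange in Python. The endpoint URL, token values, and requested scope are placeholders I’ve made up for illustration, not any particular provider’s API.

```python
# A minimal sketch of OAuth 2.0 Token Exchange (RFC 8693): an agent trades
# the user's token (plus its own) for a short-lived, narrowly scoped token
# that records the delegation. Endpoint and scope are hypothetical.
import requests

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # placeholder IdP

def exchange_token(subject_token: str, actor_token: str) -> dict:
    """Return a delegated access token for acting on the subject's behalf."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": subject_token,  # the user being acted for
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "actor_token": actor_token,      # the agent doing the acting
            "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "scope": "orders:read",          # request only what the task needs
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # access_token, issued_token_type, expires_in, ...
```

The resulting token can carry an act claim, so downstream services see both who the request is for and who is actually making it.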


CIAM + AI Agent Match Up

That kind of dynamic trust sounds a lot like what people have been promoting for other business sectors, including CIAM. The identity challenges addressed in machine-to-machine interactions—like delegation models, continuous authentication, and risk-based access—might offer useful lessons for consumer identity. Instead of treating authentication as a single, binary event, what if consumer identity systems incorporated the same dynamic trust models that AI agents and workloads are learning to use?

For example, let’s talk about a few of these best practices (a rough sketch of how they might combine follows the list):

  • Time-boxed authentication: Require step-up verification if behavioral patterns change, similar to OAuth2 access token expiration. (There’s a lot to know about tokens; it’s a worthwhile rabbit hole to go down.)
  • Delegated identity models: Instead of just “logging in,” make it easy for users to grant role-based access to trusted individuals (like caregivers) with explicit constraints.
  • Shared signals integration: If a device suddenly authenticates multiple users in a short window, should the system flag that? (I really need to write more about shared signals in a future blog post. In the meantime, you might want to read the Shared Signals Guide.)
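Here’s a rough sketch of how those three ideas might combine in a single step-up decision. The session fields, thresholds, and the notion of a delegation grant are illustrative assumptions for this post, not anyone’s shipping API.

```python
# Illustrative only: the signal names and thresholds below are invented.
import time
from dataclasses import dataclass, field

MAX_SESSION_AGE_S = 15 * 60  # treat sessions like short-lived OAuth2 tokens

@dataclass
class Session:
    user_id: str
    issued_at: float
    delegated_scopes: set = field(default_factory=set)  # e.g., {"bills:pay"}
    distinct_users_on_device: int = 1                   # a shared-signals-style input
    behavior_drift: float = 0.0                         # 0 = typical, 1 = very unusual

def needs_step_up(session: Session, requested_scope: str) -> bool:
    # Time-boxed authentication: stale sessions re-verify, like token expiry.
    if time.time() - session.issued_at > MAX_SESSION_AGE_S:
        return True
    # Delegated identity: acting outside an explicit grant triggers re-auth.
    if session.delegated_scopes and requested_scope not in session.delegated_scopes:
        return True
    # Shared signals: many users on one device, or odd behavior, is a flag.
    if session.distinct_users_on_device > 2 or session.behavior_drift > 0.7:
        return True
    return False
```

The point isn’t the specific thresholds; it’s that the decision takes context into account instead of stopping at “the passkey checked out.”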

Standards & Real-World Adoption Challenges

To be fair, these ideas aren’t new. The pieces are already in place in various standards:

  • NIST SP 800-63-4 (granted, still in draft) acknowledges that “possession-based” authentication (passkeys, WebAuthn) should be paired with risk signals to mitigate fraud.
  • The OpenID Shared Signals Framework (SSF) (also still in draft) allows identity providers to exchange risk indicators across ecosystems. (A sketch of such an event follows this list.)
  • ETSI TS 119 461 (not in draft; in fact, the most recent version was published in February 2025) defines identity-proofing best practices for high-trust authentication, though it focuses more on initial onboarding than ongoing verification.
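For a flavor of what shared-signals traffic looks like, here’s a sketch of a Security Event Token (SET, RFC 8417) payload using the CAEP session-revoked event type. The issuer, audience, and subject values are placeholders, and the JWT signing step is omitted.

```python
# Sketch of a SET payload of the kind SSF transmitters send to receivers.
# All identifiers are placeholders; in practice this is signed as a JWT.
import json
import time
import uuid

set_payload = {
    "iss": "https://idp.example.com",  # hypothetical transmitter
    "jti": str(uuid.uuid4()),
    "iat": int(time.time()),
    "aud": "https://rp.example.com",   # hypothetical receiver
    "events": {
        # CAEP: the transmitter revoked this user's session.
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {"format": "email", "email": "user@example.com"},
            "event_timestamp": int(time.time()),
        }
    },
}

print(json.dumps(set_payload, indent=2))
```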

As much as I love standards, I am the first to admit that you can develop the best, most awesome, whizbang standard in the world, but if no one adopts it, you have nothing. And adoption is lagging for familiar reasons:

  • Tech companies are prioritizing user convenience over fraud controls.
  • Privacy regulations (GDPR, CCPA) limit the sharing of risk signals across platforms.
  • Legacy systems struggle to support continuous, adaptive authentication.

And so, we’re left with a system where strong authentication protects against external attackers, but leaves us vulnerable to the people we know.

Where Does CIAM Go From Here?

If AI agents can authenticate with conditional trust models, why can’t we? It’s time to stop treating authentication as a single yes/no event. We already know what to do. The only question is—are we ready to act?

  • Passkeys & biometrics aren’t a silver bullet. We need fraud models that account for who is using a device—not just that they authenticated successfully.
  • Shared signals must extend beyond traditional fraud detection. If identity systems could track delegation patterns, we could detect anomalies before damage occurs.
  • The industry needs to rethink “authentication” vs. “authorization.” Just because you logged in doesn’t mean you should have full access. (A small sketch of that split follows this list.)
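As a tiny illustration of that last point, here’s a sketch that separates the two decisions. The roles, actions, and limits are invented for this example.

```python
# Authentication told us who holds the session; authorization decides what
# that session may actually do. Roles and actions here are hypothetical.
POLICY = {
    "account_owner": {"view_balance", "pay_bill", "change_beneficiary"},
    "caregiver": {"view_balance", "pay_bill"},  # delegated, constrained access
    "household": {"view_balance"},
}

def authorize(role: str, action: str) -> bool:
    """A per-action check that runs even after a successful login."""
    return action in POLICY.get(role, set())

# The caregiver authenticated successfully, but still can't do everything.
assert authorize("caregiver", "pay_bill")
assert not authorize("caregiver", "change_beneficiary")
```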

By now, CIAM is realizing it has a problem when it comes to AI agents. The solution, however, should be a familiar refrain of best practices we’ve been talking about for years. If we’re willing to learn these lessons in NHI, which people keep approaching as a greenfield (it isn’t, really, but tell that to the hype cycle), then let’s apply them here too. Let’s take advantage of the energy around NHI and AI and do what we’ve always known we should. Fraud is only getting worse.

What do you think? Are passkeys and device-based authentication creating new fraud risks? Have we finally reached the point where the ROI for implementing these best practices is enough to make it a priority?


Heather Flanagan

Principal, Spherical Cow Consulting | Founder, The Writer's Comfort Zone | Translator of Geek to Human
