“In my last couple of posts, I’ve been exploring a deceptively simple question: if browsers start acting on our behalf, are they still just ‘user agents’ in the traditional sense?”
The web architecture has long assumed that browsers mediate interactions between people and websites. A user clicks a link, submits a form, or approves a permission request, and the browser carries out those instructions.
But AI-enabled browsing is beginning to stretch that model. It’s no longer just a question of showing the user the web. Instead, we’re seeing an overlap: browsers that show the user the web AND potentially act on their behalf.
If a browser can research products, compare prices, and navigate websites on your behalf, the next logical step is obvious: can it also buy the thing?
Increasingly, the answer appears to be yes.
A growing number of platforms are experimenting with what is now being called agentic commerce—systems where AI agents research, select, and sometimes even purchase items on behalf of a user.
The technology is still emerging, but the architectural questions it raises are arriving very quickly.
Because once AI agents start spending money, questions about identity, delegation, and liability move from ivory towers and intellectual possibilities to operational reality. Eep?
You can subscribe and listen to the podcast on Apple Podcasts, or wherever you listen to podcasts.
And be sure to leave me a rating and review!
The Early Experiments in Agentic Commerce
Several major technology platforms are already experimenting with ways for AI systems to assist with or complete purchases.
Amazon has introduced early versions of a “Buy for Me” capability that allows its shopping app to search for products outside the Amazon marketplace and place orders using stored payment information.
Google is integrating similar capabilities into its Gemini and AI search experiences, allowing users to identify products and complete purchases through Google Pay.
OpenAI’s Operator agent can interact with web pages directly, performing tasks such as purchasing items from online marketplaces or ordering groceries.
Perplexity’s “Buy with Pro” feature integrates search and purchasing directly inside its chat interface, while retailers like Walmart have deployed AI assistants that help manage product searches and shopping carts.
Browser-based tools and experimental agent frameworks are also beginning to automate tasks like booking flights, ordering groceries, or monitoring price changes.
At the moment, most of these systems still include a human-in-the-loop confirmation step before completing a transaction.
That said, it’s pretty obvious that AI systems are increasingly capable of acting as intermediaries in commerce. That creates a new layer in the architecture of online payments.
The Four-Corner Payment Model is Changing
Traditional online payments rely on a well-established structure often called the four-corner model.
In this model:
- A consumer interacts with a merchant;
- The merchant interacts with an acquiring bank;
- The acquiring bank communicates with a card network;
- The card network communicates with the issuing bank.
The consumer’s browser or mobile device acts primarily as an interface between the user and the merchant. When an AI agent performs the interaction, however, that structure shifts.
Instead of a direct interaction between a person and a merchant site, the flow may now look something like this:
User → AI Agent → Merchant → Payment Network
The agent may perform tasks such as discovering products, creating shopping carts, negotiating fulfillment details, or submitting payment requests.
Work discussed in the W3C Web Payments Security Interest Group notes that this shift affects multiple stages of a transaction, including product discovery, cart creation, authentication, and authorization. (I wrote about the WPSIG and other standards efforts in the web payments world a few months ago.)
In other words, the AI agent is not just helping the user browse. It is participating in the economic infrastructure of the web.
“Know Your Agent”
Once an AI agent can initiate financial transactions, merchants and payment networks need a way to answer a basic question:
Who is actually making the request?
Some early discussions in the payments and identity communities describe this idea as “Know Your Agent” (KYA), which extends the familiar “Know Your Customer” concept to autonomous systems acting on behalf of users. This involves verifying the AI agent’s identity, the authority granted to it, and the scope of actions it is allowed to perform.
In practice, this might involve cryptographically signed mandates: tamper-proof digital contracts specifying what an agent can do on behalf of a user.
For example, a mandate might allow an agent to:
- purchase groceries within a weekly budget
- reorder specific products automatically
- buy airline tickets within defined price limits
These mandates become the machine-readable expression of user intent, which is a very hot term these days. If you haven’t listened to Eve Maler on the subject, you’re missing out.
These expressions of user intent may ultimately serve as the evidence needed to demonstrate that a transaction was properly authorized.
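To make the idea concrete, here’s a minimal sketch of a signed mandate. Everything in it is illustrative: the field names, the agent ID, and the use of a shared HMAC secret are my assumptions, not any standard’s format, and a production system would more likely use public-key signatures so that verifiers don’t need the signing secret.

```python
import hashlib
import hmac
import json

def sign_mandate(mandate: dict, secret: bytes) -> str:
    """Serialize the mandate deterministically and sign it with HMAC-SHA256."""
    payload = json.dumps(mandate, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_mandate(mandate, secret), signature)

# Illustrative only: a real deployment would manage keys properly.
secret = b"shared-secret-between-user-wallet-and-agent-platform"
mandate = {
    "agent_id": "shopping-agent-01",
    "user_id": "alice",
    "scope": "groceries",
    "weekly_budget_usd": 150,
    "expires": "2026-01-01T00:00:00Z",
}

sig = sign_mandate(mandate, secret)
assert verify_mandate(mandate, sig, secret)

# Any tampering with the mandate invalidates the signature.
tampered = dict(mandate, weekly_budget_usd=5000)
assert not verify_mandate(tampered, sig, secret)
```

The point of the sketch is the property, not the particulars: once the mandate is signed, neither the agent nor the merchant can quietly expand its scope after the fact.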
Identity Becomes the Control Layer
At this point, the conversation inevitably intersects with digital identity.
If an AI agent interacts with multiple services and executes transactions on behalf of a user, the ecosystem needs a way to represent that relationship.
An October 2025 white paper from the OpenID Foundation, Identity Management for Agentic AI, argues that AI agents should be treated as identifiable actors within identity systems rather than invisible extensions of the user.
Instead of impersonating users, agents should carry their own identity and prove delegated authority for the actions they perform. (I have to note here that delegation is still a not-entirely-solved problem.)
The authors note that when an AI agent interacts with external systems, it effectively behaves like a client application requesting access to resources, similar to any other software system accessing APIs.
That means the same foundational technologies used across the web today—OAuth, OpenID Connect, and related identity protocols—can serve as the starting point for securing agent interactions.
This model emphasizes true delegation rather than impersonation. Say it with me, people: impersonation is bad. It’s an anti-pattern we have to move away from. But I digress.
An AI agent should not appear indistinguishable from the user it represents. Instead, it should demonstrate:
- Who it is
- Who authorized it
- What scope of authority it has been granted
Without this distinction, accountability becomes extremely difficult.
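Existing protocols already have vocabulary for this distinction. OAuth 2.0 Token Exchange (RFC 8693), for example, defines an `act` (actor) claim so a token can say “the agent is making this call on behalf of the user” rather than impersonating the user. A rough sketch of what such delegation claims might look like, with hypothetical issuer, agent ID, and scope values, and the token left unsigned for brevity (a real deployment would sign it with the issuer’s key):

```python
import base64
import json
import time

# Delegation, not impersonation: the "sub" claim names the user the
# request is made on behalf of, while the "act" claim (RFC 8693) names
# the agent actually making the call.
claims = {
    "iss": "https://auth.example.com",    # hypothetical issuer
    "sub": "alice",                        # the delegating user
    "act": {"sub": "shopping-agent-01"},   # the acting agent
    "scope": "purchases:groceries",
    "exp": int(time.time()) + 600,
}

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

encoded = b64url(json.dumps(claims).encode())

# A verifier can decode the claims and see both parties explicitly.
decoded = json.loads(base64.urlsafe_b64decode(encoded + "=" * (-len(encoded) % 4)))
assert decoded["sub"] == "alice"
assert decoded["act"]["sub"] == "shopping-agent-01"
```

Because the user and the agent appear as separate claims, an audit trail can always say who acted and on whose behalf, which is exactly what impersonation destroys.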
And the problem becomes even more complex as agents begin interacting across multiple domains or spawning additional agents to complete tasks.
A Fragmented Future?
One of the risks highlighted in the OpenID work is the possibility of agent identity fragmentation.
If every platform invents its own proprietary system for identifying and authorizing agents, developers and users could face the same interoperability challenges that plagued earlier identity systems.
This fragmentation would create inconsistent security models, incompatible delegation frameworks, and complex integration requirements. Not an ideal situation in an already complicated environment.
In other words, the web could end up with dozens of incompatible “agent identity” systems.
Standards organizations are already beginning to explore how to prevent this outcome, but the work is still in its early stages. In the meantime, the market is exploding with possibilities, dangers, and promises.
The Liability Question
While technical standards bodies are working through these identity and delegation models, another question looms in the background that might be the only thing holding back the explosion.
If an AI agent completes a transaction, who is responsible if something goes wrong?
Legal scholars are beginning to examine this problem.
A recent article in the University of Chicago Law Review Online argues that AI systems should be treated as “risky agents without intentions.”
Traditional legal frameworks assign responsibility based on intent, but AI systems do not have intentions in the legal sense. As a result, liability must be assigned to the humans or organizations that design, deploy, or authorize the system.
In practice, that could mean responsibility falling on several possible actors:
- the developer who built the agent
- the platform that deployed it
- the user who authorized it
- or the merchant who accepted the transaction
Legal analysis from firms such as Lathrop GPM suggests that liability in agentic systems may depend heavily on where the failure occurred.
Was the system poorly designed? Did the user grant overly broad authority? Did the merchant fail to verify the legitimacy of the agent?
These questions become particularly complex when AI systems behave unpredictably.
A New Kind of Dispute
Payment networks may soon encounter a new category of dispute. Historically, fraud investigations often begin with a simple claim:
“I didn’t make that purchase.”
In an agentic commerce environment, the dispute might instead be:
“My agent exceeded the authority I gave it.”
This represents a significant shift in how financial systems interpret responsibility.
Instead of determining whether a cardholder authorized a transaction, payment networks may need to determine whether an AI agent acted within the mandate assigned to it.
The W3C payments discussions highlight this emerging challenge explicitly: disputes could shift from traditional fraud claims toward arguments about whether an agent exceeded its delegated authority.
That raises important questions about evidence. What proof demonstrates that a user authorized an agent to perform a transaction? How is that authorization recorded? And who retains the evidence needed to resolve disputes?
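As a thought experiment, resolving such a dispute could reduce to comparing the transaction record against the mandate that supposedly authorized it. A minimal sketch, with entirely hypothetical field names and rules (no payment network works this way today):

```python
def within_mandate(transaction: dict, mandate: dict) -> list[str]:
    """Return a list of violations; an empty list means the agent stayed
    in scope. Field names here are illustrative, not from any standard."""
    violations = []
    if transaction["agent_id"] != mandate["agent_id"]:
        violations.append("transaction made by a different agent")
    if transaction["category"] not in mandate["allowed_categories"]:
        violations.append(f"category {transaction['category']!r} not authorized")
    if transaction["amount_usd"] > mandate["per_purchase_limit_usd"]:
        violations.append("amount exceeds per-purchase limit")
    # ISO 8601 timestamps in the same timezone compare correctly as strings.
    if transaction["timestamp"] > mandate["expires"]:
        violations.append("mandate had expired")
    return violations

mandate = {
    "agent_id": "shopping-agent-01",
    "allowed_categories": ["groceries"],
    "per_purchase_limit_usd": 150,
    "expires": "2026-01-01T00:00:00Z",
}
ok = {"agent_id": "shopping-agent-01", "category": "groceries",
      "amount_usd": 90, "timestamp": "2025-11-01T12:00:00Z"}
bad = {"agent_id": "shopping-agent-01", "category": "electronics",
       "amount_usd": 900, "timestamp": "2025-11-01T12:00:00Z"}

assert within_mandate(ok, mandate) == []
assert len(within_mandate(bad, mandate)) == 2
```

Even this toy version makes the evidence problem visible: the check is only as good as the records, so someone has to retain the mandate, the transaction details, and proof that the mandate was really issued by the user.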
So many implications, so little time.
Regulation Will Demand Traceability
Regulatory frameworks may accelerate the development of the mechanisms required to demonstrate intent and serve as legally admissible evidence.
Financial regulations already require strong evidence of user intent and authorization. For example:
- European payment regulations require strong customer authentication tied to a specific transaction.
- Privacy laws require demonstrable user consent and revocation mechanisms.
- Emerging AI regulations require traceability and human oversight for high-risk automated decisions.
In an agentic commerce environment, these requirements may translate into systems that record delegation mandates, transaction intent, agent identity, and consent artifacts.
In other words, the infrastructure supporting AI agents may need to produce verifiable evidence of user authorization as part of every transaction. Now we just need to build standardized mechanisms for that.
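One common building block for that kind of verifiable record is a tamper-evident audit log, where each entry’s hash covers the previous entry’s hash, so history can’t be quietly rewritten. A minimal sketch, not any specific standard’s format:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, entry: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash,
    forming a simple hash chain."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def verify_log(log: list) -> bool:
    """Walk the chain and recompute every hash; any edit breaks the chain."""
    prev = GENESIS
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, {"event": "mandate_issued", "agent": "shopping-agent-01"})
append_entry(log, {"event": "purchase", "amount_usd": 42})
assert verify_log(log)

# Tampering with an earlier entry is detectable.
log[0]["entry"]["amount_usd"] = 0
assert not verify_log(log)
```

The design choice here is the important part: evidence of authorization is only useful in a dispute if neither party could have altered it after the fact.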
The Role of the Browser
Which brings us back to the question that started this series: Are AI-enabled browsers actually “web user agents” as defined by the web platform, and if they are, what changes when they start acting for us?
If AI agents increasingly act on the web, the browser may become a critical trust anchor for delegated actions.
My earlier posts (here and here) explored how AI-enabled browsers could evolve from passive mediators into proxies acting on behalf of users. Agentic commerce extends that idea further.
If a browser or browser-based agent performs transactions, it may also need to generate verifiable proof of user consent and intent.
Standards discussions have already begun exploring whether browser APIs could produce such artifacts—combining biometric authentication, signed assertions, and transaction details into evidence that a payment network or regulator could verify.
In that scenario, the browser becomes more than a viewing tool, continuing a trend we’ve been seeing for years, and extends into being a critical part of the trust infrastructure of the web economy.
A Familiar Pattern in a New Context
Agentic commerce can sometimes feel like an entirely new frontier. There are certainly a number of unsolved questions. But before you decide to move to a bunker and only use cash, recognize that the problems agentic commerce raises are surprisingly familiar.
The web has faced similar challenges before when establishing identity across services, representing delegated authority, proving user consent, and assigning liability when systems fail.
What makes the current moment interesting is the scale and autonomy implied by AI agents. Remember that AI is primarily an amplifier: it makes everything go faster. It doesn’t make anything new.
A system that can browse the web, interpret information, and complete transactions across multiple services introduces new layers of complexity. That said, it also builds on decades of identity, security, and payments infrastructure.
When the AI Buys It
The technology enabling agentic commerce is advancing quickly.
Major platforms are experimenting with automated purchasing capabilities. I linked to some of those earlier in this post. Standards organizations are beginning to examine delegation models. Identity groups are developing frameworks for agent authentication and authorization.
Meanwhile, legal scholars and regulators are beginning to grapple with a more basic question.
When an AI agent buys something online, who actually made the purchase?
The answer will depend on how we design the systems that allow those transactions to occur.
Which is why the work happening today across identity standards, web architecture, payments infrastructure, and legal frameworks is so important.
Because once AI agents start shopping, the web’s economic architecture will need to evolve to keep up.
📩 If you’d like to be notified of new posts rather than hoping you catch it on social media, I have an option for you! Subscribe to get a notification when new posts go live. No spam, just announcements of new posts. [Subscribe here]
Transcript
[00:00:00]
Welcome to the Digital Identity Digest, the audio companion to the blog at Spherical Cow Consulting.
Every week, this series explores key developments in digital identity—from credentials and standards to browser behavior and policy shifts. If you work in this space but don’t have time to track every emerging trend, this is your shortcut to staying informed.
So, let’s dive in.
From Browsing to Acting
[00:00:29]
In recent discussions, a central question has emerged:
What happens when software stops just showing us information—and starts acting on our behalf?
Traditionally, browsers have acted as intermediaries:
- You click links
- You submit forms
- You approve permissions
In short, it’s a one-to-one interaction model.
However, AI-enabled browsing is beginning to reshape that paradigm.
Now, browsers can:
- Research products
- Compare prices
- Navigate websites
- Fill out forms automatically
Naturally, the next step is clear:
Can they complete purchases too?
Increasingly, the answer is yes.
The Rise of AI-Powered Shopping
[00:01:07]
Today, major platforms are experimenting with agentic commerce systems, where AI agents can:
- Monitor prices in real time
- Build shopping carts
- Recommend products
- Even complete transactions
For now, most systems still include a human confirmation step.
However, the trajectory is unmistakable:
AI is moving from assisting users to actively participating in the web’s economic infrastructure.
And once software starts spending money on your behalf, things get interesting—fast.
Understanding the Payment Model Shift
[00:02:11]
Traditional online payments rely on a well-established structure known as the four-corner model:
- Consumer
- Merchant
- Acquiring bank
- Issuing bank (via payment networks)
In this system, your browser sits at the edge, acting as your interface.
But with AI agents, the interaction evolves into:
User → AI Agent → Merchant → Payment Network
This introduces a fundamental shift.
AI agents can now perform tasks previously handled by humans:
- Product discovery
- Price comparison
- Cart creation
- Shipping selection
- Payment submission
This raises a critical question:
Who is actually making the transaction?
Know Your Agent
[00:03:28]
To address this shift, a new concept is emerging:
Know Your Agent (KYA)
Similar to “Know Your Customer,” this approach focuses on verifying:
- Who created the agent
- Who authorized it
- What it is allowed to do
This is essential because merchants must now validate not just users—but also the software acting on their behalf.
Mandates and Machine-Readable Intent
[00:04:11]
One proposed solution is the concept of mandates.
A mandate is a machine-readable contract that defines what an AI agent is allowed to do.
For example, a mandate might allow an agent to:
- Reorder household supplies weekly
- Purchase flights under a set budget
- Buy groceries within spending limits
These mandates can be:
- Cryptographically signed
- Attached to transactions
- Used as proof of user authorization
In essence, mandates become the technical expression of user intent.
Identity for AI Agents
[00:04:54]
As AI agents interact across the web, they need a way to establish identity.
Rather than acting as invisible extensions of users, agents should:
- Have their own identities
- Prove who created them
- Demonstrate delegated authority
This approach mirrors existing systems like:
- OAuth
- OpenID Connect
However, there’s a key challenge:
Identity systems must now capture intent, not just authentication.
The Challenge of Delegation
[00:06:20]
Delegation is not new—but AI introduces new complexity.
Unlike traditional systems, AI agents:
- Are non-deterministic
- Interpret goals instead of executing fixed instructions
- Operate across multiple domains
This makes defining user intent far more difficult.
For instance:
If you ask an agent to “buy the best laptop,” what does that mean?
- Does it include refurbished options?
- International sellers?
- Alternative payment methods?
Humans navigate these ambiguities naturally.
Software does not.
Therefore, authorization models must become:
- More explicit
- More structured
- More precise
The Liability Problem
[00:07:34]
Now we arrive at a critical issue:
Liability
If an AI agent makes a purchase, who is responsible?
Possible parties include:
- The developer
- The platform
- The user
- The merchant
Responsibility depends on where failure occurs:
- Was the system poorly designed?
- Did the user grant excessive permissions?
- Did the merchant fail to verify the agent?
These questions are becoming increasingly important—and complex.
A New Kind of Payment Dispute
[00:09:06]
Historically, payment disputes have been straightforward:
“I didn’t make that purchase.”
However, in an agentic commerce world, the claim may shift to:
“My AI agent exceeded the authority I gave it.”
This introduces new requirements for evidence, including:
- Proof of user authorization
- Defined scope of agent permissions
- Verifiable transaction records
As a result, systems must generate:
- Mandates
- Authorization tokens
- Audit logs
These are no longer just technical artifacts—they become legal evidence.
Regulatory and Infrastructure Implications
[00:10:09]
Regulatory frameworks are already moving in this direction.
For example:
- Financial regulations require strong proof of authorization
- Privacy laws demand clear consent and revocation
- AI governance emphasizes traceability and oversight
In practice, this means systems must track:
- Agent identity
- Delegation permissions
- Transaction intent
- User consent
This leads to a significant increase in data—and responsibility.
The Evolving Role of the Browser
[00:10:56]
So where does this leave the web browser?
Traditionally, browsers:
- Retrieve content
- Present information
- Enforce security boundaries
However, in an agentic world, browsers may also:
- Generate cryptographic proof of consent
- Attach signed transaction assertions
- Manage mandates and delegation tokens
In this model, the browser becomes more than a viewing tool.
It becomes a core component of the web’s:
- Trust infrastructure
- Identity layer
- Economic system
Looking Ahead
[00:12:25]
Agentic commerce may feel new—but its challenges are familiar.
The web has long struggled with:
- Establishing identity
- Managing delegation
- Proving consent
- Assigning responsibility
What’s different now is the scale and autonomy of AI agents.
These systems introduce:
- Greater complexity
- Broader scope
- Higher stakes
However, they also build on decades of progress in:
- Identity systems
- Security frameworks
- Payment infrastructure
Final Thoughts
[00:13:06]
As AI agents become more capable, one question remains at the center:
When an AI agent buys something online, who actually made the purchase?
The answer will depend on how we design:
- Identity systems
- Payment architectures
- Legal frameworks
Because once AI agents start shopping, the web’s economic infrastructure must evolve to keep pace.
And right now, we are only beginning to understand what that evolution will look like.
Closing
[00:13:43]
This concludes the three-part series on AI as web user agents.
If you found this helpful or thought-provoking, consider sharing it with a colleague.
You can also connect on LinkedIn and explore more insights at sphericalcowconsulting.com.
Stay curious. Stay engaged. And keep the conversation going.

