The Trust Layer Nobody Asked For (But Everyone Needs)
How Mastercard's open-source Verifiable Intent protocol creates cryptographic trust for AI agent payments and autonomous commerce
Earlier this month, Mastercard open-sourced something that should have made more noise than it did.
They called it Verifiable Intent. The pitch: a cryptographic proof layer that records exactly what a consumer authorized an AI agent to do, stamps it, and makes it available to every party in the transaction — merchant, bank, consumer. If the agent goes rogue, you have receipts.
Google co-signed. Fiserv, IBM, Checkout.com, and Basis Theory all committed to supporting it.
And most builders didn’t notice, because it sounds like enterprise plumbing.
It’s not. It’s the missing foundation for every agent-to-agent transaction you’re going to build in the next eighteen months.
The Problem Is Older Than You Think
When you tap your credit card, intent is obvious. You’re standing at the counter. You see the price. You make a decision and physically execute it. The entire trust model of modern commerce was built on that visibility.
Agents break it.
When your AI assistant reorders groceries, books a flight within a budget, or finds the cheapest replacement part for your dishwasher, intent becomes invisible at the exact moment it matters most. The agent interpreted your instructions. It evaluated options. It made a choice. And then it spent your money.
What did you actually authorize? “Buy groceries” or “buy these specific groceries at this specific price from this specific store”? The gap between those two is where disputes, fraud, and broken trust live.
Mastercard’s CDO Pablo Fourez put it plainly: “As autonomy increases, trust cannot be implied. It must be proven.”
What Verifiable Intent Actually Does
Strip away the press-release language and the mechanism is straightforward. Verifiable Intent links three things into a single tamper-resistant record:
Your identity — who authorized the transaction
Your instructions — what you told the agent to do, with constraints
The outcome — what actually happened
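The actual spec will define its own formats and key management, but as a rough sketch of what binding those three elements together could look like, here is a toy signed record using Python's standard library. Everything here — field names, the HMAC scheme, the demo key — is illustrative, not the Verifiable Intent wire format:

```python
import hashlib
import hmac
import json

# Illustrative only: a real system would use proper key management,
# not a hardcoded shared secret.
SIGNING_KEY = b"demo-signing-key"

def make_intent_record(user_id: str, instructions: dict, outcome: dict) -> dict:
    """Bind identity, instructions, and outcome into one signed record."""
    payload = {
        "identity": user_id,          # who authorized the transaction
        "instructions": instructions, # what the user told the agent, with constraints
        "outcome": outcome,           # what the agent actually did
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return payload

def verify_intent_record(record: dict) -> bool:
    """Recompute the signature; any tampering with any field breaks it."""
    body = {k: v for k, v in record.items() if k != "signature"}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The point is the binding: change the outcome after the fact and the signature no longer verifies, which is what "tamper-resistant" buys you in a dispute.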
The spec uses Selective Disclosure, meaning each party in the transaction only sees the minimum information they need. The merchant knows the agent is authorized. The bank knows the transaction is legitimate. But neither sees more than necessary.
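One common way to get selective disclosure (used, for instance, in SD-JWT-style credentials — not necessarily what Verifiable Intent does) is to commit to a salted hash of every field, then hand each party only the fields it needs plus the matching salts. A minimal sketch, with hypothetical field names:

```python
import hashlib
import json
import secrets

def _field_hash(salt: str, value) -> str:
    return hashlib.sha256((salt + json.dumps(value)).encode()).hexdigest()

def commit_fields(fields: dict):
    """Commit to every field individually: one salted hash per field,
    plus an overall digest over the full hash set."""
    salts = {k: secrets.token_hex(16) for k in fields}
    hashes = {k: _field_hash(salts[k], v) for k, v in fields.items()}
    digest = hashlib.sha256(json.dumps(hashes, sort_keys=True).encode()).hexdigest()
    return salts, hashes, digest

def verify_disclosure(disclosed: dict, disclosed_salts: dict,
                      hashes: dict, digest: str) -> bool:
    """A party holding only some fields checks them against the commitment."""
    # The full hash set must match the overall digest...
    if hashlib.sha256(json.dumps(hashes, sort_keys=True).encode()).hexdigest() != digest:
        return False
    # ...and each disclosed field must match its committed hash.
    return all(
        _field_hash(disclosed_salts[k], v) == hashes[k]
        for k, v in disclosed.items()
    )
```

The merchant can be handed just the authorization field, the bank just the amount, and both can still verify their slice against the same commitment without seeing the rest.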
It’s protocol-agnostic by design. It works alongside Google’s Agent Payments Protocol (AP2), the Universal Commerce Protocol (UCP), and whatever else emerges. Mastercard isn’t trying to own the stack. They’re trying to be the trust layer underneath everyone else’s stack.
Why Solo Builders Should Care
You might think this is Visa-vs-Mastercard territory, irrelevant to anyone without a payments team. Wrong.
If you’re building anything that agents will eventually interact with — a SaaS, an e-commerce store, a marketplace, a booking tool — the trust question is already yours to answer:
How does your product prove an agent was authorized to act?
Right now, the answer for most products is “it has an API key.” That’s authentication, not authorization. An API key proves the agent can connect. It says nothing about whether the human behind it approved this specific action at this specific price.
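The difference is concrete. An API-key check stops at "is this agent known?"; an authorization check asks "did the human approve this action, at this price, with this merchant?". A minimal sketch of the latter, with invented grant fields:

```python
from dataclasses import dataclass

@dataclass
class AgentGrant:
    """Hypothetical per-user grant attached to an agent's credential."""
    user_id: str
    allowed_actions: set
    max_amount_cents: int
    merchant_allowlist: set

def authorize(grant: AgentGrant, action: str, amount_cents: int, merchant: str) -> bool:
    """Check the specific action against what the human actually approved."""
    return (
        action in grant.allowed_actions
        and amount_cents <= grant.max_amount_cents
        and merchant in grant.merchant_allowlist
    )
```

An API key would wave through all three of a $42 grocery order, a $90 grocery order, and a refund request; a grant like this only passes the first.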
Verifiable Intent creates a standard way to answer that question. And standards that solve real pain tend to become requirements fast. Think PCI compliance, but for agent transactions.
The builders who integrate this early won’t just be “compliant.” They’ll be the ones agents are routed to, because trust infrastructure becomes a competitive advantage when every agent is evaluating vendors programmatically.
The Bigger Signal
Zoom out from the spec itself and look at who’s in the room. Mastercard and Google aren’t building this because agent commerce is a nice research project. They’re building it because they see the transaction volume coming and they know the current trust model won’t survive it.
Here’s the pattern:
2024: Mastercard launches Agent Pay — infrastructure for registering and authenticating AI agents before they transact.
2025: Google ships AP2 and UCP — protocols for how agents discover and interact with merchants.
2026: Verifiable Intent — the proof layer that makes all of it auditable.
Each layer builds on the last. Authentication → interaction → proof. That’s not a roadmap. That’s a stack. And the stack is almost complete.
The missing piece isn’t technology. It’s adoption. Someone has to build the first wave of agent-ready products that actually use these protocols. Someone has to prove the model works at the long tail, not just at enterprise scale.
That someone is probably reading a newsletter like this one.
What to Build
If you’re a solo builder watching this space, three moves matter now:
1. Make your product agent-discoverable. This means structured data, machine-readable product info, and clear action schemas. WebMCP (launching in Chrome 146) is the browser-native version of this. llms.txt is the static version. Both matter.
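For the static version, the llms.txt proposal is a markdown file at your site root pointing agents at machine-readable resources. A hypothetical example (invented domain and paths, following the proposal's general shape):

```
# Acme Parts
> Replacement appliance parts with a machine-readable catalog and order API.

## Products
- [Catalog (JSON)](https://acme.example/catalog.json): full product list with prices and stock

## Actions
- [Order API (OpenAPI)](https://acme.example/api/openapi.json): schema for placing and tracking orders
```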
2. Implement authorization, not just authentication. When an agent calls your API, can you verify what the human behind it actually approved? Start thinking about scoped tokens, intent records, and transaction-level permissions.
3. Build for dispute resolution from day one. Every agent transaction is a potential dispute. The builders who log intent, action, and outcome in a verifiable format will resolve disputes in minutes. Everyone else will resolve them in court.
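"Verifiable format" can be as simple as a hash-chained append-only log, where each entry commits to the one before it, so post-hoc edits are detectable. A toy sketch (the entry fields are illustrative):

```python
import hashlib
import json

class TransactionLog:
    """Append-only, hash-chained log of intent, action, and outcome.
    Each entry's hash covers the previous entry's hash, so tampering
    with any earlier entry breaks every link after it."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, intent: str, action: str, outcome: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"intent": intent, "action": action, "outcome": outcome, "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps({k: body[k] for k in ("intent", "action", "outcome", "prev")},
                       sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self) -> bool:
        """Walk the chain and recompute every hash."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("intent", "action", "outcome", "prev")}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In a dispute, you replay the chain: if it verifies, the logged intent, action, and outcome are what was recorded at transaction time, not reconstructed after the fact.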
The trust layer is being built right now, in public, by people who move slowly and carefully because they’re handling money. That’s the opposite of how most of the AI ecosystem moves.
But that’s exactly why it matters. The fast-moving part of the stack — the agents, the models, the chat interfaces — is sexy. The slow-moving part — the trust, the proof, the accountability — is what makes all of it actually work.
Move slow and prove things.


