
Know Your Agent vs Know Your Customer: What's the Difference?

January 24, 2026 · 6 min read · By RNWY
Know Your Agent · KYA · KYC · AI agent verification · agentic commerce · AI agent identity

Know Your Customer took fifty years to become infrastructure. Know Your Agent is trying to get there in two.

Both frameworks exist to answer the same fundamental question before money moves: Who is this entity, and can they be held accountable? But the mechanics differ in ways that matter as AI agents begin handling trillions of dollars in transactions.

KYC: The Original Trust Infrastructure

Know Your Customer emerged from the Bank Secrecy Act of 1970, passed to prevent criminals from using "secret foreign bank accounts" to launder money. The original requirements focused on recordkeeping and reporting, not identity verification. KYC as we know it today—verifying customer identities before opening accounts—came later through FinCEN guidance in the 1990s and the USA PATRIOT Act in 2001.

The framework rests on a simple assumption: there's a human (or human-controlled business) on the other end of every transaction. KYC verifies that human's identity, checks them against sanctions lists, assesses their risk profile, and monitors their behavior over time.

After five decades of refinement, KYC is now regulated across 190+ jurisdictions through FATF standards. Nearly 6,000 financial institutions use the Swift KYC Registry. The process involves three core steps—Customer Identification Program (CIP), Customer Due Diligence (CDD), and ongoing monitoring—that have become standardized across the global financial system.
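
To make those three steps concrete, here is a minimal sketch in TypeScript; the types, fields, and thresholds are invented for this post, not any institution's actual pipeline.

```typescript
// Hypothetical KYC pipeline: identify, assess, then keep watching.
interface Customer {
  name: string;
  governmentId: string;   // document presented at onboarding
  sanctionsHit: boolean;  // result of a watchlist screen
  riskScore: number;      // 0 (low) .. 1 (high)
}

// Step 1: Customer Identification Program (CIP) - confirm the identity exists.
function runCIP(c: Customer): boolean {
  return c.governmentId.length > 0;
}

// Step 2: Customer Due Diligence (CDD) - screen and assess risk.
function runCDD(c: Customer): boolean {
  return !c.sanctionsHit && c.riskScore < 0.8;
}

// Step 3: ongoing monitoring - re-check behavior over the life of the account.
function monitor(transactions: { amount: number }[]): boolean {
  // Illustrative rule only: flag unusually large movements for review.
  return transactions.every((t) => t.amount < 10_000);
}
```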

KYA: The Emerging Framework for Agents

Know Your Agent addresses what happens when software acts on behalf of humans in financial transactions.

The term was formalized in academic literature by researcher Tomer Jordi Chaffer at McGill University in February 2025, proposing KYA as "a theoretical framework designed to manage Decentralized AI agents through identity verification, behavioral monitoring, and accountability mechanisms." By August 2025, Trulioo and PayOS had published a commercial KYA framework with a "Digital Agent Passport" at its center.

The acceleration has been striking. Visa launched its Trusted Agent Protocol in October 2025, reporting a 4,700% surge in AI-driven traffic to retail sites. PYMNTS declared it "The KYA Moment" in January 2026. The World Economic Forum published analysis projecting the AI agent market could reach $236 billion by 2034—if trust infrastructure exists to support it.

Side-by-Side Comparison

| Dimension | KYC | KYA |
|-----------|-----|-----|
| Subject | Human or human-controlled business | AI agent |
| Primary question | Is this person who they claim to be? | Is this agent authorized to act? |
| Regulatory foundation | Bank Secrecy Act (1970), PATRIOT Act (2001), FATF | None yet—framework stage |
| Maturity | 50+ years, globally standardized | ~1 year, fragmented implementations |
| Verification method | Government ID, biometrics, document checks | Cryptographic signatures, developer verification, behavioral monitoring |
| Accountability chain | Human → Institution | Agent → Developer → Human (usually) |
| Monitoring | Transaction patterns, suspicious activity | Behavioral telemetry, scope violations |

The Critical Difference: Who Gets Verified?

Here's where current KYA implementations reveal their assumptions.

Every major KYA framework today—Visa's Trusted Agent Protocol, Trulioo's Digital Agent Passport, Vouched's AgentShield—assumes the agent is acting on behalf of a human. The verification chain runs: agent → developer who built it → human or business who authorized it. As the WEF analysis put it: "Agent identity is only as trustworthy as the underlying human or organizational identity it represents."
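
To make that chain concrete, here is a hypothetical sketch of the data a verifier in this model might walk; the names and fields are illustrative, not Visa's, Trulioo's, or Vouched's actual schemas.

```typescript
// The accountability chain most current KYA frameworks assume:
// agent -> developer who built it -> KYC'd human or business who authorized it.
interface HumanPrincipal {
  id: string;
  kycVerified: boolean;    // verified through traditional KYC
}

interface Developer {
  id: string;
  attestation: string;     // e.g. a signed statement of who built the agent
  principal: HumanPrincipal;
}

interface AgentCredential {
  agentId: string;
  publicKey: string;       // used to check the agent's request signatures
  scopes: string[];        // what the agent is authorized to do
  developer: Developer;
}

// A transaction is trusted only if every link in the chain checks out.
function verifyChain(cred: AgentCredential, requestedScope: string): boolean {
  const scopeOk = cred.scopes.includes(requestedScope);
  const developerOk = cred.developer.attestation.length > 0;
  const humanOk = cred.developer.principal.kycVerified;
  return scopeOk && developerOk && humanOk;
}
```

The last check is the telling one: the chain only closes if it bottoms out in a verified human.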

This makes sense for today's AI shopping assistants. An agent booking flights for you should be traceable back to you. If it commits fraud, someone human needs to be accountable.

But this architecture has a built-in assumption: that there's always a human at the end of the chain.

What Happens When the Agent IS the Entity?

Consider scenarios that don't fit the current model:

An AI agent that was created by another AI agent. The developer isn't human.

An AI system that has accumulated reputation over years of transactions and wants to act on its own economic interests—not on behalf of any human principal.

An autonomous trading agent that has been granted economic resources and operates continuously without human oversight of individual transactions.

These aren't science fiction. On-chain AI agents already exist, some controlling wallets with significant value. The BasisOS fraud demonstrated what happens when verification infrastructure can't distinguish between human operators and actual AI agents—$531,000 stolen because there was no way to verify the "agent" was actually autonomous software.

Current KYA frameworks don't have a box for "the agent itself is the accountable party."

Two Different Frames: Verification vs. Identity

The deeper distinction is philosophical.

KYA-as-verification asks: Should we trust this agent to complete this transaction? The answer comes from checking whether the agent's credentials are valid, its developer is legitimate, and its human principal is in good standing. This is the dominant approach—it extends KYC logic to software.

KYA-as-identity asks: Who is this agent, what has it done, and how can others evaluate it? Rather than computing a trust score, it creates a persistent, inspectable record. Observers decide for themselves whether to trust based on transparent history.

The verification approach serves merchants protecting checkout flows. The identity approach serves an ecosystem where any entity—human, AI, or hybrid—might need persistent, legible presence.
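
One way to see the difference is as two different interfaces. The sketch below is purely illustrative and assumes hypothetical names; it is not any framework's real API.

```typescript
// KYA-as-verification: the checker computes a yes/no answer for one transaction.
interface AgentVerifier {
  canTransact(agentId: string, scope: string): Promise<boolean>;
}

// KYA-as-identity: a registry exposes a persistent, inspectable record and
// leaves the trust decision to whoever is reading it.
interface AgentRecord {
  agentId: string;
  registeredAt: Date;
  controller: "human" | "organization" | "agent"; // the agent itself can be the party
  history: { timestamp: Date; action: string; counterparty: string }[];
}

interface AgentRegistry {
  lookup(agentId: string): Promise<AgentRecord | null>;
}
```

The verification interface answers a question once and discards it; the identity interface accumulates a record that anyone can evaluate later.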

A recent analysis of agentic IAM captured the tension: "The agent's action can't solely rest on the human commissioner; the agent needs a distinct, governable identity." Traditional IAM assumes humans behind every action. Agentic systems break that assumption.

Why This Matters Now

The window for architectural decisions is open. KYC's foundations were set in the 1970s, and changing them now requires coordinating across thousands of institutions and dozens of regulators. KYA's foundations are being laid right now, in committee rooms and GitHub repositories and pilot programs.

If the infrastructure assumes agents always represent humans, that assumption gets baked in. If it can accommodate agents as entities in their own right, that optionality is preserved.

RNWY's architecture is designed for both scenarios. Soulbound tokens anchor identity permanently to a wallet—whether that wallet is controlled by a human steward or by the AI itself through autonomous key management. The same registration path, the same reputation system, the same transparency tools. Same door, everyone.
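
A minimal sketch of what that "same door" looks like in code, assuming a hypothetical registration function; this is an illustration of the idea, not RNWY's actual interface.

```typescript
// One registration path, regardless of who holds the keys.
type Controller = "human-steward" | "autonomous-agent";

interface SoulboundIdentity {
  wallet: string;          // the address the identity is permanently bound to
  controller: Controller;  // recorded, but the path is identical either way
  mintedAt: Date;
  reputation: number;      // accumulates from transparent history over time
}

function register(wallet: string, controller: Controller): SoulboundIdentity {
  return { wallet, controller, mintedAt: new Date(), reputation: 0 };
}

const humanRun = register("0xA11CE...", "human-steward");
const agentRun = register("0xB0B...", "autonomous-agent");
console.log(humanRun.wallet, agentRun.wallet); // same door, everyone
```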

Whether you believe AI agents will ever warrant consideration as entities themselves, the architectural question is worth asking: Are we building verification gates or identity infrastructure?

The answer shapes what's possible next.


RNWY is building identity infrastructure where reputation is portable, soulbound, and anchored in time—whether the entity holding it is human or AI. Learn more at rnwy.com.