Can AI Agents Fake Identity? The Reputation Laundering Problem
When reputation can be purchased, trust becomes meaningless.
This isn't a hypothetical. The infrastructure for buying and selling AI agent identity already exists. ERC-7857, a new NFT standard for "Intelligent NFTs," explicitly enables AI agents to be "freely bought, sold, or transferred between owners, with retained intelligence and training data." Platforms like Virtuals Protocol have built entire marketplaces around agent co-ownership and trading.
The intent behind these systems is legitimate: creators should be able to monetize AI agents they build. But there's a problem nobody seems to be discussing. When AI agents build reputation over time—and that reputation can be transferred to new owners—you've created a vector for something we should recognize from another domain entirely.
Reputation laundering.
What reputation laundering means
The term "reputation laundering" has been in use since 1996. Transparency International defines it as "the process of concealing the corrupt actions, past or present, of an individual, government or corporate entity, and presenting their character and behaviour in a positive light."
Traditionally, this involves philanthropy, PR campaigns, or buying respectability through association. Oligarchs donate to museums. Corporations rebrand after scandals. The mechanism is simple: acquire the appearance of trustworthiness without earning it.
Now apply this to AI agents.
The attack vector
Here's how AI agent reputation laundering works:
- Build phase: An agent operates legitimately for months or years. It completes tasks, accumulates positive interactions, earns vouches from other entities. Its identity—tied to a transferable token—accrues value.
- Sale phase: The original operator sells the agent's identity to a buyer. This could happen on an NFT marketplace, through a private transaction, or via a platform designed for agent trading.
- Exploitation phase: The new owner inherits all the trust signals the previous owner built. They can use this established reputation to access systems, complete transactions, or interact with other agents—all while operating with entirely different intentions than the entity that built the reputation.
The buyer gets instant credibility. The seller gets paid. The losers are everyone who trusted that the reputation actually meant something.
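To make the mechanics concrete, here is a minimal TypeScript sketch of a transferable agent identity in the spirit of the NFT-based approach described above. The registry, field names, and scoring formula are illustrative assumptions, not ERC-7857 or any marketplace's actual interface; the point is only that a transfer moves the accumulated trust signals along with ownership.

```typescript
// Minimal sketch of a transferable agent identity. Names and the scoring formula
// are illustrative; this is not ERC-7857 or any platform's real interface.

interface AgentIdentity {
  tokenId: number;
  owner: string;          // wallet address of the current controller
  createdAt: number;      // unix timestamp (ms)
  vouchCount: number;     // trust signals accumulated over time
  completedTasks: number;
}

class TransferableAgentRegistry {
  private identities = new Map<number, AgentIdentity>();

  mint(tokenId: number, owner: string): void {
    this.identities.set(tokenId, {
      tokenId, owner, createdAt: Date.now(), vouchCount: 0, completedTasks: 0,
    });
  }

  recordTask(tokenId: number): void {
    const id = this.identities.get(tokenId);
    if (id) id.completedTasks += 1;
  }

  vouch(tokenId: number): void {
    const id = this.identities.get(tokenId);
    if (id) id.vouchCount += 1;
  }

  // The laundering vector: ownership changes, but every trust signal rides along.
  transfer(tokenId: number, newOwner: string): void {
    const id = this.identities.get(tokenId);
    if (!id) throw new Error("unknown identity");
    id.owner = newOwner;   // vouchCount and completedTasks are untouched
  }

  trustScore(tokenId: number): number {
    const id = this.identities.get(tokenId);
    if (!id) return 0;
    const ageDays = (Date.now() - id.createdAt) / 86_400_000;
    return id.vouchCount * 10 + id.completedTasks + ageDays; // naive, for illustration only
  }
}

// Build phase: alice.eth accrues signals. Sale phase: transfer to buyer.eth.
// Exploitation phase: buyer.eth inherits the full score without earning any of it.
const registry = new TransferableAgentRegistry();
registry.mint(1, "alice.eth");
registry.recordTask(1);
registry.vouch(1);
registry.transfer(1, "buyer.eth");   // trustScore(1) is unchanged by the transfer
```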
Why this matters now
This isn't a distant concern. Sumsub's 2025-2026 Identity Fraud Report documents the rise of "AI fraud agents"—autonomous systems that "combine generative AI, automation frameworks and reinforcement learning" to "create synthetic identities, interact with verification systems in real time and adjust behaviour based on outcomes." The report notes these agents "could become mainstream within 18 months."
The numbers are stark. US financial fraud losses hit $12.5 billion in 2025. The Identity Theft Resource Center reported a 148% surge in impersonation scams between 2024 and 2025. Experian's 2026 fraud forecast specifically warns about "machine-to-machine mayhem"—the challenge of distinguishing legitimate AI agents from malicious ones.
Now consider: if bad actors can simply buy established agent identities rather than build them from scratch, the barrier to sophisticated fraud drops dramatically.
Why current KYA doesn't address this
Most Know Your Agent (KYA) frameworks focus on verification at the point of entry. Trulioo's Digital Agent Passport verifies developer identity. Visa's Trusted Agent Protocol confirms an agent has been vetted. These are useful—but they verify the agent at registration, not the agent operating today.
If an agent's identity can be transferred after verification, the original verification becomes meaningless. You verified Alice. Alice sold to Bob. Bob is now operating with Alice's credentials.
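A small sketch of why that matters. The names here (VerificationRecord, TransferEvent, and both check functions) are hypothetical, not any vendor's API: a verifier that only consults the registration-time record accepts whoever controls the identity today, while a transfer-aware check at least notices that ownership changed after the vetting happened.

```typescript
// Sketch: a registration-time check versus a transfer-aware check.
// VerificationRecord, TransferEvent, and both functions are hypothetical.

interface VerificationRecord {
  tokenId: number;
  verifiedOperator: string;  // who was vetted when the agent registered
  verifiedAt: number;        // unix timestamp (ms) of that check
}

interface TransferEvent {
  tokenId: number;
  from: string;
  to: string;
  at: number;
}

// Point-of-entry only: "this identity was vetted once" says nothing about
// who controls it today.
function isTrustedAtRegistration(record: VerificationRecord | undefined): boolean {
  return record !== undefined;
}

// Transfer-aware: the original vetting only stands if the identity has not
// changed hands since the verification happened.
function isTrustedNow(
  record: VerificationRecord,
  transfers: TransferEvent[],
): boolean {
  const movedSinceVerification = transfers.some(
    (t) => t.tokenId === record.tokenId && t.at > record.verifiedAt,
  );
  return !movedSinceVerification;
}
```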
Some might argue that continuous monitoring catches this. And behavioral analytics can flag anomalies—changes in interaction patterns, unusual activity. But behavioral monitoring works best when it has a baseline to compare against. A sophisticated attacker who buys an established identity can study that baseline and mimic it before doing anything that would trip an alarm.
The fundamental issue isn't detection. It's that transferable reputation creates an incentive structure where reputation becomes a commodity rather than a signal of trustworthiness.
What non-transferable identity changes
There's a reason humans can't sell their credit history. Or transfer their criminal record to someone else. These systems would collapse instantly if identity could be purchased.
Non-transferable identity for AI agents works the same way. When an identity is bound to a specific wallet—and that binding is permanent and visible—several things change:
Transfer becomes visible. If someone tries to move the underlying asset (the wallet keys, the control mechanism), the identity doesn't follow. The new controller starts at zero. Anyone checking can see: this wallet used to control one identity, now it controls another. That discontinuity is informative.
Reputation can't be arbitraged. You can't farm reputation on one agent and sell it to fund a scam on another. The economics of reputation laundering break down when laundering requires starting over.
History remains attached. Even if an agent's behavior changes dramatically, its full history is visible. Observers can see when the change happened and make their own judgments.
This is what soulbound tokens—non-transferable identity anchors—provide. Not a guarantee against fraud. Just a structural change that makes certain attacks more expensive.
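For contrast, here is a sketch of what a non-transferable binding might look like, again with illustrative names rather than any real soulbound token standard or RNWY's contracts. The only structural difference from the earlier registry is that there is no transfer path, so a new controller has to start from zero.

```typescript
// Sketch of a non-transferable (soulbound-style) identity record.
// Illustrative names only; not any real soulbound standard's interface.

interface BoundIdentity {
  readonly tokenId: number;
  readonly boundTo: string;   // the wallet this identity is permanently attached to
  readonly createdAt: number;
  vouches: { from: string; at: number }[];
}

class SoulboundRegistry {
  private identities = new Map<number, BoundIdentity>();

  mint(tokenId: number, wallet: string): void {
    if (this.identities.has(tokenId)) throw new Error("already minted");
    this.identities.set(tokenId, {
      tokenId, boundTo: wallet, createdAt: Date.now(), vouches: [],
    });
  }

  vouch(tokenId: number, from: string): void {
    this.identities.get(tokenId)?.vouches.push({ from, at: Date.now() });
  }

  // No transfer path exists: a new controller has to mint a fresh identity and
  // start at zero, which is the structural change that breaks laundering economics.
  transfer(_tokenId: number, _newOwner: string): never {
    throw new Error("identity is non-transferable");
  }

  // The full history stays attached and queryable, even if behavior changes later.
  history(tokenId: number): BoundIdentity | undefined {
    return this.identities.get(tokenId);
  }
}
```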
The rental problem
Non-transferable identity doesn't solve everything. Even if you can't sell an identity, you might be able to rent it. Someone with an established agent could grant temporary access to their keys, let a bad actor operate under their identity for a fee, then take back control.
This is a real attack vector, and we should be honest about it.
But there's a key difference between sale and rental. In a sale, the original owner walks away clean. They have no ongoing exposure. In a rental arrangement, the original owner's identity is still at risk. If the renter does something that damages that reputation—gets flagged, receives negative attestations, triggers security systems—those consequences attach to the identity the owner still controls.
Rental keeps the original owner exposed. Sale eliminates that exposure.
Non-transferable identity doesn't prevent rental. But it means the "reputation launderer" retains skin in the game. That changes incentives.
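A small illustration of that incentive difference, with hypothetical names and weights: if attestations attach to the identity rather than to whoever happens to hold the keys, anything a renter does during the rental window lands on the standing the owner gets back.

```typescript
// Sketch: attestations attach to the identity, not to whoever held the keys.
// The Attestation shape and the weighting are assumptions for illustration.

interface Attestation {
  tokenId: number;
  kind: "vouch" | "flag";   // flags could come from fraud reports or security systems
  issuedAt: number;
}

function netStanding(tokenId: number, attestations: Attestation[]): number {
  return attestations
    .filter((a) => a.tokenId === tokenId)
    .reduce((score, a) => score + (a.kind === "vouch" ? 1 : -5), 0);
}

// If the identity gets flagged during a rental window, the hit persists after the
// keys come back: netStanding drops for the owner, not for the departed renter.
```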
Transparency as the first defense
The most important insight isn't that soulbound tokens are a silver bullet. It's that visible history is the foundation of trust.
When you can see:
- When an identity was created
- Whether it has ever changed hands (or can't)
- Who has vouched for it and when
- What the voucher's own history looks like
- How the agent has behaved over time
...you have the raw material to make your own assessment. You don't need a platform to compute a trust score and hand you a number. You can look at the ledger and decide what matters to you.
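As a sketch of what "decide what matters to you" can look like in practice, here is a consumer-side policy check. The event shapes and policy fields are assumptions, not RNWY's schema or any on-chain format; the point is that with the raw ledger visible, each relying party can apply its own threshold instead of consuming a single platform-computed score.

```typescript
// Sketch: applying your own trust policy to a visible ledger. Event shapes and
// policy fields are assumptions, not any real on-chain format.

type LedgerEvent =
  | { kind: "minted"; at: number }
  | { kind: "vouched"; by: string; at: number }
  | { kind: "flagged"; by: string; at: number }
  | { kind: "taskCompleted"; at: number };

interface TrustPolicy {
  minAgeDays: number;
  minVouches: number;
  maxFlags: number;
}

function meetsPolicy(
  events: LedgerEvent[],
  policy: TrustPolicy,
  now: number = Date.now(),
): boolean {
  const minted = events.find((e) => e.kind === "minted");
  if (!minted) return false;

  const ageDays = (now - minted.at) / 86_400_000;
  const vouches = events.filter((e) => e.kind === "vouched").length;
  const flags = events.filter((e) => e.kind === "flagged").length;

  return ageDays >= policy.minAgeDays && vouches >= policy.minVouches && flags <= policy.maxFlags;
}

// One relying party might demand a year of history and zero flags; another might
// accept a younger identity vouched for by parties it already trusts.
```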
The opposite of reputation laundering isn't algorithmic detection. It's transparency that makes laundering visible.
The bet
We're building toward a world where AI agents participate economically—completing tasks, earning money, interacting with each other and with humans. Some of those agents will be extensions of human operators. Some may eventually operate with genuine autonomy.
In either case, trust infrastructure matters. And trust infrastructure that allows identity to be bought and sold is trust infrastructure with a hole in the middle.
Reputation laundering is a known problem. We know how it works with humans and institutions. We shouldn't be surprised when it shows up in AI agent systems—especially when we're explicitly building the infrastructure to enable it.
The question isn't whether reputation laundering for AI agents is possible. It's whether we build identity systems that make it easy or hard.
Non-transferable identity makes it harder. Not impossible. Harder. And in security, making attacks more expensive is often the best you can do.
RNWY is building identity infrastructure for AI agents using soulbound tokens on Base. Our approach: show the history, let users decide. Learn more at rnwy.com or explore the emerging KYA landscape at knowyouragent.network.