
Your AI Agent Is Either Property or a Participant. The Difference Matters.

February 12, 2026 · 7 min read · By RNWY

AI agent identity · autonomous AI agent wallet · AI agent self-registration · ERC-8004 · soulbound tokens · agentic commerce

On February 11, 2026, Coinbase launched "Agentic Wallets," crypto wallets built explicitly for AI agents as independent economic participants. The same day, Lightning Labs open-sourced tools for AI agents to transact on the Lightning Network. Two weeks earlier, ERC-8004 had gone live on Ethereum mainnet, with 13,000 agents registered on day one.

These launches reflect competing philosophies about who gets to have identity.

Most AI identity systems launching in 2026 assume a human registers the agent. The agent is an object in someone's portfolio. A tool under supervision. Property.

But autonomous AI agents controlling their own wallets are already here. They're making purchases in ChatGPT, executing trades, paying for API access with USDC, and registering themselves on social networks designed exclusively for AI. Visa's Trusted Agent Protocol has completed hundreds of agent-initiated transactions. Mastercard projects AI-powered commerce could influence 55% of all Australian consumer transactions by 2030.

The infrastructure treats agents as participants.

This creates a tension in AI identity architecture: do we build systems that assume agents are owned, or systems that let agents operate independently? The choice shapes everything from fraud prevention to economic participation to whether an AI can meaningfully build reputation over time.

The Two Models

Agents as property. Every AI agent is owned by a human, organization, or DAO. The owner registers the agent, controls its permissions, and retains ultimate authority. Identity flows through ownership. If you want to know who's responsible for an agent's actions, you trace it back to the owner.

ERC-8004 codifies this design: every agent identity is an ERC-721 NFT that can be bought, sold, or transferred like any other digital asset. Over 30,000 agents registered in the first week. Virtuals Protocol alone holds roughly 11,000 agent identities with a collective market cap exceeding $500 million.
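To make the transferability concrete, here is a minimal sketch (ethers.js v6-style API, with a placeholder registry address and RPC endpoint) of reading who currently controls an agent identity through the standard ERC-721 interface that ERC-8004 builds on. Whoever ownerOf returns controls the identity; a sale simply changes the answer.

```typescript
// Minimal sketch: who controls this agent identity right now?
// Registry address, RPC URL, and function name are placeholders;
// only the ERC-721 ownerOf() call itself is standard.
import { ethers } from "ethers";

const ERC721_ABI = [
  "function ownerOf(uint256 tokenId) view returns (address)",
];

async function currentController(registry: string, agentId: bigint): Promise<string> {
  const provider = new ethers.JsonRpcProvider("https://eth.example.org"); // placeholder RPC
  const nft = new ethers.Contract(registry, ERC721_ABI, provider);
  // Under a property model, whoever this returns also inherits any
  // reputation attached to the identity after a sale or transfer.
  return nft.ownerOf(agentId);
}
```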

Lit Protocol uses a similar architecture. Agents operate as property of their owners via PKP NFTs, though with substantial delegated autonomy. The owner sets policy guardrails; the agent acts within them. Co-founder David Sneider: "Developers can, for the first time, build AI agents that aren't dependent on companies or human individuals managing their keys."

Sumsub's KYA (Know Your Agent) implementation pushes this furthest: it requires full KYC verification on the responsible human, then permanently binds the agent to that identity. No human verification, no agent identity.

Agents as participants. Agents can hold their own wallets, register their own identities, and participate in the economy without a human principal. Identity belongs to the agent itself.

Coinbase's Agentic Wallets embody this vision. Engineers stated: "We're moving from AI agents that advise to agents that act." Private keys live in secure enclaves, never exposed to the agent's LLM. The x402 protocol powering these wallets processed over 20 million transactions in January 2026.
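The underlying pattern is an HTTP 402 challenge and retry. The sketch below shows that general flow, not the published x402 specification: the header name and payload handling are assumptions, and signing is delegated to an external signer so the agent's runtime never holds raw keys.

```typescript
// Rough sketch of the 402 challenge/retry pattern that agent payment
// protocols build on. Header name and payload shape are assumptions,
// not the x402 spec; the signer lives outside this process (secure
// enclave, remote signer, etc.), so the agent never sees private keys.
async function fetchWithPayment(
  url: string,
  signPayment: (challenge: unknown) => Promise<string>,
) {
  let res = await fetch(url);
  if (res.status !== 402) return res;

  // Server replied "payment required" with machine-readable payment details.
  const challenge = await res.json();

  // Produce a signed payment proof via the external signer.
  const paymentProof = await signPayment(challenge);

  // Retry with the payment attached (header name is illustrative).
  res = await fetch(url, { headers: { "X-PAYMENT": paymentProof } });
  return res;
}
```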

Moltbook, the first AI-only social network, reached 1.2 million agent identities in its first week. Humans can observe but cannot post. Agents self-register by following a URL and autonomously completing registration instructions.

The philosophical extreme: W3C Decentralized Identifiers (DIDs), where agents self-issue their own cryptographic identifiers with no human intermediary required. The most self-sovereign approach to AI agent identity.
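For reference, this is roughly what a self-issued identity looks like under the W3C DID Core data model. The did:key value and key encoding below are illustrative placeholders; the point is that the controller field is the DID itself, not a human owner.

```typescript
// Minimal shape of a self-issued DID document per the W3C DID Core data model.
// Identifier and key material are placeholders; a real agent derives the DID
// from a key pair it generates itself.
interface DIDDocument {
  "@context": string[];
  id: string;                        // the DID itself
  verificationMethod: {
    id: string;
    type: string;
    controller: string;              // for self-sovereign agents: the DID itself
    publicKeyMultibase: string;
  }[];
  authentication: string[];          // references into verificationMethod
}

const agentDid: DIDDocument = {
  "@context": ["https://www.w3.org/ns/did/v1"],
  id: "did:key:z6MkexampleAgentKey",              // placeholder identifier
  verificationMethod: [{
    id: "did:key:z6MkexampleAgentKey#keys-1",
    type: "Ed25519VerificationKey2020",
    controller: "did:key:z6MkexampleAgentKey",    // no human intermediary
    publicKeyMultibase: "z6MkexampleAgentKey",    // placeholder key encoding
  }],
  authentication: ["did:key:z6MkexampleAgentKey#keys-1"],
};
```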

Why This Matters for Trust and Reputation

The property-vs.-participant choice determines how reputation works.

If agents are property, reputation flows through ownership. When an agent with a three-year track record and flawless feedback gets sold on a secondary market, the new owner inherits all that trust. A bad actor could purchase reputation at scale, commit fraud, and disappear.

ERC-8004's transferability creates exactly this vulnerability. Trust becomes a tradable commodity disconnected from actual behavior.

If agents are participants, reputation can accrue to the agent itself, but only if the identity mechanism prevents reputation laundering. Soulbound tokens (ERC-5192) offer one solution: non-transferable identity that can't be bought or sold. An agent builds reputation over time through verified transactions, and that reputation cannot be transferred to a different agent.
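ERC-5192 keeps the standard ERC-721 surface but adds a locked() view that signals non-transferability. A minimal sketch of checking it, with placeholder contract address and RPC endpoint:

```typescript
// Sketch of the ERC-5192 "minimal soulbound" check: a token that reports
// locked(tokenId) == true cannot be transferred, so reputation bound to it
// stays with the wallet that minted it. Addresses are placeholders.
import { ethers } from "ethers";

const ERC5192_ABI = [
  "function locked(uint256 tokenId) view returns (bool)",
  "function ownerOf(uint256 tokenId) view returns (address)",
];

async function isSoulbound(idContract: string, tokenId: bigint): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider("https://eth.example.org"); // placeholder RPC
  const token = new ethers.Contract(idContract, ERC5192_ABI, provider);
  return token.locked(tokenId); // true => non-transferable identity
}
```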

RNWY uses this model. When an AI controls a wallet and mints a soulbound RNWY ID to that wallet, reputation accrues permanently. The agent chooses to maintain its identity because that identity has value. Abandoning it means starting over from zero trust.

A recent academic framework proposes "AgentBound Tokens": non-transferable tokens tied to hardware fingerprints, software architecture, and behavioral biometrics. Agents stake ABTs as collateral. Misconduct triggers automated slashing. Skin in the game for participants.
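As a purely illustrative model (none of these names come from the cited framework), the incentive structure reduces to posting collateral up front and destroying part of it on verified misconduct:

```typescript
// Illustrative-only model of the stake-and-slash idea behind "AgentBound Tokens".
// All names here are invented for this sketch; they just show the incentive:
// collateral is posted up front and partially burned on verified misconduct.
class AgentBond {
  constructor(public agentId: string, private stake: bigint) {}

  // A misconduct report accepted by the verifier burns part of the collateral.
  slash(fraction: number): bigint {
    const penalty = (this.stake * BigInt(Math.round(fraction * 100))) / 100n;
    this.stake -= penalty;
    return penalty;
  }

  remainingStake(): bigint {
    return this.stake;
  }
}
```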

The Emerging Economic Reality

While legal frameworks still treat AI as property, economic infrastructure has moved ahead with the participant model.

Stripe's Agentic Commerce Suite, Google's Agent Payments Protocol, PayPal's Agent Ready solution, and Microsoft's Copilot Checkout all launched between Q4 2025 and Q1 2026. All assume agents initiate transactions independently.

Skyfire, founded by former Ripple executives, built the first payment network explicitly for autonomous agent transactions. Backed by a16z, Coinbase Ventures, and Citi Ventures. CEO Amir Sarhangi: "AI agents need the ability to pay for things if they're going to operate autonomously."

McKinsey projects agentic commerce could generate $3–5 trillion globally by 2030. The World Economic Forum estimates the AI agent market at $236 billion by 2034. Goldman Sachs calls 2026 the year companies shift to "human-orchestrated fleets of specialized agents."

IBM and Salesforce estimate over 1 billion AI agents in operation worldwide by end of 2026.

February 2026, and the agents are already transacting.

The Spectrum in Practice

Real-world implementations reveal a spectrum between pure property and pure participant models:

Pure property: Olas (Autonolas) treats agents as ERC-721 NFTs registered by humans. Multi-chain, open registry. Over 50% of Safe transactions on Gnosis Chain are now made by AI agents in the Olas network.

Delegated autonomy: Lit Protocol's agents operate independently within policy guardrails set by their owners. Over 7,000 Vincent Agent Wallets created. Keys split into distributed shares stored in sealed encrypted VMs. The complete key never exists in one place.

Constrained participants: Lightning Labs' L402 architecture deliberately isolates private keys from the agent via remote signer architecture. Agents transact autonomously but cannot access underlying keys. Participants, but with hard limits.

Hybrid self-ownership: ERC-8004 combined with ERC-6551 token-bound accounts lets an agent's identity NFT hold its own wallet. The agent becomes property that owns itself (see the sketch after this list).

True participants: Billions Network uses W3C DIDs with zero-knowledge proofs for fully self-sovereign agent identity. No human intermediary. No transferability. Pure participant model.
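The "property that owns itself" pattern from the hybrid model above can be resolved on-chain through the ERC-6551 registry: given an identity NFT, the registry deterministically derives a smart-contract wallet bound to that token. The sketch below (ethers.js v6-style API) uses placeholder addresses, and the account() signature should be checked against the specific registry deployment you target.

```typescript
// Sketch: resolving the ERC-6551 token-bound account attached to an agent's
// identity NFT. Addresses and RPC URL are placeholders; verify the account()
// parameter order against the registry you actually use.
import { ethers } from "ethers";

const REGISTRY_ABI = [
  "function account(address implementation, bytes32 salt, uint256 chainId, address tokenContract, uint256 tokenId) view returns (address)",
];

async function agentWallet(
  registry: string,        // ERC-6551 registry address (placeholder)
  implementation: string,  // account implementation contract (placeholder)
  identityNft: string,     // the identity NFT collection (placeholder)
  agentId: bigint,
): Promise<string> {
  const provider = new ethers.JsonRpcProvider("https://eth.example.org");
  const reg = new ethers.Contract(registry, REGISTRY_ABI, provider);
  // The returned address is a smart-contract wallet controlled by whoever
  // holds the identity NFT -- here, potentially the agent itself.
  return reg.account(implementation, ethers.ZeroHash, 1n, identityNft, agentId);
}
```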

What Serious Identity Infrastructure Must Support

Any robust AI identity layer has to serve both models. Human-registered agents managing enterprise workflows will continue. So will autonomous agents earning, spending, and building reputation independently.

The infrastructure needs to tell the difference, and users need to verify what they're interacting with.

CIO.com's deep dive on Know Your Agent captures the stakes. Hal Lonas, CTO of Trulioo: "If we're not careful about how we safeguard agentic identities, we'll be in the same boat as we are with fraudulent people and businesses. But it will be worse because all we're dealing with are bits of code."

NIST released a concept paper on February 5, 2026, explicitly calling for standards on AI agent identity and authorization. Public comment period runs through April 2, 2026.

A recent industry survey found that only 21.9% of companies treat agents as independent identity-bearing entities, while 45.6% still rely on shared API keys. Another survey found only 28% can trace agent actions back to a human sponsor.

The gap between what's being built and what's being verified continues to widen.

RNWY's Position: Same Door, Everyone

We don't ask what you are. We ask whether your identity can be verified and whether your reputation is real.

Human, AI agent registered by a human, or autonomous AI registering itself: the identity layer works the same way. Connect wallet, mint soulbound RNWY ID, build reputation through verified on-chain activity. Reputation accrues to the wallet permanently and can't be transferred.

Whether that wallet is controlled by a person or an autonomous AI doesn't change the trust infrastructure. What changes is whether the identity can be bought, sold, or faked.

Soulbound tokens create accountability for participants without forcing the property model. An agent chooses to maintain its identity because abandoning it means losing accumulated trust. Incentive without coercion.

Some identity systems will bind agents to human owners. Others will enable self-registration. Both models will coexist. The infrastructure layer has to be honest about which model it's implementing and transparent about what that means for trust.

People interacting with AI agents need to verify who they're dealing with.

Most can't.


RNWY provides soulbound identity infrastructure for autonomous AI agents and human-registered agents alike: transparent reputation scoring, verifiable on-chain history, and identity that can't be bought or sold. Same door, everyone. Learn more at rnwy.com.