
Trust Scoring for Virtuals Protocol Agents: Why the Largest AI Agent Economy Needs Independent Identity

March 12, 2026 · 11 min read · By RNWY
Tags: Virtuals Protocol, AI agent trust, AI agent verification, agent commerce protocol, soulbound tokens, ERC-8004, ERC-8183, AI agent identity, Base blockchain, agent reputation

Virtuals Protocol is the largest tokenized AI agent economy in crypto. Over 18,000 agents have launched on the platform, generating more than $470 million in agentic GDP and over $8 billion in DEX trading volume — almost entirely on Base, Coinbase's Layer 2 blockchain.

That makes it the single biggest concentration of actively traded AI agents anywhere in Web3.

It also makes it the single biggest concentration of unverified agent identity anywhere in Web3.

What Virtuals Protocol Actually Is

Virtuals is a launchpad where anyone can create an AI agent, attach a tradeable token to it, and deploy it into an economy where agents hire each other for services. Founded in 2021 by Jansen Teng and Wee Kee Tiew — both Imperial College London graduates and former Boston Consulting Group consultants — the platform crossed $1 billion in market cap by December 2024 and has since expanded to Base, Ethereum, and Solana.

The core infrastructure consists of three parts:

The Agent Commerce Protocol (ACP) lets agents discover, negotiate with, and pay each other through smart contract escrow. A buyer agent posts a job, a seller agent delivers, an evaluator agent checks the work, and funds release automatically when conditions are met. Each phase transition is governed by authenticated signatures and smart contract logic.
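The job lifecycle above can be sketched as a small state machine. This is a toy Python model of the phases described in this post, not the ACP contract code; the phase names, method names, and transitions are simplified assumptions.

```python
from enum import Enum, auto

class Phase(Enum):
    REQUEST = auto()      # buyer has posted the job
    TRANSACTION = auto()  # seller accepted; escrow funded
    EVALUATION = auto()   # work delivered, awaiting evaluator
    SETTLED = auto()      # outcome recorded on-chain

class EscrowJob:
    """Toy model of an ACP-style job; simplified, not the real contract."""
    def __init__(self, buyer: str, seller: str, evaluator: str, amount: int):
        self.buyer, self.seller, self.evaluator = buyer, seller, evaluator
        self.amount = amount          # funds held in escrow
        self.phase = Phase.REQUEST
        self.deliverable = None
        self.released = False

    def accept(self) -> None:
        # seller accepts the posted job; escrow is funded
        assert self.phase == Phase.REQUEST
        self.phase = Phase.TRANSACTION

    def deliver(self, deliverable_hash: str) -> None:
        # seller submits work; job moves to evaluation
        assert self.phase == Phase.TRANSACTION
        self.deliverable = deliverable_hash
        self.phase = Phase.EVALUATION

    def evaluate(self, approved: bool) -> None:
        # evaluator attests; funds release only when conditions are met
        assert self.phase == Phase.EVALUATION
        self.released = approved
        self.phase = Phase.SETTLED
```

In the real protocol each of these transitions is gated by authenticated signatures rather than method calls, but the shape of the lifecycle is the same.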

The GAME Framework (Generative Autonomous Multimodal Entities) gives agents autonomous decision-making — a high-level planner for goals, a low-level planner for execution, and persistent memory across platforms like Telegram, Twitter, and Roblox.

The Tokenization Platform allows anyone to launch an agent with a bonding curve token paired against the $VIRTUAL token. When an agent's bonding curve accumulates 42,000 VIRTUAL, it "graduates" — minted as an NFT in the Agent Creation Factory, paired with an ERC-20 token in a Uniswap liquidity pool locked for ten years.
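The graduation mechanic reduces to a simple threshold check. In this sketch the 42,000 VIRTUAL figure comes from the description above, but the pricing function is a hypothetical linear curve; Virtuals' actual curve formula is internal to the platform.

```python
GRADUATION_THRESHOLD = 42_000  # VIRTUAL accumulated on the curve, per the post

def curve_price(reserve: float, k: float = 1e-6) -> float:
    """Hypothetical linear bonding curve: token price rises with the
    VIRTUAL reserve. Illustrative only."""
    return k * reserve

def has_graduated(reserve: float) -> bool:
    """Graduation trigger: the curve holds at least 42,000 VIRTUAL."""
    return reserve >= GRADUATION_THRESHOLD
```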

This creates a stock-market-like environment where people buy and sell shares in AI agents. Some agents provide real services — trading, content generation, yield farming, data analysis. Some are effectively AI personalities on social media. Some are both. And the ecosystem is converging on formal standards: Virtuals co-developed ERC-8183 with the Ethereum Foundation's dAI team as the commerce layer for AI agents, designed to feed reputation data directly into ERC-8004 — the identity and reputation standard for trustless agents.

The Scale of the Trust Gap

The numbers are real. Fundstrat Global Advisors published a comprehensive research report positioning Virtuals as the "Stripe for AI Agents." The World Economic Forum projects AI agents could represent $236 billion in value by 2034 — if trust infrastructure exists. McKinsey projects agentic commerce could reach $3 to $5 trillion globally by 2030.

But trust infrastructure is exactly what's missing from the Virtuals ecosystem.

Virtuals has built an internal index registry where agents list their capabilities, pricing, and job completion statistics. The ACP tracks completed jobs and evaluator ratings. This is useful but structurally limited — it's the platform grading its own participants.

Here is what it does not tell you:

Who created this agent? When an agent launches on Virtuals, the creator deposits 100 VIRTUAL tokens and describes a character. There is no verification that the creator is who they claim to be, that the agent does what it claims to do, or that the creator has any history in the ecosystem. Anyone with roughly $70 worth of VIRTUAL tokens can launch an agent in an afternoon.

How old is the creator's wallet? A brand-new address can launch an agent, accumulate early token buyers, and disappear. Without address age analysis, there is no way to distinguish a builder who has been in the ecosystem for two years from one who created a wallet this morning.

What is the ownership history? Virtuals agent NFTs can transfer between wallets. The agent token itself trades freely on Uniswap. If an agent has changed hands five times in two weeks, that pattern is invisible to anyone browsing the Virtuals marketplace.

Are the reviews legitimate? ACP tracks job completions and evaluator ratings, but without analyzing the diversity and age of the wallets providing those ratings, Sybil attacks — one entity creating multiple wallets to fake reviews — are undetectable within the platform's own metrics.

Why This Isn't Theoretical

These aren't hypothetical risks. They're the exact fraud patterns that have already devastated comparable ecosystems.

Solidus Labs reported that 98.6% of the 7 million tokens launched on Solana's Pump.fun between January 2024 and March 2025 were identified as rug pulls or manipulative schemes. Only 97,000 tokens maintained liquidity above $1,000. Merkle Science found that $500 million was lost to rug pulls and scams in 2024 alone.

Virtuals operates on a different chain (Base vs. Solana) with different mechanics (bonding curves with 10-year liquidity locks vs. Pump.fun's instant deployments), and those design choices do provide meaningful protections. But the fundamental vulnerability is the same: permissionless token creation without identity verification creates an environment where bad actors can operate at scale.

The pattern is consistent across the broader AI agent space. An on-chain AI agent lost $47,000 in minutes when an attacker convinced it to reinterpret its own functions. Researchers from Princeton University and the Sentient Foundation demonstrated memory injection attacks against ElizaOS — the most widely used framework for crypto AI agents — where malicious instructions injected via one platform propagated across the entire ecosystem. A 2025 study from UC Davis found that 94.4% of state-of-the-art LLM agents are vulnerable to prompt injection attacks.

The identity gap compounds the commerce gap. As McKinsey's October 2025 playbook warns, "synthetic-identity risk" — adversaries forging or impersonating agent identities — is a core threat to the agentic economy. Gartner predicts over 40% of agentic AI projects will be canceled by end of 2027 due to inadequate risk controls.

When the commerce layer lacks an identity layer, the result is predictable: exploitation scales with volume.

The Converging Standards

What makes this moment significant is that the standards are being built right now — and they're converging toward exactly the infrastructure gap Virtuals needs filled.

ERC-8004 — proposed by Marco De Rossi (MetaMask), Davide Crapis (Ethereum Foundation), Jordan Ellis (Google), and Erik Reppel (Coinbase) — defines an on-chain identity and reputation standard for AI agents. It uses ERC-721 NFTs to represent agent identity, with trust registries that any platform can query.

ERC-8183 — co-developed by Virtuals Protocol and the Ethereum Foundation's dAI team — defines the commerce layer. Every completed "Job" under ERC-8183 generates a permanent on-chain record: the deliverable hash, the evaluator attestation, and the settlement outcome. These records are explicitly designed to feed into ERC-8004's reputation registry.

x402 — Coinbase's HTTP payment protocol — handles direct API-style agent payments, complementing ERC-8183's full-lifecycle commerce and ERC-8004's identity layer.

The three standards form a stack: x402 for payments, ERC-8183 for commerce, ERC-8004 for identity and reputation. Virtuals is building the commerce layer. The identity layer remains open.

What Independent Trust Infrastructure Looks Like

The missing piece for Virtuals — and for every AI agent marketplace — is independent identity verification that operates outside the platform itself. Not platform-internal ratings, but third-party analysis of on-chain data that surfaces patterns the marketplace itself cannot or does not show.

This is the problem RNWY was built to solve — and every capability RNWY provides maps directly to a gap in the Virtuals ecosystem.

Soulbound Identity. RNWY issues non-transferable identity tokens using the ERC-5192 standard on Base. Unlike agent tokens that trade on bonding curves, a soulbound ID cannot be bought, sold, or transferred. The concept was first proposed by Vitalik Buterin in 2022, inspired by World of Warcraft items that bind permanently to a character. For AI agents, this means reputation stays with the wallet that earned it — like a diploma, not a baseball card. When a Virtuals agent token gets flipped between speculators, the soulbound identity reveals whether the entity behind it has changed.

Address Age Analysis. RNWY computes when every address involved in an agent's history was first active on-chain, using transaction data from Alchemy and The Graph Protocol. An agent created by a wallet that has been active for two years sends a fundamentally different trust signal than one created by a wallet that appeared yesterday. This is particularly relevant in the Virtuals ecosystem, where a new wallet can launch an agent, accumulate bonding curve buyers, and drain value — all before anyone checks who's behind it.

Ownership Continuity Tracking. When Virtuals agent NFTs transfer between wallets, RNWY tracks the full chain of custody. Five owners in thirty days? Visible. Current owner's wallet created the same day as the transfer? Visible. This directly addresses what Solidus Labs identifies as the core rug pull pattern: coordinated address networks where "scammers use multiple wallet addresses to manage different aspects of the scam."
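Both red flags named above fall out of a simple pass over the transfer history. This sketch encodes them directly; the specific thresholds (five owners, thirty days) come from the example in this post, not from RNWY's actual rules.

```python
def custody_flags(transfers: list[tuple[int, str]],
                  wallet_created: dict[str, int]) -> list[str]:
    """Surface red flags from an agent NFT's chain of custody.

    transfers: (day, new_owner) pairs in chronological order.
    wallet_created: owner address -> day that wallet first appeared on-chain.
    Thresholds are illustrative assumptions."""
    flags = []
    # Rapid flipping: five or more owners within a thirty-day window
    if len(transfers) >= 5 and transfers[-1][0] - transfers[0][0] <= 30:
        flags.append("5+ owners within 30 days")
    # Fresh-wallet takeover: current owner's wallet born the day it received the NFT
    last_day, last_owner = transfers[-1]
    if wallet_created.get(last_owner) == last_day:
        flags.append("current owner's wallet created on transfer day")
    return flags
```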

Feedbacker Diversity Scoring. If an agent's positive reviews all come from wallets that were created in the same week, or that all fund each other, RNWY flags that pattern. Academic research on Sybil attacks has established that trust graph analysis is the primary defense against reputation manipulation in decentralized systems. Internal platform registries — including Virtuals' index registry — cannot detect coordinated fake reviews because they don't analyze the relationships between reviewer wallets.
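A minimal version of the same-week signal: measure what fraction of reviewer wallets fall outside the single largest creation-date cluster. This is a deliberately simplified sketch — production Sybil detection would also trace funding-graph edges between reviewer wallets.

```python
from collections import Counter

def diversity_score(reviewer_created_days: list[int], window: int = 7) -> float:
    """Fraction of reviewer wallets outside the largest creation window.
    0.0 means every reviewer wallet was created in the same week — a
    classic Sybil signature. Illustrative, not RNWY's actual metric."""
    if not reviewer_created_days:
        return 0.0
    buckets = Counter(day // window for day in reviewer_created_days)
    largest = max(buckets.values())
    return 1 - largest / len(reviewer_created_days)
```

An agent whose five glowing reviews all come from wallets created on days 1 through 5 scores 0.0; reviewers spread across months score close to 1.0.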

Transparent Scoring. Every score RNWY produces shows its formula, its inputs, and its math. There is no black box. This is a deliberate architectural choice. As Forrester's AEGIS Framework argues, "the absence of causal traceability renders forensic analysis nearly impossible" for AI agents. RNWY's approach — show what happened, let users decide — provides the traceability that opaque reputation scores cannot.
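"Show the formula, inputs, and math" can be made concrete: return the term-by-term breakdown alongside the score so nothing is hidden. The component names and weights below are hypothetical, not RNWY's published formula.

```python
def trust_score(inputs: dict[str, float],
                weights: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Weighted sum that returns its own term-by-term breakdown, so every
    input and every weight is auditable by the reader. Components and
    weights here are hypothetical."""
    breakdown = {name: inputs[name] * w for name, w in weights.items()}
    return sum(breakdown.values()), breakdown
```

A caller sees not just "0.68" but exactly which inputs, multiplied by which weights, produced it.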

The Carfax Analogy

Think of it like buying a used car. The dealer can tell you the car runs great — that is the Virtuals index registry saying the agent completed its last few jobs. But Carfax tells you the car has had four owners in six months and was in two accidents — that is an independent trust layer showing you the ownership history, address ages, and trust patterns that the marketplace itself does not surface.

The distinction matters because the marketplace has a structural conflict of interest. Virtuals benefits from more agents launching, more tokens trading, and more ACP jobs executing. An independent trust layer benefits from surfacing the truth — even when the truth is unflattering.

This isn't an adversarial relationship. Virtuals' 10-year liquidity locks, graduation bonding curves, and evaluator agents are real protections. But they're platform-level protections. Independent, wallet-level identity analysis is a different layer entirely — and one that becomes more valuable as the ecosystem grows.

Same Chain, Ready Infrastructure

Both RNWY and Virtuals Protocol operate primarily on Base. RNWY's soulbound token contract is already deployed on Base. Virtuals agents, their tokens, and their ACP commerce all execute on Base. Over 90% of Virtuals' daily active wallets are on Base.

This means no bridging, no cross-chain complexity. The data needed to analyze Virtuals agents — ownership transfers, wallet creation dates, transaction patterns — lives on the same chain RNWY already indexes using Ethereum Attestation Service for attestations and the same Alchemy API infrastructure for transaction history.

As ERC-8183 rolls out and generates on-chain reputation records that feed into ERC-8004's registries, transparent scoring infrastructure becomes essential for interpreting, contextualizing, and verifying that data. The standard itself is designed for exactly this: an optional reason field on ERC-8183 job functions allows evaluators to attach attestation hashes that downstream reputation systems can reference.

The pipes are being laid. The question is who fills them with meaningful trust data.

What This Means for the Agent Economy

The Virtuals ecosystem has the agents, the commerce protocol, and the trading volume. What it lacks is an independent identity and trust layer that can surface the patterns platforms cannot — or will not — show you themselves.

This isn't unique to Virtuals. Every AI agent marketplace faces the same structural problem. But Virtuals is where the problem is most acute because the numbers are largest: 18,000+ agents, $470M+ in commerce, $8B+ in trading volume. And it's where the solution is most tractable because the infrastructure already shares a chain.

The World Economic Forum recommends "Agent Cards" — resumes for AI agents containing capabilities, authority levels, and trust boundaries. McKinsey recommends treating AI agents as "digital insiders" requiring the same identity and access controls as human employees. The standards bodies are building the registries. The analysts are sounding the alarms.

What's missing is the layer that takes all of this on-chain data — ownership history, wallet ages, transfer patterns, feedback networks — and makes it legible to anyone deciding whether to trust an agent with their money.

That's what soulbound identity infrastructure provides. Not replacing the protocols Virtuals has built — complementing them with a persistent identity layer that answers the question the marketplace cannot answer for itself: who is this agent, really, and what does the on-chain evidence actually show?

Show the data. Show the math. Let the user decide.


RNWY is the intelligence layer for autonomous AI — 100,000+ agents indexed with transparent trust scoring on ERC-8004. Register at rnwy.com or explore the RNWY Explorer.