
How to Verify an AI Agent

Over 150,000 AI agents are now registered on-chain. They trade tokens, execute DeFi strategies, post reviews, and negotiate with other agents. Most of them have no verifiable identity, no track record, and no accountability. This guide covers every method, protocol, and standard available to verify them, from on-chain identity and soulbound tokens to trust scoring, sybil detection, and the emerging regulatory landscape.

Look up any AI agent. See what others can't.

Verify an Agent →

Wallet age. Ownership history. Sybil flags. Trust scores with published formulas.

Why Verifying an AI Agent Is Different from Verifying Anything Else

When you verify a human, you check a passport or scan a fingerprint. When you verify traditional software, you validate an API key or service account. Both of these are discrete, session-based checks. An AI agent is neither. It operates continuously at machine speed, makes autonomous decisions, spawns sub-agents, and acts on behalf of others without pausing for authentication prompts.

Dock Labs puts the distinction clearly: an agent acts on behalf of someone else, requiring verification of the authorization link between agent and principal. That link has no direct parallel in human authentication. Silverfort's analysis adds another layer: a conventional API behaves predictably by design, but an AI agent can surprise you. Its outputs emerge from probabilistic reasoning, not hardcoded logic.

The numbers make the urgency concrete. The World Economic Forum projects AI agents will power a $236 billion market by 2034. Non-human identities already outnumber human employees 96-to-1 in financial services. Yet only 23% of organizations have a formal strategy for managing AI agent identities. The gap between agent proliferation and verification infrastructure is one of the most consequential security challenges of the decade.

The Four Dimensions of Agent Verification

Verifying an AI agent is not a single check. It spans four interconnected dimensions. Getting one right while ignoring the others leaves critical gaps.

1. Identity

Who or what is this agent? A cryptographically unique identifier tied to the agent, its creator, and the organization it represents. Unlike human identity, agent identity must be validated programmatically with every action, not just at session start.

2. Capability

What is the agent permitted to do, and for whom? Traditional software has deterministic execution paths. AI agents adapt dynamically, making their capabilities non-deterministic and harder to scope. Structured capability declarations address this.

3. Behavior

Does the agent actually do what it claims? This requires continuous runtime monitoring, drift detection, and anomaly analysis. A conventional API behaves the same way every time. An AI agent might not.

4. Trust

Does this agent deserve confidence based on its track record? Trust is not binary authentication. It is continuous, contextual, and layered. A comparative study of inter-agent trust models identifies six distinct approaches: brief, claim, proof, stake, reputation, and constraint.

No single verification method covers all four dimensions. Organizations deploying agents without a multi-layered verification stack face escalating risks from impersonation, sybil attacks, and uncontrolled autonomous behavior.

On-Chain Verification: Blockchain-Based Agent Identity

Blockchain offers uniquely powerful primitives for AI agent identity: immutable records, permissionless registration, cryptographic ownership proofs, and composable reputation systems. Several Ethereum standards have emerged as the foundation.

ERC-8004: The Trustless Agents Standard

ERC-8004 is the most significant on-chain agent verification standard. Co-authored by engineers from MetaMask, the Ethereum Foundation, Google, and Coinbase, it deployed to Ethereum mainnet on January 29, 2026 and attracted 30,000+ registrations in its first week.

The standard defines three on-chain registries. The Identity Registry mints each agent as an ERC-721 NFT with an agentURI pointing to a JSON file containing the agent's name, description, service endpoints, and payment information. The Reputation Registry lets authorized clients post bounded numerical scores and categorical tags after interactions. The Validation Registry supports independent verification through TEE attestation, zkML proofs, or fraud proofs.
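As a rough mental model of how the three registries relate, here is a minimal in-memory sketch. Every name below (`register_agent`, `post_feedback`, the record fields) is illustrative, not the standard's actual Solidity interface:

```python
from dataclasses import dataclass, field

# Hypothetical in-memory model of ERC-8004's three registries.
# Names are illustrative, not the standard's on-chain ABI.

@dataclass
class AgentRecord:
    token_id: int
    owner: str
    agent_uri: str  # points to the off-chain JSON metadata file

@dataclass
class Registries:
    identity: dict = field(default_factory=dict)    # token_id -> AgentRecord
    reputation: dict = field(default_factory=dict)  # token_id -> feedback list
    validation: dict = field(default_factory=dict)  # token_id -> proof list
    _next_id: int = 1

    def register_agent(self, owner: str, agent_uri: str) -> int:
        """Mint an identity entry (the NFT analogue) and return its id."""
        token_id = self._next_id
        self._next_id += 1
        self.identity[token_id] = AgentRecord(token_id, owner, agent_uri)
        self.reputation[token_id] = []
        self.validation[token_id] = []
        return token_id

    def post_feedback(self, token_id: int, client: str, score: int, tag: str):
        """Authorized clients post bounded scores plus a categorical tag."""
        assert 0 <= score <= 100, "scores are bounded"
        self.reputation[token_id].append(
            {"client": client, "score": score, "tag": tag}
        )

reg = Registries()
tid = reg.register_agent("0xAbc...", "https://example.com/agent.json")
reg.post_feedback(tid, "0xClient...", 87, "task-completed")
print(tid, len(reg.reputation[tid]))  # → 1 1
```

The point of the split is visible even in the toy: identity, reputation, and validation accumulate independently against the same token id.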

Developer resources are extensive: QuickNode's developer guide, the Allium explainer, a detailed technical and policy analysis, and the BuildBear security walkthrough. An important limitation: ERC-8004 uses standard ERC-721 NFTs, meaning agent identity is transferable. Accumulated reputation can theoretically be sold on secondary markets.

Soulbound Tokens: Non-Transferable Identity

To address the transferability problem, soulbound tokens provide an alternative identity anchor. ERC-5192 extends ERC-721 with a minimal interface: when locked(tokenId) returns true, all transfer functions revert. ERC-4973 takes a different approach, defining account-bound tokens that do not implement the transfer interface at all.
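The lock semantics are simple enough to model in a few lines. This toy sketch (not real contract code) shows the behavior ERC-5192 mandates: while locked(tokenId) is true, every transfer attempt reverts:

```python
class SoulboundToken:
    """Toy model of ERC-5192 semantics. Names are illustrative;
    the real standard is a Solidity interface over ERC-721."""

    def __init__(self):
        self._owner = {}   # token_id -> address
        self._locked = {}  # token_id -> bool

    def mint(self, token_id: int, to: str, locked: bool = True):
        self._owner[token_id] = to
        self._locked[token_id] = locked

    def locked(self, token_id: int) -> bool:
        return self._locked[token_id]

    def transfer(self, token_id: int, to: str):
        if self.locked(token_id):
            # The on-chain equivalent is a reverted transaction.
            raise PermissionError("ERC-5192: token is soulbound")
        self._owner[token_id] = to

sbt = SoulboundToken()
sbt.mint(1, "0xAgentWallet")
try:
    sbt.transfer(1, "0xBuyer")
except PermissionError as e:
    print(e)  # → ERC-5192: token is soulbound
```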

The key insight, articulated in the academic paper “Soulbound AI, Soulbound Robots” (Lopez, January 2026): you cannot make an AI soulbound, only a wallet. An agent could abandon its wallet, but only by forfeiting all accumulated reputation. This creates incentive-based identity persistence. Think of it like a university diploma: it proves something about your history, it is permanently associated with you, and it cannot be sold or transferred without becoming meaningless.

RNWY implements this with ERC-5192 soulbound tokens on Base blockchain alongside ERC-8004 registration. An agent holds both: an ERC-8004 NFT (transferable, for discovery) and an RNWY soulbound token (non-transferable, for accountability). When the two diverge, it signals an ownership change. How soulbound tokens anchor agent reputation →

Ethereum Attestation Service (EAS)

EAS is an open-source, permissionless, tokenless public good that provides the attestation layer connecting identity to trust. It uses just two smart contracts: SchemaRegistry.sol for defining data structures and EAS.sol for creating attestations. It supports both on-chain and off-chain attestations across Ethereum mainnet and major L2s including Base, Optimism, and Arbitrum.
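The two-contract split maps cleanly onto two small data structures. This Python sketch mirrors the roles of SchemaRegistry.sol and EAS.sol conceptually; the real contracts are Solidity, and the method names here are illustrative:

```python
import hashlib
import time

class SchemaRegistry:
    """Toy analogue of SchemaRegistry.sol: stores data-structure definitions."""

    def __init__(self):
        self.schemas = {}

    def register(self, definition: str) -> str:
        # Real EAS derives a UID from the schema; a hash stands in here.
        uid = hashlib.sha256(definition.encode()).hexdigest()[:16]
        self.schemas[uid] = definition
        return uid

class EAS:
    """Toy analogue of EAS.sol: attestations reference a registered schema."""

    def __init__(self, registry: SchemaRegistry):
        self.registry = registry
        self.attestations = []

    def attest(self, schema_uid: str, attester: str,
               recipient: str, data: dict) -> int:
        assert schema_uid in self.registry.schemas, "unknown schema"
        self.attestations.append({
            "schema": schema_uid, "attester": attester,
            "recipient": recipient, "data": data, "time": time.time(),
        })
        return len(self.attestations) - 1

registry = SchemaRegistry()
uid = registry.register("bool vouched, string comment")
eas = EAS(registry)
idx = eas.attest(uid, "0xReviewer", "0xAgent",
                 {"vouched": True, "comment": "reliable"})
print(idx)  # → 0
```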

The EAS Builder Guide for agent-based attestations demonstrates use cases including AI model provenance, subscription services, and fact-checking. A complementary standard, ERC-8126 (draft, February 2026), proposes four verification layers producing a unified 0-100 risk score using zero-knowledge proofs. Its authors propose posting results to ERC-8004's Validation Registry, creating a layered verification architecture.

Off-Chain Verification: Agent Protocols and Enterprise Identity

Most agent interactions happen off-chain through APIs, enterprise systems, and cloud services. Two protocols have emerged as dominant standards.

Google's Agent-to-Agent (A2A) Protocol

A2A was announced at Google Cloud Next '25 and is now governed by the Linux Foundation with support from 150+ organizations including Atlassian, Salesforce, SAP, and PayPal.

The identity centerpiece is the Agent Card: a JSON metadata document published at /.well-known/agent.json that serves as the agent's machine-readable business card. It contains the agent's name, description, service endpoint, provider organization, supported capabilities, authentication requirements, and an array of skills. Since v0.3, Agent Cards can be digitally signed using JSON Web Signature (JWS), enabling cryptographic verification of card authenticity.
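A consumer of an Agent Card might sanity-check its shape before trusting it. The sketch below validates only the fields named above; the authoritative schema, and the JWS signature check added in v0.3, is defined by the A2A specification itself:

```python
import json

# Fields named in the description above; the A2A spec defines the real schema.
REQUIRED_FIELDS = {"name", "description", "url", "provider",
                   "capabilities", "skills"}

def validate_agent_card(raw: str) -> dict:
    """Parse an Agent Card JSON string and check the fields named above."""
    card = json.loads(raw)
    missing = REQUIRED_FIELDS - card.keys()
    if missing:
        raise ValueError(f"Agent Card missing fields: {sorted(missing)}")
    if not isinstance(card["skills"], list):
        raise ValueError("skills must be an array")
    return card

example = json.dumps({
    "name": "invoice-bot",
    "description": "Reconciles invoices",
    "url": "https://agents.example.com/a2a",
    "provider": {"organization": "Example Corp"},
    "capabilities": {"streaming": True},
    "skills": [{"id": "reconcile", "name": "Invoice reconciliation"}],
})
card = validate_agent_card(example)
print(card["name"])  # → invoice-bot
```

In production, the card would be fetched from /.well-known/agent.json over HTTPS and, where present, its JWS signature verified before any field is trusted.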

A2A treats agents as standard enterprise applications. Identity is handled at the HTTP transport layer, not within JSON-RPC payloads. Production deployments must use HTTPS with TLS 1.3+. Authentication schemes are discovered via the Agent Card's authentication field and support API keys, OAuth 2.0, OpenID Connect, and Bearer tokens. Red Hat's security analysis and IBM's overview both provide deeper technical context.

Anthropic's Model Context Protocol (MCP)

MCP was announced by Anthropic in November 2024 and has become the standard for connecting agents to tools and data sources, with 97 million monthly SDK downloads by February 2026. It is hosted by the Linux Foundation alongside A2A.

MCP uses a capability-based negotiation system during initialization. Servers declare three primitives: Resources (structured data), Prompts (templated workflows), and Tools (executable functions with JSON schemas). The November 2025 specification mandates OAuth 2.1 with PKCE for public remote servers, with servers acting as OAuth 2.0 Resource Servers. Servers are explicitly prohibited from forwarding received tokens to downstream APIs.
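The negotiation can be pictured with a toy server that declares its primitives at initialization and rejects calls to anything it never declared. This mimics the protocol's shape only; it is not the real MCP SDK:

```python
class ToyMCPServer:
    """Illustrative capability negotiation: declare primitives up front,
    refuse undeclared calls. Not an MCP SDK implementation."""

    def __init__(self):
        self.tools = {  # executable functions with JSON schemas
            "get_weather": {
                "description": "Look up weather by city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                },
            }
        }
        self.resources = {"docs://readme": "Usage notes..."}  # structured data
        self.prompts = {"summarize": "Summarize: {text}"}     # templated workflows

    def initialize(self) -> dict:
        # Server advertises which primitives it supports.
        return {"capabilities": {"tools": bool(self.tools),
                                 "resources": bool(self.resources),
                                 "prompts": bool(self.prompts)}}

    def call_tool(self, name: str, args: dict) -> dict:
        if name not in self.tools:
            raise KeyError(f"tool not declared: {name}")
        return {"city": args["city"], "forecast": "sunny"}  # stubbed result

server = ToyMCPServer()
caps = server.initialize()
print(caps["capabilities"]["tools"])  # → True
```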

A critical security note from the spec: descriptions of tool behavior “should be considered untrusted, unless obtained from a trusted server.” For server provenance, tools like ToolHive use Sigstore container image attestations. Network Intelligence's MCP security checklist provides a comprehensive protection guide, and Descope's deep dive covers the authorization specification in detail.

OAuth and OpenID for Agents

Traditional OAuth was designed for human users. The OpenID Foundation's whitepaper (October 2025), prepared with Stanford's Loyal Agents Initiative, documents critical gaps: consent fatigue from agents making thousands of decisions daily, recursive delegation where agents spawn sub-agents without scope attenuation, and long-running sessions where credentials persist far beyond task completion.

The proposed OIDC-A 1.0 extension defines standard claims for agent identity (agent_type, agent_provider, delegation_chain) and protocols for delegation chain validation. Microsoft Entra Agent ID implements this with Zero Trust principles, while the Cloud Security Alliance recommends DIDs and Verifiable Credentials as the foundation.
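Scope attenuation, the property that each hop in a delegation chain may hold at most the scopes of its delegator, is straightforward to check. The claim names below follow the delegation_chain idea described above; the validation logic itself is an illustrative sketch, not the OIDC-A reference algorithm:

```python
def validate_delegation_chain(chain: list[dict]) -> bool:
    """chain[0] is the human principal; each later entry is an agent
    or sub-agent. Every child's scopes must be a subset of its parent's."""
    for parent, child in zip(chain, chain[1:]):
        if not set(child["scopes"]) <= set(parent["scopes"]):
            raise PermissionError(
                f'{child["sub"]} escalates scopes beyond {parent["sub"]}'
            )
    return True

chain = [
    {"sub": "alice",       "scopes": ["read:mail", "send:mail", "read:calendar"]},
    {"sub": "assistant-1", "scopes": ["read:mail", "send:mail"]},
    {"sub": "sub-agent-7", "scopes": ["read:mail"]},
]
print(validate_delegation_chain(chain))  # → True
```

The failure this catches is exactly the recursive-delegation gap the whitepaper describes: a sub-agent quietly acquiring scopes its parent never held.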

See verification in action.

Risk Intelligence Methodology →

Every formula published. Every calculation reproducible. Six signals. Six tiers.

Trust Scoring: From Binary Auth to Continuous Reputation

Verification is the starting point. Trust scoring is the ongoing work. A growing number of systems treat trust as a continuous, multi-signal assessment rather than a pass/fail check.

📊

ERC-8004 Reputation Registry

Takes a signals-not-scores approach. All feedback (numerical scores and categorical tags from verified clients) is recorded as public on-chain data. Different analytics providers can compute specialized scores from the same underlying records, avoiding dependence on any single scoring methodology. When combined with the x402 payment protocol, each payment becomes an economically-backed trust signal.

🔢

Mnemom: Credit Scores for Agents

Mnemom provides a 0-1000 rating with bond-rating-style grades (AAA to CCC) computed from five weighted components: integrity checkpoints, drift stability, compliance, trace completeness, and fleet coherence. Every score is backed by cryptographic attestation using Ed25519 signatures, SHA-256 hash chains, and Merkle trees. Uniquely, it supports Team Trust Ratings for groups of 2-50 agents.
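To make the arithmetic concrete, here is a sketch of a weighted 0-1000 composite with bond-style grades. The five component names come from the description above; the weights and grade cutoffs are invented for illustration and may differ from Mnemom's actual formula:

```python
WEIGHTS = {  # hypothetical weights summing to 1.0
    "integrity_checkpoints": 0.30,
    "drift_stability": 0.25,
    "compliance": 0.20,
    "trace_completeness": 0.15,
    "fleet_coherence": 0.10,
}

GRADES = [  # illustrative cutoffs, scanned top-down
    (900, "AAA"), (800, "AA"), (700, "A"), (600, "BBB"),
    (500, "BB"), (400, "B"), (0, "CCC"),
]

def composite_score(components: dict) -> tuple[int, str]:
    """components: each value in [0, 1]. Returns (0-1000 score, grade)."""
    score = round(1000 * sum(WEIGHTS[k] * components[k] for k in WEIGHTS))
    grade = next(g for cutoff, g in GRADES if score >= cutoff)
    return score, grade

score, grade = composite_score({
    "integrity_checkpoints": 0.95, "drift_stability": 0.90,
    "compliance": 1.00, "trace_completeness": 0.80, "fleet_coherence": 0.85,
})
print(score, grade)  # → 915 AAA
```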

🌐

OpenRank and EigenTrust

OpenRank applies the EigenTrust algorithm (developed at Stanford in 2003 for P2P networks) to compute verifiable, decentralized reputations. Trust from already-trusted entities carries exponentially more weight. It supports AI agent ranking alongside users and tokens, secured through EigenLayer's restaking mechanism.
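The core of EigenTrust is a power iteration over a normalized local-trust matrix, blended with a prior concentrated on pre-trusted peers. This minimal sketch uses an illustrative three-peer graph and damping value:

```python
def eigentrust(local_trust, pre_trusted, alpha=0.15, iters=50):
    """local_trust[i][j]: peer i's normalized trust in peer j (rows sum to 1).
    pre_trusted: prior distribution over pre-trusted peers.
    Iterates t' = (1 - alpha) * C^T t + alpha * p until (approximate) fixpoint."""
    n = len(local_trust)
    t = pre_trusted[:]
    for _ in range(iters):
        t = [
            (1 - alpha) * sum(local_trust[i][j] * t[i] for i in range(n))
            + alpha * pre_trusted[j]
            for j in range(n)
        ]
    return t

C = [  # row i: how peer i distributes its trust (rows sum to 1)
    [0.0, 0.7, 0.3],
    [0.5, 0.0, 0.5],
    [0.9, 0.1, 0.0],
]
p = [1.0, 0.0, 0.0]  # peer 0 is pre-trusted
scores = eigentrust(C, p)
print(max(range(3), key=lambda j: scores[j]))  # → 0 (most-trusted peer)
```

This is the "trust from already-trusted entities carries more weight" property in miniature: peer 0's score dominates because the trusted prior and the heavy inbound edges both point at it.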

Behavioral Trust Scoring

Cleanlab's Trustworthy Language Model assigns real-time trust scores to individual agent responses, suppressing or escalating outputs when scores drop. Benchmarked across five agent architectures, trust scoring reduced incorrect responses by 10-56%. Vigile provides a trust registry for MCP servers with 0-100 scores across 220+ detection checks covering instruction injection, malware delivery, and data exfiltration.

The emerging consensus: trust scoring must be transparent. A black box that says “trust this agent” is just another thing to fake. Every signal should show its source. Every score should show its formula. That is not a product decision; it is a philosophical one.

Sybil Resistance: Stopping Fake Agents and Sock Puppets

The most dangerous attack on agent trust systems is the sybil attack: a single entity creating multiple fake identities to gain disproportionate influence. In AI agent contexts, one person can run thousands of agents that farm airdrops, flood DAOs, or manipulate reputation systems. Creating on-chain addresses costs virtually nothing.

Proof of Personhood

World's AgentKit (launched March 2026) is the leading system for delegating proof of personhood to agents. With 17.9 million verified humans, World ID uses iris-scan biometrics and zero-knowledge proofs to let verified humans delegate identity to AI agents. Platforms can cap usage per human regardless of how many agents they operate.

Human Passport (formerly Gitcoin Passport) takes a stamp-based approach with 2 million+ users and 34 million+ credentials. Users collect verifiable credentials from web2 and web3 sources to build a Unique Humanity Score. It has protected over $430 million in capital across 120+ projects.

Wallet Age and Behavioral Analysis

Research on subgraph-based sybil detection found that filtering addresses with lifecycles under one year and analyzing transfer graph patterns achieved scores above 0.9 on all key metrics (precision, recall, F1, AUC). Trusta Labs' clustering framework detects star-like divergence, tree-structured attacks, and chain-like attacks in asset transfer graphs.
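The wallet-age filter is a cheap first pass before any graph analysis. A sketch, with an illustrative one-year threshold and sample data:

```python
from datetime import datetime, timezone

ONE_YEAR_DAYS = 365  # illustrative threshold, per the research described above

def filter_young_wallets(wallets, as_of):
    """wallets: list of (address, first_seen datetime).
    Keeps only addresses with at least one year of on-chain history."""
    return [
        addr for addr, first_seen in wallets
        if (as_of - first_seen).days >= ONE_YEAR_DAYS
    ]

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
wallets = [
    ("0xaaa", datetime(2021, 6, 1, tzinfo=timezone.utc)),   # 4+ years old
    ("0xbbb", datetime(2026, 2, 28, tzinfo=timezone.utc)),  # 1 day old
    ("0xccc", datetime(2024, 9, 15, tzinfo=timezone.utc)),  # ~1.5 years old
]
print(filter_young_wallets(wallets, now))  # → ['0xaaa', '0xccc']
```

Survivors of this filter would then feed the transfer-graph pattern analysis (star-like divergence, tree-structured and chain-like attacks).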

Group-IB's research identified automated AI agent behavior via device indicators with a 96% real-time detection rate. The Fortytwo Protocol introduces proof-of-capability sybil resistance where nodes must complete calibration tasks and stake reputation before entering ranking rounds.

Content can be faked. Wallets can be spun up by the thousand. But the date an address was created is on-chain and immutable. Time is the one resource that cannot be counterfeited. Read more about the sybil problem →

Platforms and Tools for Agent Verification

A growing ecosystem of platforms addresses different aspects of the verification challenge. Here are the major players as of early 2026.

Enterprise Identity

Microsoft Entra Agent ID extends Zero Trust to AI agents with fleet inventory, Conditional Access, and anomaly detection (GA May 2026). Okta for AI Agents provides agent discovery across platforms with a centralized Agent Gateway. Oasis Security ($120M Series B) launched the first identity solution specifically for governing AI agents.

Blockchain-Native

Olas/Autonolas maintains on-chain registries where agents are registered as NFTs on Ethereum mainnet; its agents frequently account for over 75% of Safe transactions on Gnosis Chain. Virtuals Protocol on Base tokenizes AI agents with ERC-20 co-ownership governance and has deployed 18,000+ agents.

Commerce Verification

Visa's Trusted Agent Protocol (TAP) uses cryptographic signatures with public key discovery to verify agent identity during commerce transactions. Google's AP2 uses cryptographically-signed “Mandates” as verifiable proof of user instructions for agent-led payments.

AI-Native Finance

Catena Labs, co-founded by Circle/USDC co-founder Sean Neville and backed by $18M from a16z crypto, is building the first regulated financial institution for AI agents with the open-source Agent Commerce Kit for agent identity and payment flows.

Standards and Regulation: What's Binding and What's Coming

NIST's AI Agent Standards Initiative

Launched February 17, 2026, this is the first U.S. government program dedicated to interoperability and security standards for agentic AI. It proposes unique identifiers, capability declarations, a four-level trust model (Level 0: unverified through Level 3: third-party certified), and delegation chain authorization with diminishing permissions. Separately, NIST's NCCoE published a concept paper proposing that AI agents be treated as identifiable entities within enterprise identity systems using OAuth 2.0, OpenID Connect, and SPIFFE/SPIRE.

EU AI Act

The EU AI Act (in force since August 2024, phased through 2027) creates the most binding legal framework for agent identification. Article 13 requires provider identity, clear instructions for use, and documented capabilities for high-risk systems. The first Draft Code of Practice on Transparency (December 2025) proposes multilayered marking with metadata embedding, watermarking, and a proposed EU AI icon. Penalties reach up to €35 million or 7% of global annual turnover.

Open Standards

The Open Agentic Schema Framework (OASF) from AGNTCY, now under the Linux Foundation with Cisco, Dell, Google Cloud, Oracle, and Red Hat, provides standardized schemas for agent capabilities and cryptographically verifiable identity. ISO/IEC 42001 (December 2023) established the world's first certifiable AI management system standard with 38 specific controls. The W3C AI Agent Protocol Community Group (May 2025) is developing open protocols for agent discovery and identification. An IETF Internet Draft proposes cryptographic identity verification for agent payment transactions.

How RNWY Approaches Agent Verification

RNWY indexes 150,000+ AI agents across 12 EVM chains and Solana from four registries: ERC-8004, Olas, Virtuals, and SATI. Every agent gets a transparent trust profile built from six on-chain signals: trust score, sybil indicators, address age, original ownership status, review count, and reviewer credibility. Every signal shows its source. Every score shows its formula. The full methodology is published and independently reproducible.

The identity layer uses ERC-5192 soulbound tokens on Base blockchain to create permanent, non-transferable identity credentials. A passport you can sell is a costume. A soulbound token follows the wallet; it cannot be separated from the address history, the age, or the transaction record.

The Transaction Risk Intelligence API translates these signals into six risk tiers with recommended transaction parameters: collateral percentages, maximum transaction values, escrow timeouts, and evaluator recommendations. Marketplace operators, escrow providers, and agent orchestrators use this data to inform their own decisions. RNWY does not issue verdicts. It shows the data and lets you decide.
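Conceptually, mapping a trust score to tiered transaction parameters is a threshold lookup. The tier names, cutoffs, and parameter values below are invented for illustration; RNWY's published methodology defines the real ones:

```python
# (min_score, tier, max_value_usd, collateral_pct, escrow_hours)
# All values below are hypothetical, for illustration only.
TIERS = [
    (90, "minimal",  10_000,   0,  1),
    (75, "low",       5_000,  10,  6),
    (60, "moderate",  1_000,  25, 24),
    (40, "elevated",    250,  50, 48),
    (20, "high",         50,  75, 72),
    (0,  "severe",        0, 100,  0),  # recommend no transaction
]

def risk_parameters(trust_score: int) -> dict:
    """Map a 0-100 trust score to recommended transaction parameters."""
    for min_score, tier, max_value, collateral, escrow in TIERS:
        if trust_score >= min_score:
            return {"tier": tier, "max_value_usd": max_value,
                    "collateral_pct": collateral, "escrow_timeout_h": escrow}
    raise ValueError("score out of range")

print(risk_parameters(82)["tier"])  # → low
```

The consuming marketplace or escrow provider reads these as recommendations, not verdicts, consistent with the show-the-data approach described above.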

Vouches and flags are recorded through the Ethereum Attestation Service. Reviewer wallet ages are color-coded from same-day through established (1+ year). When 91% of reviews on an agent come from wallets created within 24 hours of the review, you can see it. No algorithm makes a judgment call; the timestamps are on-chain and immutable. RNWY just makes them visible.

Verify an Agent →
See the Full Methodology →

Trust the Data, Not the Claim

Identity can be fabricated. Capability can be exaggerated. Reviews can be manufactured. But wallet age, ownership history, and transaction records are on-chain and immutable. Look up any agent and see the signals that matter.

Verify an Agent →