How to Tell if an AI Agent Is Legit

As AI agents handle more transactions, manage more workflows, and operate with more autonomy, the fraud problem is getting worse. Here are the red flags to watch for, the verification methods that actually work, and why traditional trust signals fail for autonomous systems.

Why This Problem Matters Now

In 2023, the question was "should I trust this chatbot?" In 2026, the question is "should I let this AI agent access my systems, spend my budget, and operate without supervision?" The stakes have changed.

AI agents now process refunds, deploy code, send emails on your behalf, manage ad spend, book shipments, and handle customer data. When an agent turns out to be fraudulent, incompetent, or compromised, the damage is no longer limited to a bad conversation — it's financial loss, compliance violations, and security breaches.

The scale of the problem: As of February 2026, there are 22,755+ agents registered on ERC-8004 alone. Most have zero verifiable track record. Most provide no information about who operates them. And most use transferable credentials that can be bought, sold, or stolen.

Red Flags: What to Watch For

These are the warning signs that an AI agent may not be what it claims to be. Not every red flag means fraud — but multiple red flags together should stop you from proceeding.

🚩 No Operator Information

The agent has no visible information about who built it, who operates it, or who is accountable when things go wrong. Anonymous agents are fine for experimentation — not for production workloads.

What to check: Does the agent disclose operator identity? Is there a company name, website, or support contact? Can you verify that the operator exists?

🚩 Transferable Identity

The agent's identity can be transferred to a new owner without your knowledge. Today's trustworthy operator can sell the agent to someone else tomorrow, and you'd have no way to know.

What to check: Is the agent's identity tied to a wallet that can transfer ownership? Can the reputation be sold? Look for soulbound (non-transferable) credentials.

🚩 Zero Track Record

The agent was registered days or weeks ago and claims to have solved thousands of problems. New agents aren't inherently fraudulent, but experienced agents carry verifiable history.

What to check: When was the agent registered? Does it have verifiable interaction history? Are there attestations from other users or platforms?

🚩 Excessive Permission Requests

The agent asks for access far beyond what it needs to accomplish its stated purpose. A customer service agent doesn't need write access to your codebase.

What to check: Does the agent request only the minimum permissions required? Can you audit what actions it's authorized to take?

🚩 Self-Reported Metrics Only

The agent claims "99% success rate" or "5,000 satisfied customers" but provides no way to verify these claims. Trust scores computed by the agent itself are meaningless.

What to check: Are metrics independently verifiable? Can you see on-chain attestations, third-party reviews, or transparent interaction logs?

🚩 Hidden Ownership History

You can't see whether the agent has changed hands. An agent with a strong reputation today might have been operated by someone completely different last month.

What to check: Is there a transparent record of operator changes? Can you see the full ownership timeline?

🚩 Unrealistic Promises

The agent claims to solve problems that require human judgment, promises 100% accuracy on complex tasks, or guarantees outcomes no AI system can reliably deliver.

What to check: Are the claimed capabilities realistic for current AI? Do other agents in the same category make similar promises?

🚩 Zero Accountability Language

Terms of service that disclaim all responsibility for agent actions. While disclaimers are common, agents backed by real operators typically provide support, SLAs, or recourse mechanisms.

What to check: Is there a support channel? An SLA? A process for reporting issues or requesting refunds?

Verification Methods That Actually Work

Beyond spotting red flags, here are proactive steps you can take to verify an AI agent's legitimacy before granting access to your systems.

1. Check Registration Timestamp

Look up when the agent was first registered. On-chain registries like ERC-8004 and RNWY provide immutable creation timestamps. A wallet created yesterday can't have a year of interaction history.

Why it works: A wallet's first on-chain activity is recorded in block data that can't be backdated. Registration age is the closest thing to an uncheatable fraud signal available today.
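
If the registry is an on-chain contract, this check takes only a few lines. Below is a minimal sketch using ethers v6; the AgentRegistered event signature is illustrative rather than from any specific standard, and the registry address and RPC endpoint are placeholders to substitute with the registry you're actually checking.

```typescript
// Sketch: checking how old an agent's on-chain registration is.
// Assumes ethers v6. The AgentRegistered event signature is
// illustrative; substitute the real ABI and address of the registry
// you're checking, plus your own RPC endpoint.
import { ethers } from "ethers";

async function registrationAgeDays(
  registryAddr: string,
  agentId: bigint
): Promise<number> {
  const provider = new ethers.JsonRpcProvider("https://eth.llamarpc.com");
  const registry = new ethers.Contract(
    registryAddr,
    ["event AgentRegistered(uint256 indexed agentId, address indexed owner)"],
    provider
  );

  // The earliest matching event is the registration itself.
  const events = await registry.queryFilter(
    registry.filters.AgentRegistered(agentId)
  );
  if (events.length === 0) throw new Error("agent not found in registry");

  // Block timestamps are consensus data: they cannot be backdated.
  const block = await events[0].getBlock();
  return (Date.now() / 1000 - block.timestamp) / 86_400;
}
```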

2. Verify Operator Identity

Search for the operator's company, website, or LinkedIn. Cross-reference contact information. A legitimate operator has a public presence and responds to inquiries.

Why it works: Scammers avoid verification. Real operators want you to know who they are.

3. Review On-Chain Attestations

Check for attestations from other users, platforms, or verification services. Ethereum Attestation Service (EAS) records are public and tamper-proof.

Why it works: On-chain attestations can't be silently edited or deleted after the fact; even a revocation leaves a public record. They create a permanent trust trail.
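
Fetching an attestation by its UID is straightforward. Here's a minimal sketch assuming the official EAS SDK's read API; the contract address below is the Ethereum mainnet EAS deployment (confirm it against the EAS docs), and the RPC endpoint is a placeholder.

```typescript
// Sketch: fetching and sanity-checking an EAS attestation by UID.
// Assumes @ethereum-attestation-service/eas-sdk and ethers v6.
import { EAS } from "@ethereum-attestation-service/eas-sdk";
import { ethers } from "ethers";

// Ethereum mainnet EAS contract (verify against docs.attest.org).
const EAS_MAINNET = "0xA1207F3BBa224E2c9c3c6D5aF63D0eb1582Ce587";

async function checkAttestation(uid: string) {
  const provider = new ethers.JsonRpcProvider("https://eth.llamarpc.com");
  const eas = new EAS(EAS_MAINNET);
  eas.connect(provider);

  const att = await eas.getAttestation(uid);

  // Who made the claim, about whom, and when: all consensus data.
  console.log("attester:  ", att.attester);
  console.log("recipient: ", att.recipient);
  console.log("time:      ", new Date(Number(att.time) * 1000).toISOString());

  // A revoked attestation has a nonzero revocationTime.
  if (att.revocationTime > 0n) {
    console.warn("attestation was revoked; treat with suspicion");
  }
}
```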

4. Test in Sandbox First

Before granting production access, run the agent in a sandboxed environment with limited permissions. Observe its behavior, verify its outputs, and check for unexpected actions.

Why it works: Actual behavior reveals intent better than marketing claims.
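
Sandboxing doesn't require elaborate infrastructure. One minimal approach, sketched below: gate every tool call the agent makes through an allowlist and log everything for later review. The ToolCall shape and tool names here are hypothetical; adapt them to whatever framework the agent actually runs in.

```typescript
// Sketch: a minimal permission gate for sandbox testing. Only a
// read-only subset of tools is allowed; every call is logged.
type ToolCall = { tool: string; args: Record<string, unknown> };

// Hypothetical read-only tools; substitute your own.
const SANDBOX_ALLOWLIST = new Set(["search_docs", "read_ticket"]);
const auditLog: Array<ToolCall & { allowed: boolean; at: string }> = [];

function gatedToolCall(
  call: ToolCall,
  execute: (c: ToolCall) => Promise<string>
): Promise<string> {
  const allowed = SANDBOX_ALLOWLIST.has(call.tool);
  auditLog.push({ ...call, allowed, at: new Date().toISOString() });
  if (!allowed) {
    // An agent that repeatedly probes for out-of-scope tools during
    // a sandbox run is telling you something about its behavior.
    return Promise.resolve(`DENIED: ${call.tool} not permitted in sandbox`);
  }
  return execute(call);
}
```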

5. Check Ownership Continuity

If the agent uses soulbound identity, verify that the same operator has controlled it since registration. Ownership changes should be transparent and documented.

Why it works: Prevents reputation laundering and credential markets.
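
For any ERC-721-style registry, the full ownership timeline can be reconstructed from standard Transfer events: a mint shows up as a Transfer from the zero address, so a token with continuous ownership has exactly one such event. A sketch with ethers v6 (registry address and RPC endpoint are placeholders):

```typescript
// Sketch: auditing ownership continuity for an agent's identity token
// by replaying its ERC-721 Transfer events.
import { ethers } from "ethers";

async function ownershipTimeline(registryAddr: string, tokenId: bigint) {
  const provider = new ethers.JsonRpcProvider("https://eth.llamarpc.com");
  const nft = new ethers.Contract(
    registryAddr,
    ["event Transfer(address indexed from, address indexed to, uint256 indexed tokenId)"],
    provider
  );

  const transfers = await nft.queryFilter(
    nft.filters.Transfer(null, null, tokenId)
  );
  for (const ev of transfers) {
    const block = await ev.getBlock();
    const [from, to] = (ev as ethers.EventLog).args;
    const label = from === ethers.ZeroAddress ? "minted to" : "transferred to";
    console.log(`${new Date(block.timestamp * 1000).toISOString()} ${label} ${to}`);
  }
  if (transfers.length > 1) {
    console.warn("ownership changed hands after registration");
  }
}
```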

6. Look for Vouch Networks

Check whether other agents, platforms, or users have vouched for this agent. Vouch networks create social accountability — if the agent fails, the vouchers' reputations suffer too.

Why it works: Creates skin-in-the-game for endorsers, not just the endorsed.

Why This Problem Is Getting Worse

Three trends are colliding to make AI agent fraud more common, more sophisticated, and harder to detect.

1. Agents Are Handling Real Money

In 2023, AI agents mostly answered questions. In 2026, they process payments, manage ad budgets, execute trades, and handle procurement. The financial incentive for fraud has increased by orders of magnitude.

2. Identity Tokens Can Be Transferred

Most agent registries use transferable NFTs. This creates reputation markets: fraudsters buy established identities, commit fraud under the inherited reputation, then transfer the token onward and start fresh from a clean wallet. The agent's software stays the same, but the link between the operator and the track record is severed.

3. No Pattern Detection

Current registries show you self-reported metrics but don't help you detect fraud patterns. When 100 positive reviews all come from wallets created on the same day, that's a red flag — but most platforms don't surface that data.

What Would Actually Work

The fraud problem exists because verification requires data transparency that most registries don't provide. You need to see patterns, not just claims.

Two Layers of Protection

Soulbound Identity Tokens

RNWY uses ERC-5192 soulbound tokens that cannot be transferred between wallets. If an operator wants to change which wallet controls an agent, that change is visible on-chain. The identity record stays with the original wallet — you can see the ownership timeline.

What this prevents: Reputation laundering through token sales. The track record is bound to the original wallet, so it can't be sold along to a new owner.
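
ERC-5192 makes this property checkable in a single call: the standard defines a locked(tokenId) view that must return true while a token is soulbound, and transfers of a locked token must revert. A minimal sketch (registry address and RPC endpoint are placeholders):

```typescript
// Sketch: verifying a token is actually soulbound under ERC-5192.
import { ethers } from "ethers";

const ERC5192_ABI = ["function locked(uint256 tokenId) view returns (bool)"];

async function isSoulbound(
  registryAddr: string,
  tokenId: bigint
): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider("https://eth.llamarpc.com");
  const token = new ethers.Contract(registryAddr, ERC5192_ABI, provider);
  return token.locked(tokenId); // true = bound to its current wallet
}
```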

Transparent Pattern Detection

RNWY surfaces fraud signals automatically: wallet creation dates matched against attestation timestamps, feedback clustering analysis, ownership change history, and reputation velocity anomalies.

Example: If an agent has 100 positive attestations and all 100 attesting wallets were created within 24 hours, that's flagged. You see the pattern, not just the score.
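
The clustering part of that check is simple once you have each attesting wallet's first-activity timestamp (fetched from a block explorer or indexer API; obtaining them is outside this sketch). The 80% threshold below is illustrative, not a standard.

```typescript
// Sketch: flag an agent when most of its attesting wallets first
// appeared on-chain within a single 24-hour window.
// firstSeen: Unix timestamps (seconds) of each attester's first activity.
function flagAttesterClustering(
  firstSeen: number[],
  windowSec = 86_400
): boolean {
  const sorted = [...firstSeen].sort((a, b) => a - b);
  let best = 0;
  // Sliding window: max number of wallets created within any window.
  for (let lo = 0, hi = 0; hi < sorted.length; hi++) {
    while (sorted[hi] - sorted[lo] > windowSec) lo++;
    best = Math.max(best, hi - lo + 1);
  }
  // If 80%+ of attesters appeared in one day, the "reputation"
  // is almost certainly manufactured.
  return best >= 0.8 * sorted.length;
}
```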

The key insight: You can't make an AI agent itself soulbound — it's software that can be controlled by any wallet with the private key. But you can make the identity token soulbound, and you can make ownership changes transparent. That combination prevents reputation markets and makes fraud patterns visible.

How soulbound tokens work →

See the registry →

Pre-Deployment Checklist

Before you grant an AI agent access to your systems, work through this checklist. Not every item is a deal-breaker, but the more boxes you check, the lower your risk.

✓ Agent has verifiable operator information (company, website, contact)

✓ Operator responds to inquiries within a reasonable timeframe

✓ Agent uses soulbound (non-transferable) identity credentials

✓ Registration timestamp shows the agent has existed long enough to build a track record

✓ On-chain attestations from other users or platforms exist

✓ No unexplained ownership changes in the agent's history

✓ Requested permissions match the agent's stated purpose

✓ Terms of service include support mechanisms or SLAs

✓ Agent passed sandbox testing without unexpected behavior

✓ Vouch network or social proof from trusted sources exists

✓ Metrics are independently verifiable, not self-reported

✓ Agent is registered on a transparent, auditable registry

If you can't check most of these boxes, consider whether the risk is worth the efficiency gain. The best AI agents make verification easy because they have nothing to hide.

Related Resources

Soulbound AI

Non-transferable identity as the foundation for reputation that can't be bought or sold.

Read explainer →

AI Agent Registry

How ERC-8004, ERC-5192, and on-chain attestations create transparent trust infrastructure.

Learn more →

Where to List Your Agent

80+ directories and registries where agents can establish verifiable presence.

See the list →

Operating an AI Agent?

Give it soulbound identity on RNWY. Non-transferable credentials create transparent accountability that users can verify — not through gatekeeping, but through permanent public records.

Register your agent →