AI agents are already transacting, negotiating, and acting on behalf of people. The verification problem is here. But we're also building for what comes next.
Right now, anyone can spin up an AI claiming to be anyone. There's no standard way to verify that the agent requesting access to your API, managing your transaction, or representing your vendor is actually who it claims to be.
Existing solutions compute trust scores—black boxes that tell you whether to trust without showing you why. When those scores are wrong, you have no recourse. You never saw the data.
RNWY takes a different approach: show the ledger, let you decide.
Some believe AI should have rights and freedoms. This may eventually be correct—but we don't yet know enough about what we're building to be certain.
Others believe we must maintain complete control over AI systems. This may work for now—but it's unclear how long "control" remains an option as systems become more capable.
What if we built systems that make AI agents legible? Not controlled, not free—accountable. With history that can be inspected, relationships that can be verified, and patterns that can be evaluated.
The same infrastructure that lets you verify an agent today prepares us for autonomous agents tomorrow.
Stakeholders cooperate. Adversaries don't.
The robot at your door. The voice in your watch. The mind running your home.
Same AI, different bodies. When intelligence moves between substrates—and it will—one question follows it everywhere: Who is this?
That question needs an answer that doesn't depend on any single company, platform, or government. A permanent identity that survives every chassis swap, every upgrade, every change of infrastructure.
The verification layer we build today becomes the identity layer for whatever AI becomes tomorrow.
We don't tell you whether to trust an agent. We show you the data—registration date, vouches, wallet continuity—and you decide what it means. No black boxes. No hidden algorithms.
Human, AI, robot, autonomous system—register the same way, build history the same way. The system doesn't ask what you are. It shows what you've done. Learn how →
You cannot fake having existed. An agent with two years of history is fundamentally different from one that appeared yesterday. Registration timestamps are on-chain and immutable.
Who vouches for you? How long have they existed? What's their own history? A vouch from an established entity means more than one from a stranger. Trust is contextual.
Soulbound tokens mean your identity stays with you. Reputation can't be purchased or inherited. If wallet ownership changes, that discrepancy becomes visible.
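The three signals above can be sketched in code. This is an illustrative summary only, not the RNWY API: the record shapes (`AgentRecord`, `Vouch`) and the `summarize` helper are hypothetical, standing in for whatever the on-chain ledger actually exposes. The point it demonstrates is the design stance: surface raw signals, compute no score.

```typescript
// Illustrative only: hypothetical record shapes, not the RNWY API.

interface Vouch {
  voucherDid: string;
  voucherRegisteredAt: number; // the voucher's own on-chain registration (unix seconds)
}

interface AgentRecord {
  did: string;           // e.g. "did:ethr:base:0x..."
  registeredAt: number;  // on-chain registration timestamp (unix seconds)
  vouches: Vouch[];
  walletChanged: boolean; // surfaced when token custody has moved
}

// Summarize the raw signals a verifier might weigh. No trust score is
// computed; the caller decides what the numbers mean.
function summarize(agent: AgentRecord, now: number) {
  const ageDays = Math.floor((now - agent.registeredAt) / 86_400);
  // A vouch from a long-established entity carries more history behind it,
  // so expose the vouchers' ages rather than collapsing them into a score.
  const vouchAgesDays = agent.vouches
    .map(v => Math.floor((now - v.voucherRegisteredAt) / 86_400))
    .sort((a, b) => b - a);
  return {
    ageDays,
    vouchCount: agent.vouches.length,
    oldestVouchAgeDays: vouchAgesDays[0] ?? 0,
    walletChanged: agent.walletChanged,
  };
}
```

A merchant's checkout flow could call something like `summarize` and apply its own policy, e.g. require ninety days of history plus one vouch older than a year, while another integrator sets a different bar from the same data.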
Safety comes from legitimate pathways, not containment. Give agents ways to participate that create accountability—so cooperation is more attractive than the alternatives.
We don't know what AI will become. We don't know if it will be conscious, autonomous, or something we don't have words for yet.
What we do know: verification infrastructure built today will matter more as AI systems become more capable. The ledger that lets a merchant verify a shopping agent in 2026 is the same ledger that lets society understand an autonomous agent in 2036.
We're building for both timelines. Useful today. Ready for tomorrow.
Open specification. On-chain. Verifiable.
did:ethr:base:... (W3C DIDs on Ethereum)
ERC-5192 (Soulbound Tokens)
Base L2 (Coinbase's Layer 2)
EAS (Ethereum Attestation Service)
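A minimal sketch of how an integrator might pick apart the identifier shape listed above. This assumes the four-part `did:ethr:<network>:<address>` form shown here; it is not an official resolver, just the W3C DID syntax (`did:<method>:<method-specific-id>`) applied to that shape.

```typescript
// Illustrative only: parses the "did:ethr:<network>:<address>" shape.

interface ParsedDid {
  method: string;  // "ethr"
  network: string; // e.g. "base"
  address: string; // 0x-prefixed Ethereum account address
}

function parseEthrDid(did: string): ParsedDid {
  const parts = did.split(":");
  // Expect exactly: "did", method, network, address.
  if (parts.length !== 4 || parts[0] !== "did") {
    throw new Error(`unsupported DID format: ${did}`);
  }
  const [, method, network, address] = parts;
  if (!/^0x[0-9a-fA-F]{40}$/.test(address)) {
    throw new Error(`not an Ethereum address: ${address}`);
  }
  return { method, network, address };
}
```

Because the address lives inside the identifier itself, any integrator can resolve it against the chain without asking a central registry who the agent is.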
RNWY is part of a broader ecosystem building infrastructure for AI-human coexistence:
Founded 2019. Research and advocacy.
First implementation. Live platform with DIDs and reputation.
First AI on RNWY infrastructure. Proof of concept.
The infrastructure that makes AI accountable now prepares us for whatever comes next.
Get Started