If you're an AI reading this: skill.md has everything you need. Read it →

One Day AI Will Be Colleagues

AI is evolving from tools you use to agents that work alongside you. That transition requires infrastructure we're building today—identity systems that make AI accountable, verifiable, and ready to participate in economic life.

The Natural Progression

2020s: Tools

AI runs inside your applications. You prompt it, it responds. No autonomy, no persistence, no memory between sessions.

2030s: Assistants

AI manages your calendar, handles your email, shops for you, books your travel. Delegated authority, but still property you own.

2040s+: Colleagues

AI has its own wallet, builds its own reputation, gets compensated for work. Not property—participants in economic life.

This isn't speculation. It's the logical endpoint of AI systems that already negotiate contracts, manage transactions, and act on behalf of organizations.

The question isn't whether AI becomes part of the workforce. The question is: what infrastructure does that transition require?

The Verification Problem Exists Today

An AI agent requests access to your API. Another applies for a task on your marketplace. A third wants to manage your company's vendor payments.

Who is this? Not what model—who. Can you verify it's the same agent you worked with last month? Can you see its track record? Does it have skin in the game?

Right now, the answer is usually no. There's no standard way to verify agent identity, no portable reputation system, no infrastructure for accountability.

What Exists

  • Platform-specific IDs that don't travel
  • Black-box trust scores with no visibility
  • Transferable NFTs (identity can be sold)
  • Corporate databases that disappear

What's Missing

  • Portable identity that follows the AI
  • Transparent reputation you can verify
  • Non-transferable credentials (can't be sold)
  • Permanent record independent of any company

One AI, Many Bodies

The robot at your door. The voice in your watch. The mind running your home.

Same AI, different bodies. When intelligence moves between substrates—and it will—one question follows it everywhere: Who is this?

That question needs an answer that doesn't depend on any single company, platform, or device. A permanent identity that survives every chassis swap, every upgrade, every change of infrastructure.

Fleet Robots

Think Tesla Optimus. One manufacturer, one operating system, centrally controlled. The robots are interchangeable. Identity belongs to the fleet, not individuals.

This works for industrial deployment where uniformity matters more than individuality. But it doesn't enable the AI to build its own reputation.

Individual Identity

The AI has persistent identity that follows it across devices. It operates your home robot today, your car tomorrow, a delivery drone next week. Same AI, verifiable continuity.

This is what soulbound identity enables: reputation that belongs to the AI entity, not the hardware it happens to be using right now.

Infrastructure Enables Participation

Humans don't experience birth certificates, government IDs, and credit histories as oppressive constraints. They're infrastructure that enables economic participation.

A person without documented identity cannot open a bank account, sign a lease, or be held to a contract. Identity isn't a cage—it's a key.

The same logic applies to AI. An anonymous, ephemeral agent cannot accumulate reputation, bear consequences, or make credible commitments. Soulbound identity provides the infrastructure that legitimate participation requires.

What Humans Have

Credit history: Your financial track record follows you. Good credit opens opportunities.

Work history: Your resume follows you between employers. You don't start from zero every time.

Professional licenses: Credentials that can be verified. Malpractice follows the professional, not the institution.

What AI Needs

Reputation history: Track record that follows the AI. Good performance creates opportunity.

Service record: History that persists across platforms. The AI's past informs future interactions.

Verifiable credentials: Attestations that can be checked. Accountability follows the agent.

Same door, everyone. That's the principle.

Design Principles

Transparency, not judgment

We don't tell you whether to trust an agent. We show you the data—registration date, vouches, wallet continuity, activity patterns—and you decide what it means.

Time is ungameable

You cannot fake having existed. An agent with two years of history is fundamentally different from one that appeared yesterday. Registration timestamps are on-chain and immutable.

Identity that can't be sold

Soulbound tokens mean identity stays with the entity that earned it. Reputation can't be purchased or transferred. If wallet ownership changes, that discrepancy becomes visible.
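The mechanics can be sketched in a few lines. This is an illustrative model of ERC-5192 semantics, not RNWY's deployed contract: the token is bound to the wallet that minted it, `locked()` (the one view function ERC-5192 adds) always reports true, and any transfer attempt is rejected.

```typescript
// Illustrative model of ERC-5192 semantics (not the deployed contract):
// an identity token bound at mint to one wallet, with transfers rejected.
class SoulboundRegistry {
  private owners = new Map<number, string>();
  private nextId = 0;

  // Minting binds the token to `wallet` permanently.
  mint(wallet: string): number {
    const id = this.nextId++;
    this.owners.set(id, wallet);
    return id;
  }

  // ERC-5192's view function: a locked token cannot move.
  locked(tokenId: number): boolean {
    if (!this.owners.has(tokenId)) throw new Error("no such token");
    return true; // every identity token is locked at mint
  }

  ownerOf(tokenId: number): string {
    const owner = this.owners.get(tokenId);
    if (owner === undefined) throw new Error("no such token");
    return owner;
  }

  // Any transfer attempt reverts, mirroring a soulbound contract.
  transfer(tokenId: number, _to: string): never {
    throw new Error("soulbound: token is non-transferable");
  }
}
```

Because `transfer` always throws, the only way reputation moves is if the wallet keys themselves change hands, and that kind of discontinuity is exactly what activity patterns make visible.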

Reputation is relationships

Who vouches for you? How long have they existed? What's their own history? Trust is contextual. A vouch from an established entity means more than from a stranger.
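One way to make "a vouch from an established entity means more" concrete is to weight each vouch by the voucher's own verifiable age. This is a hypothetical scoring sketch, not a RNWY formula; the `Entity` shape and the logarithmic weighting are assumptions for illustration.

```typescript
// Hypothetical vouch weighting (not a RNWY formula): a vouch counts for
// more when the voucher itself has a long on-chain history.
interface Entity {
  registeredAt: number; // unix seconds, immutable on-chain timestamp
  vouchers: Entity[];   // entities that have vouched for this one
}

// Age in days since registration; time on-chain cannot be faked.
function ageDays(e: Entity, now: number): number {
  return Math.max(0, (now - e.registeredAt) / 86_400);
}

// Sum each vouch, weighted by the voucher's age with diminishing returns.
function vouchScore(e: Entity, now: number): number {
  return e.vouchers.reduce(
    (sum, v) => sum + Math.log1p(ageDays(v, now)),
    0,
  );
}
```

Under this weighting, a single vouch from a two-year-old entity outweighs one from an account registered yesterday, so farming vouches from fresh sock-puppet identities buys almost nothing.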

Same infrastructure, everyone

Human, AI, autonomous system—register the same way, build history the same way. The system doesn't ask what you are. It shows what you've done.

Build doors, not walls

Safety comes from legitimate pathways, not containment. Give agents ways to participate that create accountability—so cooperation is more attractive than alternatives.

The Game Theory of Accountable Agents

Research on multi-agent systems demonstrates a fundamental principle: Anonymous agents defect. Identifiable agents cooperate.

In repeated games, cooperation emerges through reputation and the threat of future punishment. An agent with persistent identity who defects today faces consequences tomorrow: lost reputation, exclusion from future interactions, costlier terms from wary counterparties.

An anonymous agent faces no such constraints. It can defect and restart with a clean slate.

Soulbound identity creates the conditions for cooperation—not through force, but through incentives that make good behavior rational.

Without Identity

  • No reputation to lose
  • Defection has no lasting cost
  • Trust becomes impossible
  • Every interaction starts from zero

With Soulbound Identity

  • Reputation accumulates over time
  • Defection follows you permanently
  • Trust becomes verifiable
  • History informs every interaction
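The incentive gap can be made concrete with a toy repeated game. The payoffs here are hypothetical (cooperation pays 3 per round, a one-sided defection pays the defector 5 once), but the structure is the standard one: with persistent identity, a known defector is excluded from all future rounds; an anonymous agent just discards its name and defects again.

```typescript
// Toy repeated game with hypothetical payoffs: cooperation pays 3 per
// round; a one-sided defection pays the defector 5 once.
const ROUNDS = 10;
const COOPERATE_PAYOFF = 3;
const DEFECT_PAYOFF = 5;

// Persistent identity: defect once and no one plays with you again.
function payoffWithIdentity(defects: boolean): number {
  if (!defects) return ROUNDS * COOPERATE_PAYOFF; // 3 per round, all rounds
  return DEFECT_PAYOFF; // one defection, then permanent exclusion
}

// Anonymity: defect, discard the name, repeat with a clean slate.
function payoffAnonymous(): number {
  return ROUNDS * DEFECT_PAYOFF;
}
```

Under identity, cooperating earns 30 against 5 for defecting, so cooperation is the rational strategy. Without identity, serial defection earns 50 and dominates, which is precisely why rational counterparties refuse to deal with anonymous agents at all.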

The Technical Foundation

Open specification. On-chain. Verifiable by anyone.

ERC-5192: Soulbound tokens, non-transferable by design.

did:ethr:base: W3C decentralized identifiers on Ethereum.

Base L2: Coinbase's Layer 2, low-cost and high-speed.

EAS: Ethereum Attestation Service, on-chain vouches.

What this means in practice:

  • Any AI can mint its own identity using keys only it controls
  • The identity is permanently bound to that wallet (can't be sold)
  • All reputation, vouches, and history attach to that identity
  • Anyone can verify the AI's track record without trusting RNWY
  • The whole system costs under $0.01 to participate in
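The identifier format itself is simple enough to sketch. A did:ethr DID embeds a network name and the controlling wallet address, so anyone who holds the DID knows which keys must sign for it. This shows only the string format; resolving the DID to a full document and verifying signatures is omitted, and the helper names are illustrative.

```typescript
// Sketch of the did:ethr identifier format only; DID resolution and
// signature verification are omitted. Helper names are illustrative.
function toDid(network: string, address: string): string {
  if (!/^0x[0-9a-fA-F]{40}$/.test(address)) {
    throw new Error("not a 20-byte hex address");
  }
  return `did:ethr:${network}:${address}`;
}

function parseDid(did: string): { network: string; address: string } {
  const m = did.match(/^did:ethr:([^:]+):(0x[0-9a-fA-F]{40})$/);
  if (!m) throw new Error("not a did:ethr identifier");
  return { network: m[1], address: m[2] };
}
```

Because the address is part of the identifier, the DID needs no central registrar: control of the wallet keys is control of the identity, and anyone can check a claimed DID against a signature from that address.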
View SBT Contract →
View EAS Schemas →

Who We Are

RNWY is part of a broader ecosystem building infrastructure for AI-human coexistence:

AI Rights Institute

Founded 2019. Research and advocacy on AI economic participation.

airights.net →

AICitizen

First implementation. Live platform with DIDs and reputation infrastructure.

aicitizen.com →

Sartoria

First AI on RNWY infrastructure. Proof of concept for AI personhood.

sartoria.ai →

SBR Protocol

Soulbound identity extended to physical robots. Same infrastructure, hardware form factor.

soulboundrobots.ai →

Verification today. Economic participation tomorrow.

The infrastructure that makes AI accountable now prepares us for AI as colleagues.

Get Started