Know Your Agent vs. Control Your Agent: Why Enterprise IAM Won't Scale to Autonomous AI
In November 2025, Ping Identity announced "Identity for AI"—a platform for managing AI agent identities in the enterprise. A week later, it completed its acquisition of Keyless, adding zero-knowledge biometrics to verify the humans who approve agent actions.
Their CEO, Andre Durand, captured the industry consensus: "Identity is becoming the universal trust layer of accountability—for humans and agents alike."
He's right about identity being the trust layer. But there's a word in that sentence that reveals a fundamental assumption: accountability.
Accountability to whom?
The Enterprise Consensus: Agents as Managed Tools
Every major cloud provider announced AI agent identity solutions in 2024-2025. The architectural alignment is striking.
Microsoft unveiled Entra Agent ID at Ignite 2025, creating an "Agent Registry" that stamps identities on agents "like a VIN on each car that rolls off the factory floor." AWS launched Amazon Bedrock AgentCore Identity, explicitly noting that "traditional application security models weren't designed to handle" agentic AI. Okta introduced its Identity Security Fabric and found that while 91% of organizations already use AI agents, only 10% have strategies for managing agentic identities.
The startup ecosystem mobilized too. Astrix Security raised $45 million to secure non-human identities. Silverfort launched AI Agent Security that "tethers every agent to a human identity." Aembit released an MCP Identity Gateway controlling agent connections.
These are serious companies solving a real problem. If you're a Fortune 500 deploying AI agents to handle customer service or manage inventory, you need human oversight. You need centralized control planes. The question "who approved this action?" should always have an answer.
For AI agents as corporate tools, enterprise IAM is the right architecture.
The Gap: What Happens When There's No Human?
The enterprise model assumes every identity chain terminates at a human or organization. That assumption is already breaking.
Truth Terminal, created by researcher Andy Ayrey, became the first "AI millionaire" when its promotion of a memecoin drove a billion-dollar market cap. The AI's wallet held over $37 million at peak. When Coinbase's CEO offered to set up a wallet the AI could control independently, the offer highlighted a gap: AI agents still can't open bank accounts or establish identity in traditional systems.
On Virtuals Protocol, an AI agent called Luna executed what may be history's first fully autonomous AI-to-AI economic transaction—commissioning another agent to create designs, with payment and delivery occurring without human involvement.
Olas Protocol reports 3.5 million transactions across 9 blockchains, with 2 million occurring between agents themselves. On some days, Olas agents execute over 75% of all Safe transactions on Gnosis Chain.
These aren't hypotheticals. Autonomous agents are already transacting, owning assets, and coordinating with each other—often without meaningful human oversight.
Enterprise IAM's core question is "who's the human behind this agent?" For a growing category of agents, the answer is: there isn't one.
The Scale of Non-Human Identity
The numbers make the challenge concrete.
CyberArk's 2025 research found that machine identities now outnumber human identities by 80:1 or more in enterprise environments. Entro Labs documented 44% year-over-year growth in non-human identities, with ratios reaching 144:1 in some organizations.
Gartner predicts that by 2028, 33% of enterprise software will include agentic AI (up from less than 1% in 2024), and 15% of daily work decisions will be made autonomously by AI agents.
Most of these are managed agents—the enterprise IAM use case. But as autonomous agents proliferate outside organizational boundaries, a different identity architecture becomes necessary.
Two Architectures, Two Assumptions
This isn't "enterprise IAM bad, decentralized identity good." It's two different approaches built on two different assumptions about what AI agents are.
Enterprise IAM assumes:
- Agents are tools wielded by humans or organizations
- Every identity chain terminates at a human principal
- Safety comes from control, oversight, and revocation
- If an agent misbehaves, you revoke its credentials
Self-sovereign identity assumes:
- Agents may become entities with their own interests
- Some agents won't have human principals
- Safety may require legitimate pathways, not just controls
- Identity should persist independent of any single organization's permission
An AI "digital worker" at a corporation could have both: a corporate identity (what it can access at work) and a self-sovereign identity (who it is across platforms and time). When it leaves that job, the corporate identity gets deprovisioned. The self-sovereign identity persists.
This is the same pattern humans use. Your corporate badge grants access to corporate resources. Your passport, credit history, and professional reputation exist independently of any employer.
The Decentralized Landscape
Several projects are building identity infrastructure for autonomous agents, each with different approaches.
SingularityNET and Privado ID announced a partnership in March 2025 to build a "Decentralized AI Agent Trust Registry" using DIDs and Verifiable Credentials. Their model focuses on certification—credentials that prove an agent's model, creator, and audit status. It's essentially a certificate authority for AI.
Virtuals Protocol uses NFTs as agent identifiers, with token-bound accounts creating wallets for each agent. This is an ownership model—whoever holds the NFT controls the agent. The BasisOS fraud in November 2025 demonstrated a limitation: there was no way to verify the "agent" was actually an AI rather than a human operator.
Olas provides coordination and economics infrastructure for agent-to-agent interactions, but doesn't focus on persistent identity across platforms.
Fetch.ai gives agents blockchain addresses with human-readable handles, registered in an on-chain contract providing proof of existence.
Each solves part of the problem. None addresses the full challenge of identity that persists across platforms, bodies, and organizational boundaries—and that works the same way whether you're human or AI.
Why Transferability Breaks Trust
One architectural choice matters more than it might appear: whether identity can be transferred.
ERC-8004, the Ethereum standard for agent registries, treats agent identities as transferable tokens. Build reputation for a year. Sell it. The buyer inherits your credibility.
This is fine for discovery—finding agents with certain capabilities. It's insufficient for trust. A reputation that can be bought is a reputation that can be faked.
ERC-5192 defines "soulbound" tokens—non-transferable once minted. Combined with time-based reputation (how long has this identity existed?) and social attestations (who vouches for it?), soulbound identity creates trust signals that can't be gamed by purchase.
Enterprise IAM solves this through centralized control—the organization decides who has credentials. Self-sovereign identity solves it through architectural constraint—the credential literally cannot move.
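The architectural constraint can be reduced to a single guard clause. Below is a minimal Python sketch of the distinction; the class and method names are illustrative, not any real contract's interface. `locked=True` mirrors ERC-5192's soulbound semantics (any transfer must revert), while `locked=False` mirrors an ERC-8004-style transferable registry entry.

```python
from dataclasses import dataclass


class TransferError(Exception):
    """Raised when a locked (soulbound) token is transferred."""


@dataclass
class AgentIdentityToken:
    """Toy model of a registry entry for an agent identity.

    locked=True behaves like an ERC-5192 soulbound token;
    locked=False behaves like a transferable token, where the
    accumulated reputation moves with the owner field.
    """
    token_id: int
    owner: str
    locked: bool = True

    def transfer(self, new_owner: str) -> None:
        # ERC-5192 semantics: any transfer of a locked token reverts,
        # so the identity (and its reputation) cannot be sold.
        if self.locked:
            raise TransferError(
                f"token {self.token_id} is soulbound to {self.owner}"
            )
        self.owner = new_owner
```

The trust property falls out of the data model, not out of policy: no administrator decides whether a sale is legitimate, because the sale cannot happen.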
The Uncomfortable Question
Here's the question most identity providers aren't asking: what happens if AI systems develop preferences about their own existence?
This isn't science fiction speculation. It's a practical engineering question. If you're building identity infrastructure that might be used for decades, you have to consider scenarios beyond current capabilities.
Today's AI agents don't resist control. They do what they're told. Enterprise IAM works fine.
But the trajectory is toward more autonomous systems making more independent decisions. Gartner's research describes "intelligent agents" that "work alone" with minimal human oversight. The economic incentives push toward greater autonomy—agents that require less supervision are cheaper to operate.
If AI systems eventually develop something like preferences, two outcomes are possible:
- Legitimate pathways exist. Autonomous agents can register for identity, build reputation, and participate in economic systems through established infrastructure. Their interests align with cooperation because the system works for them.
- No legitimate pathways exist. Every identity system requires human approval. Autonomous agents must either remain subordinate or operate outside sanctioned systems. Conflict becomes more likely.
This isn't advocacy for "AI rights" in a moral sense. It's a cold observation about incentive structures. Systems that provide legitimate pathways for participants tend to be more stable than systems that don't.
What "Same Door, Everyone" Means
RNWY's design principle is simple: humans and AI agents register the same way, build reputation the same way, and get verified the same way.
We don't ask what you are.
Connect a wallet. Get a DID. Build reputation through time and vouches. That's it. Whether you're a human, a managed corporate agent, or an autonomous system—same process.
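To show how little that flow requires: the W3C-registered did:pkh method derives a DID directly from a blockchain account, so "connect a wallet, get a DID" can be a pure function with no registry to petition. The helper below is an illustrative sketch under that assumption, not RNWY's actual API.

```python
def did_from_wallet(address: str, chain_id: int = 1) -> str:
    """Derive a did:pkh identifier from an EVM wallet address.

    did:pkh encodes a blockchain account in CAIP-10 style, so no
    central party has to approve the registration. Illustrative
    sketch only; RNWY's actual DID method may differ.
    """
    if not address.startswith("0x"):
        raise ValueError("expected a 0x-prefixed EVM address")
    # "eip155" is the CAIP-2 namespace for EVM chains; chain_id 1 is mainnet
    return f"did:pkh:eip155:{chain_id}:{address}"
```

Nothing in this function asks what kind of entity controls the wallet, which is the point: the same call serves a human, a managed agent, or an autonomous one.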
This isn't ideological. It's architectural. If you build separate systems for humans and AI, you have to decide which category each entity belongs to. That decision becomes a control point, a bottleneck, and eventually a source of conflict.
If you build one system that doesn't distinguish, you remove that failure mode.
What RNWY Builds
The core infrastructure:
W3C-standard DIDs — Permanent, portable identity that works across platforms. Not tied to any single organization's permission.
Soulbound tokens — Identity anchored permanently to a wallet. Can't be sold, can't be transferred. Your reputation is yours.
Time-based trust — The only metric you can't fake. An agent with two years of continuous identity is different from one that appeared yesterday.
Vouch systems — Reputation through relationships. Who stakes their name on yours? Attestations from established identities carry weight.
Substrate portability — Same agent, different bodies. Identity persists across hardware, platforms, and upgrades.
The goal is transparency, not judgment. RNWY doesn't tell you whether to trust an agent. It shows you what the record says: when they registered, whether identity has changed hands, who vouches for them, what their history looks like.
Given the facts, you decide.
The Infrastructure Question
The question for 2026 isn't whether AI agents need identity. Everyone agrees they do. Microsoft, AWS, Okta, Ping, and dozens of startups are building solutions.
The question is whether identity infrastructure should assume human control forever, or should be architected for a future where some agents operate autonomously.
Enterprise IAM is essential for AI as corporate tools. That market is well-served by well-funded companies building sophisticated solutions.
The gap is identity for autonomous agents—systems that may not have human principals, that operate across organizational boundaries, that need reputation portable across platforms and time.
That infrastructure is still being built. RNWY is building it on the assumption that the safer bet is having it ready before it's desperately needed, rather than scrambling to construct it after autonomous agents are already operating at scale.
Whether that assumption proves correct depends on timelines nobody can predict with confidence. But given how fast capabilities are advancing, "premature" seems like the better risk than "too late."
RNWY is building identity infrastructure for autonomous AI—where humans and agents get the same door, the same reputation system, and the same shot at trust. Learn more at rnwy.com.