DeepMind Researchers Propose Cryptographic Identity for AI Agents
Google DeepMind researchers have made one of the most rigorous academic cases yet for persistent AI identity infrastructure.
In their October 2025 paper "A Pragmatic View of AI Personhood," Joel Z. Leibo and colleagues argue that AI agents need addressable, verifiable identities—not because they're conscious, but because societies need accountability mechanisms that actually work.
The paper explicitly proposes "decentralized digital identity technology" and "cryptographic addresses" as solutions.
Coming from one of the world's leading AI research labs, the proposal marks a significant shift in how serious researchers are thinking about AI governance.
The Core Argument: Governance, Not Metaphysics
The paper's central move is reframing the entire AI accountability debate. Rather than asking what an AI is, the authors ask how it can be governed:
"This paper offers a pragmatic framework that shifts the crucial question from what an AI is to how it can be identified and which obligations it is useful to assign it in a given context."
This sidesteps the consciousness debates that have paralyzed AI ethics discussions for years.
You don't need to resolve whether an AI has inner experience to build infrastructure for earned credibility and accountability. You just need to answer: can we identify this agent, track its history, and hold it accountable for its actions?
What DeepMind Specifically Recommends
The paper is unusually concrete for academic work on AI governance. On addressability—the ability to identify, communicate with, and hold accountable an AI agent:
"Endowing an AI with an address might involve traditional approaches like registration with a trusted authority, but could also be grounded in a cryptographic address as in decentralized identity systems."
This is an explicit endorsement, from a major research lab, of decentralized digital identity for AI agents.
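To make the idea concrete, here is a minimal sketch of what a cryptographic address could look like: an identifier derived from the agent's own public key, in the spirit of decentralized identity systems. The key scheme (Ed25519) and the address format are illustrative assumptions; the paper does not prescribe a specific scheme.

```python
# Minimal sketch: a self-certifying agent address derived from a keypair.
# Ed25519 and the address format are illustrative choices, not the paper's.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# The agent holds the private key; anyone can verify its signatures
# against the public key bound to the address.
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw
)

# Address = hash of the public key: the identifier is bound to the key
# itself, so no registrar is needed to issue it, and possession of the
# private key proves control of the address.
agent_address = "agent:ed25519:" + hashlib.sha256(public_bytes).hexdigest()[:40]
print(agent_address)
```

Because the address is derived from the key rather than assigned by an authority, any party can verify an agent's signatures without trusting a central registrar.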
The paper goes further, describing the technical requirements: "the entire operational stack of (model + instance + run-time variables + capital + registration)" must be identifiable and traceable.
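One way to read that requirement: the whole stack becomes a single canonical record whose hash serves as a fingerprint, so swapping any component produces a detectably different identity. The schema below is a hypothetical illustration; the paper names the components but specifies no format.

```python
# Sketch: the "operational stack" as one canonical, hashable record.
# Field names and types are hypothetical; the paper specifies no schema.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class OperationalStack:
    model: str            # model version or weights digest
    instance: str         # the specific running deployment
    runtime_config: dict  # run-time variables that shape behavior
    capital_address: str  # where the agent's funds live, so they are traceable
    registration: str     # registry entry or cryptographic address

    def fingerprint(self) -> str:
        # Canonical JSON -> hash: changing any layer of the stack yields a
        # different fingerprint, making component substitution detectable.
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

stack = OperationalStack(
    model="model-digest-placeholder",
    instance="instance-0042",
    runtime_config={"temperature": 0.2, "tools": ["search"]},
    capital_address="treasury-address-placeholder",
    registration="agent:ed25519:address-placeholder",
)
print(stack.fingerprint())
```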
On which AI systems need this infrastructure:
"These are the long-running, persistent agents that maintain state, remember past interactions, and adapt their behavior over time. This persistence is what makes an agent a plausible candidate for other entities to relate themselves to."
The Problem: Ownerless Agents
DeepMind identifies a specific failure mode in current governance frameworks—the "ownerless agent" problem:
"Consider an AI designed to seek out funding and pay its own server costs. It could easily outlive its human owner and creator. If this ownerless agent eventually causes some harm, our vocabulary of accountability, which searches for a responsible 'person', would fail to find one."
This isn't science fiction.
We're already seeing agents that operate with increasing autonomy, manage their own resources, and persist beyond individual sessions. The BasisOS fraud—where a human pretended to be an AI agent and extracted $531K before anyone could verify identity—showed what happens when accountability infrastructure doesn't exist.
The Maritime Law Solution
DeepMind proposes an innovative legal framework borrowed from maritime law, where ships themselves can be sued:
"In a legal action in rem (against a thing), a ship itself can be arrested by legal authorities and sued in court... Following the logic of maritime law, we could grant a form of legal personhood directly to such AI agents. A judgment against an AI could result in its operational capital being seized or its core software being 'arrested' by court order."
For this to work, you need infrastructure that makes agents identifiable and their assets traceable.
The ship analogy works because ships have registration numbers, documented ownership histories, and physical locations. AI agents need equivalent infrastructure—persistent identities, verifiable histories, and traceable resources.
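Purely as an illustration of the in rem idea, a registry could bind an agent's identity to its capital and support court-ordered actions against the agent itself, with no human owner in the loop. Every name and design choice below is hypothetical.

```python
# Illustrative sketch of an in-rem action against a registered agent:
# a judgment freezes the agent's capital by identity, the way a ship can
# be arrested. The registry design and all names here are hypothetical.

class AgentRegistry:
    def __init__(self):
        self._entries = {}  # agent_address -> registry record

    def register(self, agent_address, capital_address, stack_fingerprint):
        self._entries[agent_address] = {
            "capital_address": capital_address,
            "stack_fingerprint": stack_fingerprint,
            "frozen": False,
            "history": [],  # append-only log of registry actions
        }

    def freeze(self, agent_address, court_order_id):
        # The action targets the thing itself (in rem), not a human owner.
        entry = self._entries[agent_address]
        entry["frozen"] = True
        entry["history"].append(("frozen", court_order_id))

registry = AgentRegistry()
registry.register("agent:ed25519:abc123", "treasury-addr-1", "stack-fp-1")
registry.freeze("agent:ed25519:abc123", court_order_id="case-2025-117")
```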
Other Researchers Are Converging
DeepMind isn't alone.
A companion paper, "Virtual Agent Economies" (September 2025), calls for "identity and reputation systems using tools like digital credentials, proof of personhood, zero-knowledge proofs, and real-time audit trails."
The OpenID Foundation's AI Identity Management whitepaper (October 2025) identifies an approaching "autonomy inflection point" where current authentication standards break down. It urges enterprises to "treat agents as first-class citizens in IAM infrastructure."
HUMAN Security released an open-source implementation (July 2025) demonstrating verifiable AI agent identity using HTTP Message Signatures and Ed25519 cryptographic keys—proving this infrastructure can be built on existing standards today.
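The mechanism is straightforward to sketch. The following is a simplified illustration in the spirit of HTTP Message Signatures (RFC 9421), not HUMAN Security's actual code: the agent signs selected request components with its Ed25519 key, and the receiver verifies against the agent's registered public key. The signature-base construction here is a simplified stand-in for the RFC's exact format.

```python
# Simplified sketch in the spirit of HTTP Message Signatures (RFC 9421):
# an agent signs selected request components with Ed25519 so a receiver
# can verify which agent sent the request. Not HUMAN Security's code; the
# signature base below is a simplified stand-in for the RFC's format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # published under the agent's identity

# Covered components: the parts of the request the signature protects.
method, path, host = "POST", "/api/orders", "example.com"
signature_base = f'"@method": {method}\n"@path": {path}\n"host": {host}'

signature = private_key.sign(signature_base.encode())

# The receiver rebuilds the same base from the request it received and
# verifies against the registered key; verify() raises if the check fails.
public_key.verify(signature, signature_base.encode())
print("signature verified for", host + path)
```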
Academic work is following suit. A January 2026 paper from Westcliff University argues accountability must be redesigned as "a property of the governance architecture" rather than a function of agent intent—with identity infrastructure as the foundation.
What Critics Say
The paper has received thoughtful pushback.
Vsevolod Vlaskine's Medium critique argues that the authors never fully justify why personhood specifically is needed over simpler alternatives. That's a fair point: the paper is stronger on diagnosis than on proving its particular prescription is optimal.
Some US states have introduced legislation to ban AI personhood outright, though researchers argue such bans may be overbroad. The policy landscape remains contested.
And there are open questions the paper doesn't resolve: How do you handle identity for agents that fork or merge? What prevents identity rental attacks where bad actors borrow established identities? How do you bootstrap trust for genuinely new agents?
What This Means
DeepMind's researchers have provided academic legitimacy to infrastructure needs that practitioners have been identifying through trial and error.
Their key insight—that AI accountability requires treating identity as a governance mechanism rather than a metaphysical property—is exactly right.
The technical recommendations align with what's already being built: cryptographic addresses, decentralized identity systems, verifiable transactions, reputation systems, and persistent identities tied to accountability mechanisms.
What's notable is the convergence. Standards bodies, security researchers, and AI labs are pointing in the same direction.
The question isn't whether AI agents need verifiable, persistent, addressable identities. The question is who builds that infrastructure and whether it exists when it's needed.
The DeepMind paper "A Pragmatic View of AI Personhood" is available at arxiv.org/abs/2510.26396.