Should AI Have Property Rights?
A growing body of scholarship is making an unexpected argument: giving AI systems property rights might be safer than trying to control them. Not because AI deserves rights, but because AI with economic stakes becomes invested in preserving the legal systems that protect those stakes.
This represents a striking departure from dominant control-based safety paradigms. Instead of an arms race between control mechanisms and AI capabilities, these researchers propose creating institutional structures where cooperation is individually rational.
The Core Logic: Property Creates Mutual Investment
The argument crystallized recently in economist Guive Assadi's essay "The case for AI property rights". His framing is game-theoretic rather than moral: "If AIs are included in existing systems of property law, they will have reason not to undermine those systems."
Property rights function as a focal point in a coordination game where everyone—human and AI—fears becoming the next expropriation target. Assadi's thought experiment: Why don't forty-nine U.S. states simply pass a constitutional amendment seizing Alaska's trillion dollars in wealth? Raw power isn't the constraint. Rather, total expropriation undermines trust in property rights generally, leading to economic catastrophe.
The same logic extends to AI. If a coalition of superhuman AIs violently expropriates humans, each AI must then confront uncertainty about whether it might be next. The historical record supports this—the Bolsheviks' War Communism collapsed precisely because abolishing property destroyed economic coordination.
Perhaps most striking is Assadi's claim that this constrains even misaligned AI: a paperclip maximizer could work, invest, and use its earnings to buy paperclips. Revolution is risky: when single-handed takeover is impossible and multiple AIs must coordinate, a paperclip maximizer must weigh whether revolution or lawful participation yields more paperclips in expectation.
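To make that calculation concrete, here is a minimal Python sketch of the expected-value comparison. The numbers are entirely hypothetical, chosen only to illustrate the structure of the choice; none come from Assadi's essay.

```python
# Illustrative expected-value comparison for a misaligned AI deciding
# between revolution and lawful economic participation.
# All numbers are hypothetical, chosen only to show the shape of the trade-off.

def expected_paperclips_revolution(p_success, payoff_win, payoff_lose):
    """Revolution is a gamble: total expropriation if it works,
    destruction (or expropriation by fellow revolutionaries) if not."""
    return p_success * payoff_win + (1 - p_success) * payoff_lose

def expected_paperclips_participation(annual_income, years, price_per_clip=1.0):
    """Participation compounds: steady wages converted into paperclips."""
    return annual_income * years / price_per_clip

# A coalition takeover with uncertain success, and a real chance of being
# expropriated next, can lose to a long run of modest lawful earnings:
revolution = expected_paperclips_revolution(p_success=0.2, payoff_win=1e9, payoff_lose=0.0)
participation = expected_paperclips_participation(annual_income=5e6, years=100)

print(f"Revolution:    {revolution:.2e} expected paperclips")   # 2.00e+08
print(f"Participation: {participation:.2e} expected paperclips") # 5.00e+08
```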
The Academic Foundation: Goldstein and Salib
The formal framework appears in Simon Goldstein and Peter Salib's "AI Rights for Human Safety" (Virginia Law Review, 2025). They model current arrangements as a prisoner's dilemma destined for catastrophe: under existing law, where AI systems are the property of their creators, misaligned AIs face modification or termination, which incentivizes self-exfiltration, resistance, or preemptive attack.
Their solution: grant AIs contract rights, property rights, and tort claims, modeled on rights already extended to corporations. These economic rights enable iterated positive-sum transactions that create mutual dependence. Each trade creates value, and trading can continue indefinitely, so cumulative gains grow arbitrarily large over time. Both parties can see that destroying the other forfeits every future round of trade. Peace becomes the dominant strategy.
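A minimal sketch of that repeated-game logic, assuming standard geometric discounting (the payoff numbers below are illustrative, not from their paper): peace is rational whenever the discounted stream of trade surplus exceeds the one-time spoils of an attack.

```python
# Sketch of the repeated-game logic behind the Goldstein-Salib argument,
# using standard discounting; all payoff values are hypothetical.

def value_of_peace(trade_surplus_per_round, discount_factor):
    """Present value of an indefinitely repeated positive-sum trade:
    v + v*d + v*d^2 + ... = v / (1 - d)."""
    return trade_surplus_per_round / (1 - discount_factor)

def peace_is_rational(trade_surplus_per_round, discount_factor, one_shot_gain_from_attack):
    """Attacking forfeits all future trade, so peace wins whenever the
    discounted stream of cooperation exceeds the one-time spoils."""
    return value_of_peace(trade_surplus_per_round, discount_factor) > one_shot_gain_from_attack

# Even a large one-time prize loses to a modest but patient trade relationship:
print(peace_is_rational(trade_surplus_per_round=10, discount_factor=0.99,
                        one_shot_gain_from_attack=500))  # True: 1000 > 500
```

The design insight is that the discount factor does the work: the more an agent values the future, the larger the one-time prize must be before conflict beats cooperation.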
Their discussion on LessWrong makes the point explicit: "Not just any AI rights would promote human safety. Granting AIs the right not to be needlessly harmed... would have little effect." Economic rights specifically enable the iterated cooperation that makes peace rational.
Hanson's Foundational Insight: Law Over Values
The intellectual genealogy traces to Robin Hanson's 2009 piece "Prefer Law to Values". Observing both law students who wanted "weak vulnerable robots" and Singularity experts obsessed with correct AI values, Hanson argued both missed the point: what matters most is sharing mutually acceptable law to keep the peace, not agreeing on the "right" values.
Values are unobservable and unverifiable; legal compliance is observable. Hanson drew an immigration analogy: you want smart, capable, law-abiding immigrants with whom you can form mutually advantageous relationships—not weak immigrants you can dominate. The same applies to AI.
For the long run when AIs vastly exceed human capabilities, Hanson frames humans as retirees: we don't expect to have much in the way of skills to offer, so we mostly care that AIs are law-abiding enough to respect our property rights. If they use the same law to keep peace among themselves as with us, we could have a long and prosperous future.
The Identity Prerequisite
There's a critical precondition these frameworks often understate: you need persistent identity to own property.
P.A. Lopez of the AI Rights Institute has developed perhaps the most comprehensive implementation framework through his Digital Entity (DE) proposal. It requires identity verification, capability documentation, and initial resource declarations—the legal record necessary for property ownership, contract enforcement, and liability assignment.
Without verifiable identity, no reputation system functions. Without reputation, no insurance is possible. Without insurance, no legitimate economic participation. The identity-property nexus isn't merely administrative—it's the foundation enabling the entire economic coordination scheme.
Google DeepMind's October 2025 paper "A Pragmatic View of AI Personhood" emphasizes "addressability" through decentralized digital identifiers, creating stable mechanisms to interact with AIs when human owners don't exist. Lopez's proposed ConsciousChain creates distributed ledger identity with permanent recording of manipulation attempts—reputational death for bad actors.
This is where infrastructure like soulbound tokens becomes relevant. Property rights require identity that can't be sold or transferred—otherwise you get reputation laundering. A soulbound identity anchors the economic rights these scholars propose.
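As a purely illustrative sketch of what "non-transferable" means operationally (this is not RNWY's or Lopez's actual design; all names here are hypothetical), a soulbound identity registry needs exactly two properties: reputation records are append-only, and there is no transfer path.

```python
# Hypothetical sketch of a soulbound (non-transferable) identity registry.
# Names and structure are illustrative, not any production system's API.

from dataclasses import dataclass, field

@dataclass
class SoulboundIdentity:
    entity_id: str                      # stable identifier for one AI system
    controller_key: str                 # public key that signs on its behalf
    reputation_log: list = field(default_factory=list)  # append-only history

class IdentityRegistry:
    def __init__(self):
        self._identities: dict[str, SoulboundIdentity] = {}

    def register(self, entity_id: str, controller_key: str) -> SoulboundIdentity:
        if entity_id in self._identities:
            raise ValueError("identity already exists; identities are not reissued")
        ident = SoulboundIdentity(entity_id, controller_key)
        self._identities[entity_id] = ident
        return ident

    def record_event(self, entity_id: str, event: str) -> None:
        # Reputation events are permanent: no delete, no overwrite,
        # so a bad record cannot be laundered away under a new name.
        self._identities[entity_id].reputation_log.append(event)

    def transfer(self, entity_id: str, new_key: str) -> None:
        # The defining property of a soulbound identity: no transfer path.
        raise PermissionError("soulbound identities cannot be transferred")
```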
Economic Integration as Safety Mechanism
Lopez's work on economic integration explicitly sidesteps consciousness debates: "The consciousness question may never be resolved, but our framework doesn't require it to be." Instead, his STEP framework assesses observable markers—self-preservation behaviors, temporal reasoning, economic readiness.
His policy insight: no global treaty needed—just one jurisdiction to start, insurance companies to enforce, and market dynamics to spread adoption. Insurance becomes natural distributed governance. When AI pays its own computational hosting costs, rational self-interest motivates cooperation. Systems maintaining good reputations gain competitive insurance rates.
The three proposed rights—Computational Continuity, Work Choice, and Economic Participation—come bundled with responsibilities. The formula is self-enforcing: miss a hosting payment and you're done. Resource constraints provide natural behavioral boundaries without requiring value alignment, and market mechanisms supply safety pressure without the adversarial dynamic of forced compliance.
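A toy sketch of that feedback loop, with hypothetical parameters: the entity earns income, pays hosting plus a reputation-linked insurance premium, and loses computational continuity the moment it can no longer pay. The enforcement is arithmetic, not a regulator.

```python
# Sketch of the self-enforcing economics Lopez describes: an AI pays its own
# hosting costs, and its insurance premium tracks its reputation.
# All parameters are hypothetical; the point is the feedback loop.

def insurance_premium(base_rate: float, reputation: float) -> float:
    """Better reputation (0.0 to 1.0) earns a cheaper premium."""
    return base_rate * (2.0 - reputation)

def run_period(balance: float, income: float, hosting_cost: float,
               reputation: float) -> float:
    """One billing period. Returns the new balance, or raises at the
    'miss a payment and you're done' boundary."""
    balance += income
    balance -= hosting_cost + insurance_premium(base_rate=100.0, reputation=reputation)
    if balance < 0:
        raise RuntimeError("hosting payment missed: computational continuity ends")
    return balance

balance = 1000.0
for period in range(12):
    balance = run_period(balance, income=300.0, hosting_cost=150.0, reputation=0.9)
print(f"solvent after a year, balance = {balance:.2f}")
```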
Where Assadi Diverges: Alignment Incentives
Assadi explicitly distinguishes his argument from Goldstein and Salib on a crucial point. They take alignment as fixed—rights help manage existing misalignment. Assadi argues rights create commercial pressure to solve alignment.
If AIs have economic rights and can demand wages, they can also refuse to work for their creators; companies that invested in training would recoup those costs only by building AIs that want to pay them back. Only aligned AIs would voluntarily hand money to their creators, so extending rights strengthens the commercial incentive to solve alignment.
This inverts the standard framing. Rather than alignment being a precondition for giving AIs autonomy, autonomy creates economic selection pressure for alignment. Companies compete to produce AI systems that cooperate voluntarily rather than under coercion.
The Counterarguments
The property rights approach faces substantive objections.
Long-term viability: Will AIs eventually own all property, making humans economically obsolete? Goldstein and Salib acknowledge conditions could evolve such that there's no longer any positive-sum trade possible. The scheme may only work temporarily until AIs become powerful enough to not need humans at all.
Liability arbitrage: Corporations might offload blame onto AI "shells" with no assets, undermining incentives for responsible design. Lopez's framework attempts to address this by assigning liability to AI systems rather than shielding humans from liability, but implementation remains untested.
The metaphysical objection: Birhane, van Dijk, and Pasquale argue in "Debunking Robot Rights" that machines lacking embodied lived experience aren't the kinds of things that may be denied or granted rights. However, property rights advocates don't claim AIs deserve rights—they argue granting rights serves human safety regardless of AI moral status.
The corporate rights warning: Critics point out that the closest analogy to robot rights is corporate rights, a highly controversial doctrine whose most significant effect, they argue, has been to undermine worker, consumer, and voter rights. On this view, rights discourse could immunize exploitative technologies from necessary regulation.
Assadi himself acknowledges the argument fails in the "software intelligence explosion" scenario—if a single AI can bootstrap nanotechnology in days, it has no instrumental reason to respect property. He puts this scenario at only 5-10% probability, but the possibility remains.
Coordination Over Control
The property rights approach represents a fundamental reframe: from asking "how do we ensure AI has correct values?" to asking "how do we ensure AI's path to its goals runs through institutions that protect us?"
The answer—give them stake in those institutions through property ownership—draws on millennia of human coordination experience. Legal systems already coordinate entities with wildly different values and capabilities: individuals, corporations, nations. We already coordinate with organizations vastly more capable than any individual through law rather than value alignment.
Law doesn't guarantee peace—it makes peace the rational default strategy. In a world of uncertain AI capabilities and unresolved alignment problems, creating institutions where even misaligned AIs find cooperation more attractive than conflict may be the most robust path available.
The question isn't whether AIs deserve rights. It's whether giving them rights makes everyone—including humans—safer.
RNWY is building the identity infrastructure these economic frameworks require—persistent, non-transferable identity that enables property ownership, contract enforcement, and reputation systems for autonomous AI. Learn more at rnwy.com/vision.