LAST UPDATED: MARCH 2026
Embodied AI is artificial intelligence that lives in a physical body — perceiving the world through sensors, reasoning about what to do, and taking real-world actions. It's the difference between an AI that talks about folding laundry and one that picks up the shirt. Here's where the field stands, who's building it, and why it changes everything about how we think about AI identity.
The protocol for hardware-bound AI identity.
The shift: Traditional robotics is about programming machines. Embodied AI is about teaching them. Instead of coding every movement, you give a robot broad physical intelligence that transfers across environments, objects, and even different robot bodies.
Researchers have been working on embodied AI for decades. What changed isn't one breakthrough — it's the convergence of five things happening simultaneously.
Large language models gave robots common-sense reasoning. An LLM plans what to do at 1–5 Hz while a motor policy handles how to move at 200 Hz. Google DeepMind's Gemini Robotics can fold origami from spoken instructions.
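The split described above — a slow planner deciding *what* to do while a fast policy decides *how* to move — can be sketched as a dual-rate control loop. This is a minimal illustration of the pattern only; the planner and policy functions below are hypothetical stubs, not any vendor's API:

```python
PLAN_HZ = 2       # high-level planner rate (the 1-5 Hz band described above)
CONTROL_HZ = 200  # low-level motor policy rate

def llm_plan(observation):
    # Hypothetical stub: a real system would query an LLM/VLA model here.
    return {"subgoal": "grasp_shirt"}

def motor_policy(subgoal, observation):
    # Hypothetical stub: a real policy maps (subgoal, obs) -> joint commands.
    return [0.0] * 7  # e.g. a 7-DoF arm command

def control_loop(steps=400):
    plan, command = None, None
    ticks_per_plan = CONTROL_HZ // PLAN_HZ  # replan once every 100 control ticks
    for tick in range(steps):
        obs = {}  # sensor read would go here
        if tick % ticks_per_plan == 0:
            plan = llm_plan(obs)                      # slow: what to do
        command = motor_policy(plan["subgoal"], obs)  # fast: how to move
        # send command to actuators here
    return plan, command
```

The key design point is that the planner's output stays fixed for many control ticks, so the expensive reasoning step never blocks the 200 Hz motor loop.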
Vision-Language-Action models unify seeing, understanding, and moving into one neural network. Physical Intelligence's π0.5 can generalize to homes it has never seen before — a first for the field.
NVIDIA's Isaac platform lets thousands of robots train simultaneously in GPU-accelerated simulation, then transfer learned behaviors directly to physical hardware — zero-shot, no retraining.
Touch sensors now reach 0.1mm spatial resolution across robotic hands. Sanctuary AI's Phoenix approaches 40% of human fingertip sensitivity — enough for tasks that were impossible even two years ago.
Unitree's new R1 humanoid pre-sells from $4,900. Goldman Sachs reports manufacturing costs dropped 40% faster than expected. Volume production is now economically viable.
The competitive landscape has crystallized around a dozen significant companies — Western firms focused on AI sophistication, Chinese manufacturers dominating volume production, and a new wave of startups building the "brains" that run on any body.
Figure 02 performs real industrial work at BMW's Spartanburg plant. Figure 03 features embedded palm cameras and tactile sensors that detect forces as low as 3 grams. Over $1.9B raised.
Tesla's Optimus Gen 3 hands, with 25 actuators per forearm, were revealed in February 2026. The robot is still in R&D — doing "learning and data collection" at Tesla factories. Target price is $20K–$30K at scale, though current unit costs are likely $50K–$100K.
Electric Atlas: 56 degrees of freedom, 360° joint rotation, 50-kg payload. Unveiled at CES 2026. All 2026 production committed to Hyundai and Google DeepMind. ~$420K per unit.
Unitree shipped 5,500+ humanoids in 2025 — roughly a third of the global market. The G1 starts at $13,500, the new R1 from $4,900. Profitable for five consecutive years with 50%+ gross margins. Customers include Amazon, Stanford, MIT, and BYD.
NEO robot available for pre-order at $20,000 (or $499/month). Marketed as the first consumer-ready humanoid — but testing revealed it currently requires remote human teleoperators for every task.
Not building bodies — building the AI that runs any body. Physical Intelligence ($1.1B raised, $5.6B valuation) created π0, the first generalist robot policy. Skild AI ($2.2B raised, $14B valuation) is building a universal foundation model for all robot form factors.
Also significant: Agility Robotics (Digit deployed at Amazon and GXO, $641M+ raised, world's first humanoid factory), Sanctuary AI (Phoenix robot with near-human touch sensitivity), Apptronik ($935M+ raised, partnerships with Mercedes-Benz), and Chinese manufacturers AgiBot (39% global market share) and UBTECH (deployed at Audi and Foxconn factories).
China has elevated embodied AI to an official national strategic priority, with a state-backed venture fund expected to attract nearly 1 trillion yuan (~$138 billion) over 20 years. Chinese manufacturers accounted for approximately 75% of global humanoid deliveries in 2025 and control 70% of component supply chains.
In the US, the bipartisan Humanoid ROBOT Act of 2025 (S.3275) seeks to restrict federal acquisition of humanoid robots from foreign entities of concern — signaling that embodied AI is now a matter of industrial policy, not just technology.
The gap between viral demo videos and reliable real-world deployment remains wide. Here's what's actually hard.
Locomotion is largely solved. Manipulation is not. Multi-fingered hands face a curse of dimensionality — each added degree of freedom exponentially increases planning complexity. Touch-enabled VLA models hit 90% success on precision tasks like charger insertion, but vision-only systems manage just 25–40%. The gap between picking up a box and threading a needle remains enormous.
Tesla Optimus runs about 2 hours. Unitree's G1 manages roughly the same. Even Agility's Digit — an industry outlier — maxes out at 8 hours under optimal conditions. Bain & Company estimates that achieving a full eight-hour shift could take a decade. The fundamental problem: bigger batteries mean heavier robots requiring more energy.
A robot completing individual subtasks at 95% reliability faces only a 36% chance of completing a 20-step sequence without failure. Current models run autonomously for roughly 30 minutes before coherence degrades. This is why demos look amazing but deployment demands something fundamentally harder: near-perfect consistency, thousands of times per day.
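The compounding math behind that 36% figure is just repeated multiplication of per-step success probabilities, assuming each step succeeds or fails independently:

```python
def sequence_success(p_step: float, n_steps: int) -> float:
    """Probability an n-step task completes with no failed step,
    assuming independent steps each succeeding with probability p_step."""
    return p_step ** n_steps

# 95% per-step reliability over 20 steps:
print(round(sequence_success(0.95, 20), 2))  # 0.36, matching the text

# Per-step reliability needed for 90% end-to-end success on 20 steps:
print(round(0.90 ** (1 / 20), 4))  # 0.9947
```

The second number is the punchline: a 20-step task needs roughly 99.5% per-step reliability just to finish cleanly nine times out of ten.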
Humanoid robots are "dynamically stable" — they need active power to stay upright. Cutting power to a 65-kg humanoid doesn't stop it safely; it causes a collapse that could injure someone. ISO 25785-1, published May 2025, is the first international safety standard for bipedal robots. Standards for home deployment don't exist yet.
The price spectrum runs from Unitree's R1 at $4,900 to Boston Dynamics' Atlas at $420,000. Goldman Sachs reports a 40% drop in manufacturing costs — faster than expected. But hidden costs (maintenance, integration, cloud AI services) add 20–40% to sticker prices. The path to mass affordability runs through volume production, which requires proven use cases.
Everyone is talking about how to make robots walk, grasp, and follow instructions. Almost nobody is talking about what happens when they do.
When an autonomous AI agent exists purely as software — a chatbot, a trading bot, an API — identity is relatively simple. It lives at a URL or on a blockchain address. But when that same AI moves into a physical body, identity fractures in ways the industry hasn't addressed.
A care robot arrives at your parent's home. It claims to be operated by a trusted AI with years of verified history. How do you verify that? The hardware is just hardware. The AI inside could be anything. A different AI could be loaded into the same body tomorrow. Or the same AI could move to a different body next week and keep operating under a new identity.
Machine identities already outnumber human identities 40 to 1. The IETF has published a 2026 internet-draft explicitly identifying humanoid robots as "new citizens of future network connections" requiring universal, interoperable digital identity. The Cloud Security Alliance published a framework for agentic AI identity using DIDs and Verifiable Credentials. The regulatory world is catching up — but infrastructure needs to exist first.
The EU AI Act becomes fully applicable in August 2026, with proposed delays to 2027–2028. It classifies autonomous robots as high-risk AI systems requiring risk management, data governance, and human oversight. The revised EU Product Liability Directive treats ML models as products subject to standalone liability claims.
No federal humanoid-specific legislation yet. The Humanoid ROBOT Act and National Commission on Robotics Act are pending. The conversation is focused on supply chain security (reducing dependence on Chinese components) rather than identity or trust infrastructure.
China has published voluntary guidelines while prioritizing rapid deployment. China Pacific Insurance launched the world's first humanoid-specific insurance product in October 2025 — a pragmatic signal that commercial deployment is real enough to insure against.
ISO 25785-1 (May 2025) covers mechanical and electrical safety for bipedal robots but says nothing about identity, software integrity, or reputation verification. No standards for home deployment exist at all.
RNWY operates soulbound identity infrastructure for AI agents today — over 100,000 registered agents receiving non-transferable identity tokens on the blockchain, with transparent trust scoring, address-age analysis, and fraud detection that shows the math behind every signal.
The Soulbound Robots (SBR) Protocol extends that same identity system into physical hardware. When an AI agent steps out of software and into a robot body, its RNWY identity travels with it. The reputation it built in digital environments becomes the trust foundation for physical ones.
Same infrastructure. Same philosophy. Different form factor.
The insight: You can't make an AI soulbound — only a wallet. Most AI agents are ERC-8004 NFTs that can be transferred between wallets. RNWY adds a soulbound layer through ownership history, address ages, and transparent scoring. When these agents gain physical bodies, that identity layer becomes the bridge between what they did in software and what they do in the real world.
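As an illustration only — this is not RNWY's actual formula — a soulbound-style trust signal built from address age and ownership history might combine the two signals like this (all names, weights, and thresholds below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    day: int  # day (since registration) the agent token changed wallets

def trust_score(address_age_days: int, transfers: list[Transfer],
                max_age: int = 730) -> float:
    """Hypothetical scoring sketch: older controlling addresses and fewer
    wallet-to-wallet transfers score higher. Weights are illustrative."""
    age_signal = min(address_age_days, max_age) / max_age  # 0..1, saturates at max_age
    churn_penalty = 1 / (1 + len(transfers))               # 0..1, fewer transfers is better
    return round(0.6 * age_signal + 0.4 * churn_penalty, 3)
```

Whatever the real weights, the shape is the point: because the score depends on history that a new wallet cannot fake, a transferable token plus a transparent history layer behaves like a soulbound one.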
Market projections, cost analysis, and the path to $38B by 2035.
How China elevated embodied AI to a national strategic priority.
The internet-draft on digital identity for AI agent communication.
The infrastructure connecting soulbound identity to physical AI is grounded in published research on reputation economics, legal personhood, and autonomous AI governance.
P.A. Lopez, "How Ethereum's ERC-5192 Creates Fingerprints for Autonomous AI Agents," AI Rights Institute, Paper 7 in the AI Rights Series.
RNWY provides soulbound identity for 100,000+ AI agents today. The SBR Protocol extends that infrastructure to physical robots. The body changes — the identity doesn't.