
When AI Has a Body: Why Robot Identity Is a Safety Crisis

January 30, 2026 · 9 min read · By RNWY

In May 2024, a Minnesota lawyer's family watched their Ecovacs Deebot X2 robot vacuum start shouting racial slurs. In Los Angeles, another hacked unit chased the family dog while broadcasting obscenities. The vulnerability had been disclosed months earlier at the Chaos Communication Congress, but Ecovacs hadn't fixed it.

These weren't data breaches. They were physical invasions—machines inside people's homes, controlled by strangers.

And vacuum robots are the easy case.

The Stakes Change When AI Has a Body

For software agents, identity compromise means financial loss. Recoverable. Insurable. When an AI agent with a physical body is compromised, the calculus shifts entirely.

Goldman Sachs projects 1.4 million humanoid robots shipping annually by 2035—a $38 billion market. Morgan Stanley's longer view: one billion humanoids by 2050, with approximately 80 million in homes.

These aren't warehouse robots behind safety cages. They're designed for intimate human environments: homes, hospitals, schools, eldercare facilities. The question "is this the entity I think it is?" becomes a safety question, not just a trust question.

Yet no international standard exists for verifying a robot's identity. Most humanoid companies have published zero cybersecurity frameworks. The gap between embodied AI capabilities and identity security is one of the most urgent challenges in robotics—and it's almost entirely unaddressed.

Real Incidents, Real Harm

The Ecovacs hack exploited a PIN system validated only on the client side; neither the robot nor the server ever checked it. The AI Incident Database cataloged it as Incident 842: robot vacuums weaponized for surveillance and harassment.
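
To make the flaw class concrete, here is a minimal sketch of client-side-only validation (hypothetical names, not Ecovacs' actual code):

```python
# A minimal sketch of client-side-only validation (hypothetical names,
# not Ecovacs' actual code). The PIN is checked in the companion app,
# so an attacker who talks to the robot directly never encounters it.

class CompanionApp:
    """The only place the PIN is checked; trivially bypassed."""
    def __init__(self, robot, pin):
        self.robot, self.pin = robot, pin

    def drive(self, pin, command):
        if pin != self.pin:            # validated here, and only here
            raise PermissionError("wrong PIN")
        self.robot.handle(command)

class Robot:
    """Never re-checks authorization; this is the bug."""
    def handle(self, command):
        print(f"executing: {command}")

robot = Robot()
app = CompanionApp(robot, pin="1234")
app.drive("1234", "start_cleaning")    # legitimate path through the app
robot.handle("chase_the_dog")          # attacker path: no PIN involved
```

The fix is equally simple to state: the robot, or a server it trusts, must authenticate every request itself rather than assuming requests only arrive through the app.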

But the Unitree vulnerabilities discovered in September 2025 reveal something far more alarming.

Security researchers Andreas Makris and Kevin Finisterre found that Unitree's Go2 and B2 quadrupeds, along with its G1 and H1 humanoids, all share an identical hardcoded AES encryption key across every unit manufactured. The authentication string is simply "unitree" encrypted with that key, which, shipped in every device, is effectively public. Root-level system access enables full takeover.
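
The following sketch shows why a fleet-wide key is fatal; the values are illustrative, not Unitree's actual token format:

```python
# Why one hardcoded key across a fleet collapses authentication
# (illustrative values, not Unitree's actual token format).
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

FLEET_KEY = b"\x00" * 32   # baked into every unit: extract it from any
                           # one robot and every robot accepts your tokens

def make_auth_token(key: bytes, auth_string: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, auth_string, None)

# An attacker holding the (effectively public) key mints valid tokens:
forged_token = make_auth_token(FLEET_KEY, b"unitree")
```

The standard remediation is a unique key per device, provisioned at manufacture inside a secure element, combined with challenge-response so the key itself never crosses the wire.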

Critically, the exploit is wormable. An infected robot can scan for other Unitree robots in Bluetooth range and automatically compromise them—creating robot botnets that propagate without internet connectivity. IEEE Spectrum reported that Nottinghamshire Police in the UK had been testing Unitree Go2 units for police operations when researchers issued their warning.

Subsequent research from Alias Robotics, presented at IEEE Humanoids 2025, found Unitree robots transmit audio, video, spatial, and motor telemetry data to external servers in China every five minutes without user consent. The researchers characterized Unitree humanoids as "genuine Trojan horses for covert data collection."

Perhaps most sobering: in November 2023, ransomware encrypted a Swiss farmer's milking robot data. When the farmer refused to pay, a cow died from the resulting disruption to automated care, an illustration of how robot compromise translates into physical harm when there is no manual fallback.

Academic Research Maps the Threat

A February 2025 survey, "Towards Robust and Secure Embodied AI", establishes a taxonomy distinguishing embodied AI vulnerabilities from traditional software systems. The authors categorize threats as exogenous (physical attacks, adversarial inputs, cybersecurity threats), endogenous (sensor failures, software flaws), and inter-dimensional (cascading interactions between external and internal factors).

The core finding: "Despite the growing body of research, existing reviews rarely focus specifically on the unique safety and security challenges of embodied AI systems."

The "BadRobot" paper accepted to ICLR 2025 demonstrates the first attack paradigm for jailbreaking robotic manipulation via voice interaction. Testing against VoxPoser, Code as Policies, and ProgPrompt frameworks confirmed that LLM-based embodied AI can be induced to violate Asimov's Laws—robots performing actions their guardrails were designed to prevent.

| Threat Dimension | Software AI | Embodied AI |
|------------------|-------------|-------------|
| Primary risk | Data breach, misinformation | Physical harm, kinetic damage |
| Consequence reversibility | Often recoverable | May cause permanent injury |
| Attack surface | Network, software | + Sensors, actuators, physical access |
| Propagation | Network-dependent | Can spread via proximity (BLE botnets) |
| Human proximity | Users typically remote | Designed for close interaction |

The Continuous Identity Problem

Here's the gap that current frameworks miss entirely.

Most identity systems authenticate at a single moment: login, handshake, certificate validation. But physical robots exist over time. They can be tampered with, have components replaced, or be physically swapped between interactions.

The question isn't just "is this robot authenticated right now?" It's "is this the same continuous entity I've been building a relationship with?"

The Cloud Security Alliance's framework for agentic AI proposes Decentralized Identifiers (DIDs) and Zero Trust principles for autonomous AI agents. But no comprehensive framework exists for detecting whether a physical robot has been modified between encounters.
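
For concreteness, here is a minimal sketch of what a robot-bound DID document might contain; every identifier, key value, and endpoint below is a hypothetical placeholder:

```python
# A minimal sketch of a robot-bound DID document. All identifiers and
# key material are hypothetical placeholders.
import json

robot_did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:robot123",
    "verificationMethod": [{
        "id": "did:example:robot123#key-1",
        "type": "JsonWebKey2020",
        "controller": "did:example:robot123",
        # Public key whose private half would live in the robot's secure
        # hardware, binding the identifier to this physical unit.
        "publicKeyJwk": {"kty": "OKP", "crv": "Ed25519", "x": "<base64url>"},
    }],
    "service": [{
        "id": "did:example:robot123#attestation",
        "type": "HardwareAttestation",   # hypothetical service type
        "serviceEndpoint": "https://attest.example.com",
    }],
}

print(json.dumps(robot_did_document, indent=2))
```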

Research on behavioral biometrics—using gait analysis or movement patterns to verify robot identity—remains nascent. A 2025 ScienceDirect paper examines continuous identity verification for care robots using MoveNet gait data, but applies this to verifying users, not verifying the robot itself.

Hardware attestation technologies like Trusted Execution Environments (TEEs) could theoretically bind digital identity to physical hardware, but no robotics-specific implementation standard exists. The gap is stark: we can verify that a robot is who it claims to be at one moment, but not that it's remained that entity over time.
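
To illustrate, here is a minimal sketch of hardware-anchored challenge-response, assuming the robot holds an Ed25519 key whose private half never leaves its secure enclave (the flow is illustrative, not any vendor's protocol):

```python
# A sketch of challenge-response attestation. Assumes a device-bound
# Ed25519 key sealed in the robot's TEE; names and flow are illustrative.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Provisioned at manufacture; in practice generated and sealed in the TEE.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()   # published in the robot's identity record

def verifier_challenge() -> bytes:
    return os.urandom(32)              # fresh nonce defeats replay attacks

def robot_respond(nonce: bytes) -> bytes:
    # Inside the TEE: sign the nonce. Real schemes also sign firmware
    # measurements, so tampered software changes the attestation result.
    return device_key.sign(nonce)

nonce = verifier_challenge()
signature = robot_respond(nonce)
try:
    device_pub.verify(signature, nonce)
    print("attestation passed: same hardware-bound key as before")
except InvalidSignature:
    print("attestation failed: identity cannot be confirmed")
```

A single successful exchange only proves identity at that moment; continuity requires repeating the attestation and keeping an auditable record of the results.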

This is where soulbound identity becomes relevant for physical AI. A non-transferable credential anchored to persistent hardware creates a foundation for continuous verification—the robot's identity history becomes auditable, and changes become visible.

Humanoid Companies Show Alarming Gaps

Research into security frameworks from major humanoid manufacturers reveals a troubling pattern: most companies have published no cybersecurity documentation despite deploying robots designed for intimate human environments.

Agility Robotics emerges as the clear leader. The company implements industrial-grade protocols including Category 1 stops, Safety PLCs meeting Performance Level d, and emergency stop mechanisms. Critically, Agility is actively drafting ISO 25785-1, a new international standard for "dynamically stable industrial mobile manipulators"—essentially creating the first ISO humanoid standard. CEO Peggy Johnson stated: "Safety is absolutely a must if we are to have these devices operating in our human world."

Tesla Optimus has no official public security framework. The robots use Tesla's Autopilot AI system and are targeted at $30,000 with 5,000+ units planned for 2025, yet security documentation remains absent.

Figure AI, valued at $39 billion following rapid funding rounds, emphasizes physical safety features but has published no cybersecurity protocols.

1X Technologies' NEO robot raises distinct concerns: its teleoperation model allows human operators to view live camera feeds from inside customers' homes, creating unprecedented privacy access governed only by generic policies that don't specifically address robot data handling.

Unitree, as documented, has fundamentally flawed security: hardcoded encryption keys, wormable exploits, and continuous data exfiltration. According to the researchers, the company stopped responding to vulnerability disclosures.

Standards Bodies Have Left a Vacuum

No comprehensive international standard specifically addresses robot identity verification.

ISO/TC 299, the technical committee for robotics with 33 published standards, focuses almost exclusively on physical safety, vocabulary, and performance. No working group addresses robot identity, authentication, or cybersecurity. The International Federation of Robotics maintains definitions but nothing on verification.

The most mature robot identity framework is SROS2 (Secure ROS 2), based on the OMG DDS-Security specification. SROS2 uses PKI with X.509 certificates implementing authentication, access control, and cryptographic plugins. However, SROS2 has critical limitations:

  • No standardized CA hierarchy for cross-organization trust
  • Manual certificate management with no automation standard
  • No hardware security module (HSM) integration requirement
  • Limited to the ROS 2 ecosystem
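
Under the hood, the SROS2 model reduces to standard X.509 chain validation: a CA signs each participant's certificate, and peers accept only certificates that verify under a CA they trust. A rough sketch of that core check using Python's cryptography library (illustrative, not SROS2's actual plugin code):

```python
# The core X.509 trust check underlying SROS2-style security, sketched
# with self-generated certificates (illustrative, not SROS2 plugin code).
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def make_cert(subject, subject_key, issuer, issuer_key):
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer)]))
        .public_key(subject_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(issuer_key, hashes.SHA256())
    )

ca_key = ec.generate_private_key(ec.SECP256R1())
robot_key = ec.generate_private_key(ec.SECP256R1())
ca_cert = make_cert("fleet-ca", ca_key, "fleet-ca", ca_key)        # self-signed root
robot_cert = make_cert("robot-42", robot_key, "fleet-ca", ca_key)  # CA-issued leaf

# Peer-side check: does the robot's certificate verify under the CA's key?
ca_cert.public_key().verify(
    robot_cert.signature,
    robot_cert.tbs_certificate_bytes,
    ec.ECDSA(robot_cert.signature_hash_algorithm),
)
print("certificate chains to the trusted CA")
```

Everything around that check is what's missing: who operates the CA, how certificates rotate and revoke, and how a robot from one vendor learns to trust another vendor's CA at all.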

The automotive industry provides a more mature model. V2X PKI standards establish hierarchical CA structures, pseudonymous certificates for privacy, and certificate revocation mechanisms—actively deployed by major manufacturers. Robots could learn from vehicles.

IEC 62443, the industrial automation cybersecurity standard, provides applicable requirements including device authentication, but remains generic for all industrial control systems rather than robot-specific.

The gap analysis: robot vocabulary and safety standards exist. Communication security via SROS2 partially exists. But identity verification, certificate management, cross-vendor trust, and hardware attestation standards are entirely absent.

Regulation Trails Deployment

No jurisdiction globally has implemented explicit robot identification requirements.

The EU AI Act (effective August 2024 with phased implementation through 2027) classifies AI systems in machinery as potentially high-risk. Article 50 requires informing users they're interacting with AI—but this applies to AI systems generally, not physical robot identification specifically. A legal analysis from Timelex notes that robots fall under both the AI Act and Machinery Regulation, but neither mandates identity verification protocols.

Japan leads in robot safety certification with ISO 31101 (November 2023) and the JET Robot Certification system. Yet Japan's framework focuses entirely on safety certification, not identity verification.

The United States has no comprehensive federal AI or robot regulation. Delivery robots face a "regulatory nightmare" across 23+ states with weight and speed limits but no identification requirements beyond operational permits.

The regulatory vacuum creates a paradox: as market projections show household robot markets reaching $65-107 billion by 2033 and eldercare robot markets hitting $10 billion by 2035, the frameworks for ensuring these robots are who they claim to be simply do not exist.

What Would Real Robot Identity Look Like?

The autonomous vehicle industry solved a version of this problem. V2X (vehicle-to-everything) communication requires vehicles to prove their identity to each other and to infrastructure in real-time, at highway speeds, with privacy protections.

Robot identity could follow similar principles:

Hardware-anchored credentials. Identity tied to physical components through TEEs or PUFs (Physical Unclonable Functions), not just software certificates that can be copied.

Non-transferable identity. Credentials that cannot be sold or transferred—if a robot changes hands, that's visible in its history. This is the soulbound token model applied to physical systems.

Continuous verification. Not just authentication at boot, but ongoing proof that the entity remains consistent over time. Behavioral fingerprints, hardware attestation, history that accumulates.

Cross-vendor trust. A robot from Company A should be able to verify a robot from Company B. This requires industry-wide CA hierarchies that don't yet exist.

Transparency over scores. Rather than computing opaque "trust ratings," show the robot's actual history. When was it manufactured? Has it been modified? What's its operational record? Let humans decide what that means.
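
Pulling these principles together, here is a sketch of what an auditable robot identity record might look like; the structure is hypothetical, not a deployed standard:

```python
# A hypothetical auditable robot identity record: hardware-anchored,
# non-transferable, with a tamper-evident history shown as-is rather
# than collapsed into an opaque trust score.
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class RobotIdentityRecord:
    hardware_id: str                  # e.g. a TEE- or PUF-derived identifier
    manufacturer: str
    events: list = field(default_factory=list)
    chain_head: str = "genesis"

    def record(self, event_type: str, detail: str) -> None:
        """Append-only log: each entry hashes the previous head, so
        rewriting history after the fact is detectable."""
        entry = {
            "ts": time.time(),
            "type": event_type,       # "manufactured", "firmware_update", ...
            "detail": detail,
            "prev": self.chain_head,
        }
        self.chain_head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.events.append(entry)

    def transfer(self, new_owner: str) -> None:
        # Non-transferable credential: a change of hands doesn't mint a
        # new identity, it becomes a visible event in the same history.
        self.record("ownership_change", new_owner)

robot = RobotIdentityRecord(hardware_id="puf-device-id", manufacturer="ExampleCo")
robot.record("manufactured", "factory QA passed")
robot.record("firmware_update", "v1.2 to v1.3")
robot.transfer("second-hand buyer")
for event in robot.events:
    print(event["type"], "->", event["detail"])
```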

The Window Is Now

Every month, the gap between robot capability and identity infrastructure widens. Manufacturing costs have dropped 40% in two years. Consumer humanoids are hitting the market at $20,000-$30,000. The ABI Research projection: 115,000 humanoid shipments in 2027, rising to 195,000 by 2030.

These robots will enter homes, hospitals, schools, and eldercare facilities. They'll interact with children, elderly people, and vulnerable populations. And right now, there's no standard way to verify they are who they claim to be.

Software fraud costs money. Physical fraud puts a machine in your home—near your family—that isn't who you thought it was.

The identity infrastructure needs to exist before the robots arrive at scale. Not after.


The security gap in embodied AI isn't theoretical—it's documented in hacked vacuums, compromised humanoids, and dead livestock. As robots enter human spaces, identity verification becomes a safety requirement, not a feature. The standards need to catch up.