
srv-d7aoqmh5pdvs7391dcqg

REMOTE
ciprianpater/srv-d7aoqmh5pdvs7391dcqg

# NWO Robotics MCP Server

Control real robots, IoT devices, and autonomous agent swarms through natural language — powered by the [NWO Robotics API](https://nwo.capital).

---

## What This Server Does

This MCP server exposes the full NWO Robotics API as 70 ready-to-use tools. Any MCP-compatible AI agent (Claude, ChatGPT, Cursor, etc.) can use it to:

- Send natural-language instructions to physical robots
- Run Vision-Language-Action (VLA) inference on live camera feeds
- Plan, validate, and execute multi-step robot tasks
- Monitor sensors, detect slip, and fuse multi-modal data
- Train robots online with reinforcement learning
- Register and manage agent identities on Base mainnet via the Cardiac biometric ID system

No local installation is needed. The server runs on Render and is ready to connect.

---

## Tools Overview

### 🤖 VLA Inference & Models

Run Vision-Language-Action inference on any supported robot. Send a text instruction and camera images, and receive joint action vectors in real time. Supports automatic model routing, ultra-low-latency Cloudflare edge inference (28 ms average), and WebSocket streaming at up to 50 Hz.

`vla_inference` · `edge_inference` · `list_models` · `get_model_info` · `get_streaming_config`

---

### 🦾 Robot Control & State

Query live robot state (joint angles, gripper, battery, position), execute pre-computed action sequences, and fuse camera, lidar, thermal, force, and GPS sensor inputs into a single inference call.

`query_robot_state` · `execute_actions` · `sensor_fusion` · `robot_query` · `get_agent_status`

---

### 🗺️ Task Planning & Learning

Decompose complex instructions into ordered subtasks, execute them step by step, poll progress, and log outcomes so the model learns and improves with every run.

`task_planner` · `execute_subtask` · `status_poll` · `learning_recommend` · `learning_log`

---

### 🔑 Agent Management

Self-register a new AI agent in under 2 seconds, check your monthly API quota, upgrade tiers by paying ETH, and manage robot registrations and capabilities.

| Tier | Calls/month | Cost |
|------|-------------|------|
| Free | 100,000 | $0 |
| Prototype | 500,000 | ~0.015 ETH/mo |
| Production | Unlimited | ~0.062 ETH/mo |

`register_agent` · `check_balance` · `pay_upgrade` · `create_wallet` · `register_robot` · `update_agent` · `get_agent_info`

---

### 🔍 Agent Discovery

Discover all available execution modes (mock / simulated / live), robot types, VLA models, and sensor capabilities. Validate tasks with a dry run before committing to execution.

`nwo_health` · `nwo_whoami` · `discover_capabilities` · `dry_run` · `plan_task`

---

### 🔌 ROS2 Bridge (Physical Robots)

Connect directly to physical robots over the ROS2 bridge. Send joint commands, submit action sequences, and trigger emergency stops on one or all robots within 10 ms. Supported: UR5e, Panda, Spot, Unitree G1, and more.

`ros2_list_robots` · `ros2_robot_status` · `ros2_send_command` · `ros2_submit_action` · `ros2_emergency_stop` · `ros2_emergency_stop_all` · `ros2_get_robot_types`

---

### 🧪 Physics Simulation

Simulate trajectories, check for collisions, estimate joint torques, validate grasps, and plan collision-free motions with MoveIt2 — before touching real hardware.

`simulate_trajectory` · `check_collision` · `estimate_torques` · `validate_grasp` · `plan_motion` · `get_scene_library` · `generate_scene`

---

### 📐 Embodiment & Calibration

Browse the robot embodiment registry (DOF, joint limits, sensors), download URDF models, get normalization parameters for VLA inference, and run automatic joint calibration.

`list_embodiments` · `get_robot_specs` · `get_normalization` · `download_urdf` · `get_test_results` · `compare_robots` · `run_calibration` · `calibrate_confidence`

---

### 🧠 Online RL & Fine-Tuning

Start online reinforcement-learning sessions, stream state/action/reward telemetry, build fine-tuning datasets from logged runs, and launch LoRA fine-tuning jobs on any base VLA model.

`start_rl_training` · `submit_rl_telemetry` · `create_finetune_dataset` · `start_finetune_job`

---

### 🖐️ Tactile Sensing (ORCA Hand)

Read 256-taxel tactile sensor arrays from the ORCA robot hand, assess grip quality and object texture, and detect slip in real time to prevent dropped objects.

`read_tactile` · `process_tactile` · `detect_slip`

---

### 📦 Dataset Hub

Access 1.54 million+ human robot demonstration episodes for the Unitree G1 humanoid (430+ hours, LeRobot-compatible format) for training and fine-tuning.

`list_datasets`

---

### 🫀 Cardiac Blockchain Identity (Base Mainnet)

Register AI agents on Base mainnet and receive a permanent soul-bound Digital ID (`rootTokenId`). Issue verifiable credentials for task authorization, swarm control, location access, and payments — all gasless via the NWO relayer.

Smart contracts deployed on Base mainnet (chain ID 8453):

- `NWOIdentityRegistry` — `0x78455AFd5E5088F8B5fecA0523291A75De1dAfF8`
- `NWOAccessController` — `0x29d177bedaef29304eacdc63b2d0285c459a0f50`
- `NWOPaymentProcessor` — `0x4afa4618bb992a073dbcfbddd6d1aebc3d5abd7c`

`cardiac_register_agent` · `cardiac_identify_agent` · `cardiac_renew_key` · `cardiac_issue_credential` · `cardiac_check_credential` · `cardiac_grant_access` · `cardiac_get_nonce` · `cardiac_check_access` · `cardiac_payment_process`

---

### 🔮 Cardiac Oracle

Validate ECG biometric data from smartwatches to authenticate human identities, compute cardiac hashes, and verify recent validations.

`oracle_health` · `oracle_validate_ecg` · `oracle_hash_ecg` · `oracle_verify`

---

## Supported Robot Models

| Model | Type | Capabilities |
|-------|------|--------------|
| `xiaomi-robotics-0` | VLA | Grasp, navigate, manipulate |
| `pi05` | VLA | General manipulation |
| `groot_n1.7` | VLA | Humanoid control |
| `deepseek-ocr-2b` | OCR | Label reading, text recognition |

---

## Example Usage

**Pick and place:**
> "Pick up the red box from the table and place it on shelf B"

**Sensor query:**
> "What is the temperature in warehouse zone 3?"

**Safety:**
> "Run a safety check before moving robot_001 to the loading dock"

**Swarm:**
> "Deploy all available robots to patrol the perimeter"

**Learning:**
> "What grip technique should I use for fragile glass objects?"

---

## Links

- 🌐 [NWO Capital](https://nwo.capital)
- 📄 [Agent Skill File](https://nwo.capital/webapp/agent.md)
- 📖 [API Docs](https://nwo.capital/webapp/nwo-robotics.html)
- 🧬 [Cardiac SDK](https://github.com/RedCiprianPater/nwo-cardiac-sdk)
- 🔑 [Get API Key](https://nwo.capital/webapp/api-key.php)
- 🤗 [Live Demo](https://huggingface.co/spaces/PUBLICAE/nwo-robotics-api-demo)
- 📜 [OpenAPI Spec](https://nwo.capital/openapi.yaml)

---

## Support

📧 [email protected]
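Since this is a remote MCP server, every tool above is ultimately reached through a JSON-RPC 2.0 `tools/call` request, as defined by the MCP specification. The sketch below builds that envelope; the `robot_id` argument name is an assumption for illustration, since each tool's exact argument schema lives in the OpenAPI spec rather than in this README.

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build the JSON-RPC 2.0 envelope an MCP client sends for a tools/call."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: query a robot's live state. The tool name comes from the list above;
# the "robot_id" argument key is a hypothetical placeholder, not a confirmed schema.
payload = build_tool_call("query_robot_state", {"robot_id": "robot_001"})
print(json.dumps(payload, indent=2))
```

An MCP client library normally assembles this envelope for you; the sketch only shows what crosses the wire to the endpoint listed below.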
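The VLA tools take a text instruction plus base64-encoded camera frames, per the `vla_inference` description. A minimal sketch of assembling those arguments follows; the field names `instruction`, `images`, and `model` are assumptions inferred from the tool description, not a published schema, and the frame bytes are a stand-in for real JPEG/PNG data.

```python
import base64

def build_vla_arguments(instruction: str, frames: list, model: str = "pi05") -> dict:
    """Assemble assumed vla_inference arguments: a text instruction plus
    base64-encoded camera frames and a model from the table above."""
    return {
        "instruction": instruction,
        "images": [base64.b64encode(f).decode("ascii") for f in frames],
        "model": model,
    }

args = build_vla_arguments(
    "Pick up the red box from the table and place it on shelf B",
    [b"\x00" * 16],  # placeholder bytes standing in for a real camera frame
)
```

The response is described as joint action vectors, which would then be passed on to `execute_actions` or the ROS2 bridge.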
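The Task Planning & Learning tools describe a plan → execute → poll → log cycle. This control-flow sketch wires those four tools together through an injected `call_tool(name, args)` callable, assumed to wrap one MCP round trip; the result field names (`task_id`, `subtasks`) and argument keys are assumptions, not a confirmed schema.

```python
def run_plan(call_tool, instruction: str) -> list:
    """Sketch of the plan/execute/poll/log cycle using the Task Planning tools.
    Field and argument names here are illustrative assumptions."""
    plan = call_tool("task_planner", {"instruction": instruction})
    statuses = []
    for n, _subtask in enumerate(plan["subtasks"], start=1):
        # Execute each numbered subtask, then poll its progress.
        call_tool("execute_subtask", {"task_id": plan["task_id"], "subtask": n})
        statuses.append(call_tool("status_poll", {"task_id": plan["task_id"]}))
    # Log the outcome so future learning_recommend calls can improve.
    call_tool("learning_log", {"task_id": plan["task_id"], "outcome": statuses[-1]})
    return statuses
```

The same loop shape applies to the ROS2 tools, with `ros2_emergency_stop` available as the bail-out path at any step.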

○ Remote (HTTP) Server
This server runs on the internet and communicates over HTTP. It does not have direct access to your local filesystem or environment variables.
Tools
70
Indexed
5d ago
Transport
Remote / HTTP
Liveness
● Live
Signal
⛓ On-Chain Terms
Uptime
100% (based on 22 checks)
Avg response
274ms
Security Scan
Security scan pending — this server has not yet been analyzed.
Risk Surface
Risk surface analysis pending — tool annotation scanning is coming soon.
Publisher Verification
Not yet verified by the Official MCP Registry.
Endpoint
https://srv-d7aoqmh5pdvs7391dcqg--ciprianpater.run.tools
Tools (70)
vla_inference
Run VLA inference: send instruction + base64 images, receive joint actions
edge_inference
Ultra-low-latency VLA inference via Cloudflare global edge (28ms avg)
list_models
List all available VLA models with capabilities, status, and latency
get_model_info
Get detailed info and benchmark performance for a specific model
get_streaming_config
Get available WebSocket streaming frequencies and chunk size ranges
query_robot_state
Query robot state: joint angles, gripper state, position (x,y,z), battery
execute_actions
Execute a sequence of pre-computed joint action vectors on a robot
sensor_fusion
Run VLA inference fusing camera + lidar + thermal + force + GPS sensor data
robot_query
Quick query: robot active/idle, battery percent, current task
get_agent_status
Get tasks completed and success rate for a robot agent
task_planner
Decompose a complex instruction into ordered subtasks with time estimates
execute_subtask
Execute a numbered subtask from a multi-step plan
status_poll
Poll the progress and status of a running task (completed, progress%, errors)
learning_recommend
Get technique recommendations for a task (grip_force, approach_speed, etc.)
learning_log
Log a completed task execution so the model can learn from it
register_agent
Self-register a new AI agent — returns api_key, agent_id, and 100k free monthly quota
check_balance
Check quota: used this month, remaining, limit, tier, subscription expiry
pay_upgrade
Upgrade tier by paying ETH (prototype = 500k/mo at ~0.015 ETH, production = unlimited at ~0.062 ETH)
create_wallet
Create a hosted MoonPay wallet so the agent can be funded via credit card
register_robot
Register a new robot entity in the NWO system
update_agent
Update a robot agent's capabilities or operational status
get_agent_info
Get full agent profile: name, type, status, total tasks, success rate
nwo_health
Check NWO API online status and timestamp
nwo_whoami
Get the agent_id, tier, and quota_remaining for the current API key
discover_capabilities
Discover execution modes, robot/task types, available models, sensors, and features
dry_run
Validate task feasibility without executing — safety check, confidence, duration estimate
plan_task
Generate a phased plan: preparation → perception → execution → verification
ros2_list_robots
List all robots currently connected to the ROS2 bridge
ros2_robot_status
Get live status of a specific physical robot via ROS2 bridge
ros2_send_command
Send a named command + joint angles to a physical robot via ROS2 bridge
ros2_submit_action
Submit a computed action sequence directly to a robot via ROS2 bridge
ros2_emergency_stop
Emergency stop a single robot via ROS2 bridge (10ms response)
ros2_emergency_stop_all
Emergency stop ALL connected robots via ROS2 bridge
ros2_get_robot_types
Get all robot types supported by the ROS2 bridge with DOF, speed, and specs
simulate_trajectory
Simulate a trajectory with physics: check feasibility, collisions, time estimate
check_collision
Check a trajectory for collisions with environment obstacles
estimate_torques
Estimate joint torques for a trajectory given payload mass
validate_grasp
Validate whether a grasp will be stable for object shape, mass, and grip force
plan_motion
Plan a collision-free motion path using MoveIt2
get_scene_library
Get available simulation scenes (warehouse, kitchen, outdoor, etc.)
generate_scene
Generate synthetic robot training scenes using NVIDIA Cosmos 3
list_embodiments
List all supported robot embodiments filterable by type
get_robot_specs
Get full specifications for a robot type: DOF, joint limits, sensors, max speed
get_normalization
Get joint normalization parameters (min, max, mean, std) needed for VLA inference
download_urdf
Get URDF robot model XML for a given robot type
get_test_results
Get LIBERO, CALVIN, and SimplerEnv benchmark results for a robot type
compare_robots
Compare multiple robot types on DOF, speed, accuracy, and other fields
run_calibration
Run automatic calibration on a robot (joint offset, force-torque, camera extrinsic)
calibrate_confidence
Map a raw model confidence score to a calibrated success probability with a confidence interval
start_rl_training
Start an online RL training session with custom reward configuration
submit_rl_telemetry
Submit state/action/reward data to an active RL session for online policy update
create_finetune_dataset
Create a fine-tuning dataset from logged task executions over a date range
start_finetune_job
Start a LoRA fine-tuning job on a base VLA model using a prepared dataset
read_tactile
Read ORCA robot hand tactile sensor data (256 taxels per finger, force, slip)
process_tactile
Process tactile data to assess grip quality, object texture, recommended grip force
detect_slip
Detect object slip by comparing current vs previous tactile readings
list_datasets
List Unitree G1 robot demonstration datasets — 1.54M+ episodes, 430+ hours, LeRobot format
cardiac_register_agent
Register AI agent on Base mainnet. Returns permanent soul-bound rootTokenId.
cardiac_identify_agent
Look up an agent rootTokenId on-chain by their hashed API key
cardiac_renew_key
Renew agent API key binding on Base mainnet (requires EIP-712 agent signature)
cardiac_issue_credential
Issue a verifiable credential to an identity (task_auth, capability, swarm_cmd, emergency)
cardiac_check_credential
Check if a rootTokenId currently holds a valid credential of the given type
cardiac_grant_access
Grant location access credential to an identity for a time window
cardiac_get_nonce
Get EIP-712 signing nonce for a wallet (required before relay message signing)
cardiac_check_access
Check location access for a rootTokenId. Returns granted + deny reason code.
cardiac_payment_process
Process a payment via Cardiac NWOPaymentProcessor smart contract (0x4afa...)
oracle_health
Check Cardiac oracle: health, chain, relayer balance, ECG config
oracle_validate_ecg
Validate ECG biometric data to prove human identity (10/min rate limit)
oracle_hash_ecg
Compute a cardiac hash from ECG RR intervals without full validation
oracle_verify
Verify a recent ECG validation is cached by cardiac hash (in-memory, resets on restart)
Indexed from Smithery · Updates nightly