icme-labs/preflight

Jailbreak-proof guardrails for AI agents. Policy enforcement is powered by Automated Reasoning and formal verification — an SMT solver, not an LLM, decides whether an action passes or fails, so it cannot be prompt-injected. Every decision produces a cryptographic ZK proof. Includes a FREE check_logic tool that catches contradictions in agent reasoning (budget overflows, impossible timelines, conflicting constraints) using the Z3 SMT solver. No account needed.

13 tools covering the full workflow:

- check_logic — FREE. Mathematically prove reasoning is consistent before acting on it.
- make_rules — write guardrails in plain English; ICME compiles them to formal logic via Automated Reasoning.
- check_action / quick_check — verify any agent action against your policy. SAT = allowed, UNSAT = blocked.
- verify_proof — independently verify the ZK receipt from any prior check.
- get_scenarios / run_tests — test your policy with AWS Automated Reasoning scenarios before deploying.
- Account & billing — create an account ($5 USDC or card), top up credits, pay per call via x402.

Docs: https://docs.icme.io
https://preflight--icme-labs.run.tools
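To make the SAT = allowed / UNSAT = blocked convention concrete, here is a minimal stand-alone Python sketch of the kind of contradiction check_logic is described as catching (a budget overflow). This is an illustration only, not the service's Z3-backed implementation; the function name `check_budget` and its signature are hypothetical.

```python
def check_budget(budget: float, allocations: dict[str, float]) -> str:
    """Hypothetical sketch: report 'SAT' (consistent) when the planned
    allocations fit within the stated budget, 'UNSAT' (contradiction)
    when they cannot all hold at once -- mirroring the SAT = allowed,
    UNSAT = blocked convention described above."""
    total = sum(allocations.values())
    return "SAT" if total <= budget else "UNSAT"


if __name__ == "__main__":
    # A plan that overspends: 60 + 55 = 115 > 100, a contradiction.
    print(check_budget(100, {"compute": 60, "storage": 55}))  # UNSAT
    # A consistent plan: 60 + 30 = 90 <= 100.
    print(check_budget(100, {"compute": 60, "storage": 30}))  # SAT
```

The real service compiles plain-English rules into formal logic and hands them to an SMT solver, which also handles symbolic (non-numeric) constraints; this sketch only covers the arithmetic special case.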