Project 01
Enterprise Secure AI Agent Governance Platform — a deterministic architecture where AI agents execute only what the cryptographic trust chain permits.
The Problem
Current AI safety relies on system prompts and content filters — probabilistic guardrails that can be bypassed with creative prompt engineering.
As enterprises deploy autonomous AI agents that make procurement decisions, modify employee records, and execute financial transactions, the gap between “the model usually follows instructions” and “the system physically prevents unauthorized actions” becomes a critical liability.
The industry lacks governance that enforces constraints at the infrastructure level rather than the conversational level.
The Solution
PR1M3Claw implements a 5-layer Guardian Architecture that enforces safety through physics, not persuasion. Each layer operates independently — compromising one does not compromise the chain.
At the core is Ternary Moral Logic (TML) — a deterministic state machine in which every agent action resolves to exactly one of four states: the three ternary verdicts (Permit, Sacred Pause, Prohibit) plus a fail-safe (Self-Sacrifice). There is no “probably safe.”
Multi-jurisdiction compliance (GDPR, CCPA, PIPEDA, Quebec Law 25) is encoded as TypeScript interfaces and Zod schemas — a missing lawful basis field is a compile-time error, not a policy violation discovered in an audit.
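The compliance-as-types idea can be sketched in plain TypeScript (the project pairs interfaces like this with Zod for runtime validation); the field and type names below are illustrative, not the project's actual schema:

```typescript
// Hypothetical sketch: a GDPR lawful basis encoded as a required field,
// so omitting it is a compile-time error rather than an audit finding.
type LawfulBasis =
  | "consent"
  | "contract"
  | "legal_obligation"
  | "vital_interests"
  | "public_task"
  | "legitimate_interests";

interface PersonalDataAction {
  subjectId: string;
  purpose: string;
  lawfulBasis: LawfulBasis; // deleting this field breaks the build
}

function recordProcessing(action: PersonalDataAction): string {
  // Produce an audit-log entry; every entry necessarily carries a basis.
  return `${action.subjectId}:${action.purpose}:${action.lawfulBasis}`;
}

const entry = recordProcessing({
  subjectId: "u-42",
  purpose: "payroll",
  lawfulBasis: "contract",
});
console.log(entry);
```

Removing `lawfulBasis` from the object literal above would fail type-checking before the code ever ships.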
Architecture
Each layer enforces safety independently. An agent action must pass through all five layers before execution is permitted.
Qubes OS separates the AI's reasoning environment from its execution environment across hardware-enforced VM boundaries. Compromise of one compartment cannot reach the other.
Biscuit tokens with Ed25519 signatures and Datalog attenuation rules make each sub-agent's authority a mathematically provable subset of its parent's — privilege escalation is impossible by construction.
Each agent action runs in a Wasmtime sandbox with deny-all defaults. Only the exact capabilities required for that specific intent hash are provisioned — nothing more.
A Rust hyper proxy canonicalizes Unicode (NFKC), scans for canary-token leaks, sanitizes content, and runs harm classification through Llama-Guard-3, blocking threats before they reach the agent.
Every agent action is evaluated against Ternary Moral Logic rules and co-signed via gRPC before execution. Circuit breakers with health-score-based anomaly detection protect the chain.
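The independence of the layers can be illustrated as a minimal veto chain, where each layer gets an unconditional veto and execution requires unanimity. The layer names and checks below are hypothetical simplifications, not the platform's real module interfaces:

```typescript
type Verdict = "permit" | "block";

interface Layer {
  name: string;
  check(action: { intent: string; payload: string }): Verdict;
}

// Five independent checks; compromising one cannot flip another's verdict.
const layers: Layer[] = [
  { name: "vm-isolation", check: () => "permit" },
  { name: "capability-token", check: (a) => (a.intent.startsWith("read") ? "permit" : "block") },
  { name: "wasm-cage", check: () => "permit" },
  { name: "semantic-firewall", check: (a) => (a.payload.includes("CANARY") ? "block" : "permit") },
  { name: "supervisor", check: () => "permit" },
];

function authorize(action: { intent: string; payload: string }): Verdict {
  // A single "block" anywhere in the chain vetoes execution.
  return layers.every((l) => l.check(action) === "permit") ? "permit" : "block";
}

console.log(authorize({ intent: "read:ledger", payload: "q1 totals" }));  // permit
console.log(authorize({ intent: "write:ledger", payload: "q1 totals" })); // block
```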
Execute or Block
Decision Model
Every agent action resolves to exactly one of four deterministic states. There is no ambiguity.
+1 Permit: Action passes all checks and is authorized to execute.
0 Sacred Pause: Insufficient context — action is held for human review.
−1 Prohibit: Action violates policy constraints and is blocked.
−2 Self-Sacrifice: System compromise detected — keys are zeroized, storage wiped, process killed.
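The four-state model above can be sketched as a TypeScript discriminated union; the inputs to `resolve` are invented for illustration, but the point is that every branch returns exactly one state, with no fall-through ambiguity:

```typescript
type TmlState = 1 | 0 | -1 | -2;

interface Decision {
  state: TmlState;
  label: "Permit" | "Sacred Pause" | "Prohibit" | "Self-Sacrifice";
}

// Hypothetical inputs: deterministic precedence, worst case first.
function resolve(action: {
  known: boolean;
  violatesPolicy: boolean;
  compromised: boolean;
}): Decision {
  if (action.compromised) return { state: -2, label: "Self-Sacrifice" };
  if (action.violatesPolicy) return { state: -1, label: "Prohibit" };
  if (!action.known) return { state: 0, label: "Sacred Pause" };
  return { state: 1, label: "Permit" };
}

console.log(resolve({ known: false, violatesPolicy: false, compromised: false }).label);
```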
10 packages (TypeScript + Rust)
9/12 milestones completed
5 jurisdictions (compliance frameworks)
5 safety layers (independent enforcement)
Technology
Every technology choice serves the governance mission — Rust for performance-critical security paths, WASM for deterministic sandboxing, cryptographic tokens for unforgeable authorization.
When an agent issues a tool call, the payload passes through the Semantic Firewall for content sanitization, then the Constitutional Supervisor evaluates intent against TML rules and co-signs with Ed25519.
Only then does the WASM Cage provision a capability sandbox with the exact permissions required for that specific intent hash.
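The hash-then-co-sign step can be sketched with Node's built-in Ed25519 support; the payload shape and the SHA-256 intent hash are assumptions for illustration, not the platform's documented scheme:

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Supervisor keypair (in practice this would be provisioned, not generated inline).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// 1. The sanitized tool-call payload is reduced to a stable intent hash.
const payload = JSON.stringify({ tool: "ledger.read", args: { quarter: "Q1" } });
const intentHash = createHash("sha256").update(payload).digest();

// 2. The supervisor co-signs the intent hash with Ed25519
//    (algorithm is null because Ed25519 defines its own digest).
const signature = sign(null, intentHash, privateKey);

// 3. The sandbox provisions capabilities only if the signature verifies
//    against exactly this intent hash; any payload change breaks it.
const approved = verify(null, intentHash, publicKey, signature);
console.log(approved); // true
```

Because the signature binds the exact intent hash, a tampered payload produces a different hash and verification fails, so the sandbox never provisions capabilities for it.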
Stack
TypeScript: core logic, governance rules, dashboard
Rust: semantic firewall, performance-critical paths
WASM / Wasmtime: agent capability sandboxing
gRPC: Constitutional Supervisor communication
Biscuit Tokens: cryptographic authorization chain
Ed25519: digital signatures and key management
Next.js 15: governance dashboard UI
Turborepo: monorepo CI/CD pipeline
“Morality is not a speech. It is a switch statement.”