Project 01
Morality is not a speech. It is a switch statement.

The Problem
AI agents are gaining unprecedented autonomy, integrating directly into enterprise databases, payment gateways, and core infrastructure. Yet, the safety mechanisms guarding these systems are still primarily just system prompts.
A system prompt is essentially a speech explaining to the AI why doing bad things is wrong. In an enterprise environment facing adversarial prompt injection and zero-day vulnerabilities, a speech is unacceptable. A speech can be argued with.
Physics cannot be argued with.
The Solution
AIGIST provides an enterprise-secure agent toolkit that compiles morality and policy directly into WASM sandboxes, Zod schemas, and Biscuit Datalog rules.
It is a deterministic governance architecture where an LLM is allowed to execute only what the cryptographic trust chain explicitly permits, only within a capability-scoped runtime, and only after a Constitutional Supervisor co-signs the intent.
If the LLM generates a malicious payload, the execution fails at the cryptographic or hardware layer — completely independent of the model's reasoning capabilities.
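That gate can be reduced to a single conjunction: execution requires the trust chain, the capability scope, and the supervisor's co-signature to all pass. A minimal sketch, where the three booleans are illustrative stand-ins for the real cryptographic checks:

```rust
// Illustrative model of the execution gate: all three checks must pass.
// In the real system each input is a cryptographic verification, not a bool.
fn may_execute(token_permits: bool, capability_in_scope: bool, supervisor_cosigned: bool) -> bool {
    token_permits && capability_in_scope && supervisor_cosigned
}

fn main() {
    // A payload the supervisor refuses to co-sign never runs,
    // regardless of what the model "argued" in its reasoning.
    assert!(!may_execute(true, true, false));
    assert!(may_execute(true, true, true));
    println!("gate holds");
}
```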
Architecture
Biscuit tokens with Ed25519 signatures and Datalog attenuation rules. Each sub-agent holds a mathematically provable subset of its parent's authority.
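The attenuation invariant can be illustrated without the cryptography: a child token may only narrow its parent's capability set, never widen it. A minimal sketch using plain sets (the real system enforces this with signed Biscuit blocks, not runtime checks; all capability names are illustrative):

```rust
use std::collections::HashSet;

// Toy model of Biscuit-style attenuation: a derived token keeps only
// capabilities the parent already holds, so it is a subset by construction.
#[derive(Debug, Clone)]
struct Token {
    capabilities: HashSet<String>,
}

impl Token {
    fn new(caps: &[&str]) -> Self {
        Token { capabilities: caps.iter().map(|s| s.to_string()).collect() }
    }

    // Attenuate: requested capabilities the parent lacks are silently dropped.
    fn attenuate(&self, requested: &[&str]) -> Token {
        let caps = requested
            .iter()
            .filter(|c| self.capabilities.contains(**c))
            .map(|s| s.to_string())
            .collect();
        Token { capabilities: caps }
    }

    // The subset relation the real system proves cryptographically.
    fn is_subset_of(&self, parent: &Token) -> bool {
        self.capabilities.is_subset(&parent.capabilities)
    }
}

fn main() {
    let parent = Token::new(&["db:read", "db:write", "net:fetch"]);
    // The sub-agent asks for an escalation it was never granted.
    let child = parent.attenuate(&["db:read", "payments:refund"]);
    assert!(child.is_subset_of(&parent));
    assert!(!child.capabilities.contains("payments:refund"));
    println!("child capabilities: {:?}", child.capabilities);
}
```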
Each agent action runs in a Wasmtime sandbox with deny-all defaults. Only the exact capabilities required for the intent hash are provisioned.
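Deny-all provisioning means an unknown intent hash maps to an empty capability set rather than a fallback. A minimal sketch of that lookup, assuming a hypothetical registry keyed by intent hash (names are illustrative):

```rust
use std::collections::HashMap;

// Illustrative deny-all capability registry: nothing is granted unless the
// exact intent hash was pre-registered with an explicit capability list.
struct Sandbox {
    grants: HashMap<String, Vec<String>>, // intent hash -> allowed capabilities
}

impl Sandbox {
    fn new() -> Self {
        Sandbox { grants: HashMap::new() }
    }

    fn register_intent(&mut self, intent_hash: &str, caps: Vec<String>) {
        self.grants.insert(intent_hash.to_string(), caps);
    }

    // Deny-all default: an unregistered intent gets no capabilities at all.
    fn capabilities_for(&self, intent_hash: &str) -> Vec<String> {
        self.grants.get(intent_hash).cloned().unwrap_or_default()
    }
}

fn main() {
    let mut sandbox = Sandbox::new();
    sandbox.register_intent("a1b2c3", vec!["fs:read:/data".to_string()]);
    assert_eq!(sandbox.capabilities_for("a1b2c3").len(), 1);
    assert!(sandbox.capabilities_for("unknown-intent").is_empty());
    println!("deny-all default holds");
}
```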
A Rust proxy built on hyper that canonicalizes Unicode, scans for canary tokens, sanitizes content, and evaluates semantic risk.
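One sanitization pass can be sketched with the standard library alone: strip zero-width characters (a common vector for hiding injected instructions), then scan for a canary token. This is a simplified illustration; full canonicalization would apply Unicode NFKC normalization, which needs a dedicated crate:

```rust
// Simplified sketch of the proxy's sanitization pass. Real canonicalization
// covers far more than zero-width characters; this shows only the shape.
fn sanitize(input: &str) -> String {
    input
        .chars()
        // Drop zero-width space/joiners and the BOM, which can split a
        // canary token so a naive substring scan misses it.
        .filter(|c| !matches!(c, '\u{200B}' | '\u{200C}' | '\u{200D}' | '\u{FEFF}'))
        .collect()
}

// Canary detection runs on the sanitized text, not the raw bytes.
fn contains_canary(input: &str, canary: &str) -> bool {
    sanitize(input).contains(canary)
}

fn main() {
    // Zero-width joiner inserted mid-token to evade a naive scan.
    let payload = "exfil: CAN\u{200C}ARY-7f3a";
    assert!(!payload.contains("CANARY-7f3a")); // raw scan misses it
    assert!(contains_canary(payload, "CANARY-7f3a")); // sanitized scan catches it
    println!("canary detected after canonicalization");
}
```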
Every agent action is evaluated against Ternary Moral Logic rules and co-signed via gRPC before execution.
Health-score-based anomaly detection protects the chain: if the score drops below a threshold, the system terminates the process.
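The termination rule itself is deliberately simple: the decision is a comparison, not a judgment call. A minimal sketch, where the scoring inputs and threshold value are illustrative assumptions:

```rust
// Illustrative watchdog: the kill decision is a pure threshold comparison,
// so it cannot be argued with once the score is computed.
struct Watchdog {
    threshold: f64,
}

impl Watchdog {
    fn should_terminate(&self, health_score: f64) -> bool {
        health_score < self.threshold
    }
}

fn main() {
    let watchdog = Watchdog { threshold: 0.5 }; // threshold value is illustrative
    assert!(watchdog.should_terminate(0.31)); // anomalous agent: kill
    assert!(!watchdog.should_terminate(0.92)); // healthy agent: continue
    println!("watchdog rule holds");
}
```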
Logic Engine
Traditional authorization systems are binary: allow or deny. AI operations often encounter edge cases that require human context. AIGIST introduces a four-state outcome model.
Every proposed tool call is evaluated by the Constitutional Supervisor against the organization's policies, resulting in one of these deterministic states.
Action passes all cryptographic and semantic checks. Authorized to execute.
Insufficient context or ambiguous intent. Action is held for human review.
Action violates policy constraints. Hard blocked.
Action is flagged mid-execution for violating the compliance posture. The process is terminated.
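The four states above can be sketched as a single exhaustive evaluation. The enum variant names and the boolean inputs are illustrative, not the system's actual identifiers:

```rust
// Illustrative four-state outcome model. Variant names are placeholders
// for the states described above, in the same order.
#[derive(Debug, PartialEq)]
enum Verdict {
    Proceed,   // passes all cryptographic and semantic checks
    Hold,      // insufficient context or ambiguous intent: human review
    Deny,      // violates policy constraints: hard blocked
    Terminate, // compliance violation detected mid-flight: process killed
}

// Deterministic evaluation: the same inputs always yield the same verdict.
fn evaluate(policy_ok: bool, intent_clear: bool, in_flight_violation: bool) -> Verdict {
    if in_flight_violation {
        return Verdict::Terminate;
    }
    if !policy_ok {
        return Verdict::Deny;
    }
    if !intent_clear {
        return Verdict::Hold;
    }
    Verdict::Proceed
}

fn main() {
    assert_eq!(evaluate(true, true, false), Verdict::Proceed);
    assert_eq!(evaluate(true, false, false), Verdict::Hold);
    assert_eq!(evaluate(false, true, false), Verdict::Deny);
    assert_eq!(evaluate(true, true, true), Verdict::Terminate);
    println!("all four states covered");
}
```

Note that the non-binary middle state (Hold) is what distinguishes this from a traditional allow/deny authorizer: ambiguity routes to a human instead of forcing a guess.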
Compliance
AIGIST is architected with strict alignment to the Government of Canada's AI policies, including AIDA (Artificial Intelligence and Data Act), PIPEDA, and the Directive on Automated Decision-Making (DADM). The platform supports on-premise, air-gapped deployments for complete data sovereignty.
Live Demonstration
The AIGIST dashboard is currently live. It visualizes the TML decision engine, compliance statuses, and system health of the agent runtime.