Project 01

AIGIST v0.20.5

Morality is not a speech. It is a switch statement.

Active Development, 2025–2026
View Live Dashboard

The Problem

The Illusion of Prompt Safety

AI agents are gaining unprecedented autonomy, integrating directly into enterprise databases, payment gateways, and core infrastructure. Yet the safety mechanisms guarding these systems are still, for the most part, system prompts.

A system prompt is essentially a speech explaining to the AI why doing bad things is wrong. In an enterprise environment facing adversarial prompt injection and zero-day vulnerabilities, a speech is unacceptable. A speech can be argued with.

Physics cannot be argued with.

The Solution

Typed Policy + Sandbox Toolkit

AIGIST provides an enterprise-secure agent toolkit that compiles morality and policy directly into WASM sandboxes, Zod schemas, and Biscuit Datalog rules.

It is a deterministic governance architecture where an LLM is allowed to execute only what the cryptographic trust chain explicitly permits, only within a capability-scoped runtime, and only after a Constitutional Supervisor co-signs the intent.

If the LLM generates a malicious payload, the execution fails at the cryptographic or hardware layer — completely independent of the model's reasoning capabilities.


Architecture

The 5-Layer Guardian Stack

01

Identity

Biscuit tokens with Ed25519 signatures and Datalog attenuation rules. Sub-agents are mathematically provable subsets of their parent's authority.
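The subset property can be illustrated with a toy model. Real Biscuit tokens enforce this with Datalog checks and Ed25519 signature chains; the sketch below only models the set-theoretic invariant, and the capability names are invented.

```typescript
type Capability = string;

// Attenuation can only narrow authority: intersect with the parent's
// capabilities, never extend beyond them.
function attenuate(parent: Set<Capability>, requested: Capability[]): Set<Capability> {
  return new Set(requested.filter((c) => parent.has(c)));
}

// The invariant the signature chain makes provable: child ⊆ parent.
function isSubset(child: Set<Capability>, parent: Set<Capability>): boolean {
  return [...child].every((c) => parent.has(c));
}

const root = new Set(["db:read", "db:write", "mail:send"]);
// The sub-agent asks for a capability the parent never had; it is dropped.
const subAgent = attenuate(root, ["db:read", "payments:refund"]);
```

In the real system the subset relation is not checked at runtime by application code; it holds by construction of the token chain.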

02

Cage

Each agent action runs in a Wasmtime sandbox with deny-all defaults. Only the exact capabilities required for the intent hash are provisioned.
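A deny-all capability table keyed by intent hash might look like the following sketch. The grant names and intent strings are hypothetical; only the shape (hash in, explicit grant or empty set out) reflects the description above.

```typescript
import { createHash } from "node:crypto";

// Explicit grants, keyed by the SHA-256 of a canonical intent string.
const grants = new Map<string, string[]>();

function intentHash(intent: string): string {
  return createHash("sha256").update(intent).digest("hex");
}

// Deny-all default: an unknown intent resolves to the empty capability
// set, not to an error path the model could negotiate with.
function provision(intent: string): string[] {
  return grants.get(intentHash(intent)) ?? [];
}

grants.set(intentHash("read-invoice"), ["fs:read:/invoices"]);
```

Only the provisioned list would then be wired into the Wasmtime sandbox for that single action.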

03

Semantic Firewall

A Rust hyper proxy that canonicalizes Unicode, scans for canary tokens, sanitizes content, and evaluates semantic risk.
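Two of those passes, canonicalization and canary scanning, can be sketched in a few lines. The canary format is invented; the real proxy is Rust, but the logic is the same.

```typescript
// Invented canary format for illustration.
const CANARY = /CANARY-[0-9a-f]{8}/;

// NFKC folds compatibility characters, so look-alike text such as
// fullwidth "ＣＡＮＡＲＹ" cannot slip past an ASCII string check.
function canonicalize(input: string): string {
  return input.normalize("NFKC");
}

function scan(input: string): { clean: boolean } {
  return { clean: !CANARY.test(canonicalize(input)) };
}
```

Canonicalizing before scanning is the important ordering: scanning raw input would let homoglyph variants of a secret token leak through.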

04

Supervisor

Every agent action is evaluated against Ternary Moral Logic rules and co-signed via gRPC before execution.

05

Circuit Breaker

Health-score-based anomaly detection protecting the chain. Drop below threshold and the system terminates the process.
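A toy version of the health-score mechanism: the threshold and anomaly weights below are invented, but the shape (a monotonically decaying score that trips a hard stop) matches the description above.

```typescript
class CircuitBreaker {
  private score = 100;
  // Threshold of 40 is illustrative, not AIGIST's real value.
  constructor(private readonly threshold = 40) {}

  // Each reported anomaly lowers the health score; once it falls below
  // the threshold the breaker opens and the process is terminated.
  report(anomalyWeight: number): "open" | "closed" {
    this.score = Math.max(0, this.score - anomalyWeight);
    return this.score < this.threshold ? "open" : "closed";
  }
}

const breaker = new CircuitBreaker();
breaker.report(30);               // score 70, still closed
const state = breaker.report(35); // score 35, below threshold: open
```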

Logic Engine

Ternary Moral Logic

Traditional authorization systems are binary: allow or deny. AI operations often encounter edge cases that require human context. AIGIST introduces a four-state outcome model.

Every proposed tool call is evaluated by the Constitutional Supervisor against the organization's policies, resulting in one of these deterministic states.

+1

Permit

Action passes all cryptographic and semantic checks. Authorized to execute.

0

Sacred Pause

Insufficient context or ambiguous intent. Action is held for human review.

−1

Prohibit

Action violates policy constraints. Hard blocked.

−2

Terminate

A policy violation is detected while the action is already in flight. The process is terminated immediately.
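The four states above can be expressed as a small deterministic evaluator. The rule inputs (signature validity, policy match, and so on) are invented stand-ins for the Supervisor's real checks; the numeric codes and state names come from this section.

```typescript
type Verdict =
  | { code: 1; state: "permit" }
  | { code: 0; state: "pause" }
  | { code: -1; state: "prohibit" }
  | { code: -2; state: "terminate" };

// Hypothetical check results feeding the decision.
interface ToolCall {
  signatureValid: boolean;    // cryptographic checks passed
  violatesPolicy: boolean;    // matched a policy constraint
  inFlightViolation: boolean; // violation detected mid-execution
  ambiguous: boolean;         // insufficient context for a ruling
}

function evaluate(call: ToolCall): Verdict {
  if (call.inFlightViolation) return { code: -2, state: "terminate" };
  if (!call.signatureValid || call.violatesPolicy)
    return { code: -1, state: "prohibit" };
  if (call.ambiguous) return { code: 0, state: "pause" }; // held for human review
  return { code: 1, state: "permit" };
}
```

Note the ordering: an in-flight violation outranks everything, and a pause is only reachable when all hard checks have already passed.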

Status

Development Trajectory

Milestones: 13 / 17 Complete

Completed

  • Identity Package (Biscuit Ed25519)
  • WASM Sandbox Runtime
  • TML Engine Implementation
  • Compliance Typings
  • Agent Runtime Context
  • Core Supervisor Logic
  • Next.js 15 Governance Dashboard
  • Live environment on aigist.io

Pending

  • FIDO2 Hardware Key Integration
  • OpenTelemetry (OTel) Span Trees
  • GoC Sovereign AI Compliance Audits

Tech Stack

Engineering

TypeScript: SDK, CLI, & Dashboard (11 packages)
Rust: Wasmtime runtime & crypto signing (4 crates)
Wasmtime: Capability-based execution sandbox
Biscuit Tokens: Cryptographic identity & Datalog policies
gRPC: Low-latency agent-supervisor communication

Canadian Sovereign AI

AIGIST is architected with strict alignment to the Government of Canada's AI policies, including AIDA (Artificial Intelligence and Data Act), PIPEDA, and the Directive on Automated Decision-Making (DADM). The platform supports on-premise, air-gapped deployments for complete data sovereignty.

Live Demonstration

AIGIST Governance Center

The AIGIST dashboard is currently live. It visualizes the TML decision engine, compliance statuses, and system health of the agent runtime.

aigist.io