AI2 Technical Review • December 2025

GLACIS

Strategic Architecture Discussion

One sentence:

Cryptographic receipts for AI execution that work everywhere AI runs.

Presented by
Joe Braidwood, CEO
For
Vu Ha, AI2
The Vision

Certificate Transparency for AI

AI is entering regulated industries. Regulators will demand proof of what happened.

Logs can be falsified.

Attestations can't — if they're witnessed by an independent party and anchored in a transparency log.

THE PATTERN
Certificate Transparency solved trust for TLS certificates.
GLACIS solves trust for AI inference.
Architecture

Why Rust + TypeScript

Rust (Caer Library)

  • Verification is the trust anchor
  • No runtime exceptions, no type coercion bugs
  • WASM compilation: identical logic in CLI, browser, edge
  • Small, auditable, correct — gets security reviewed

TypeScript

  • Receipt generation, API surfaces, SDK, dashboard
  • Cloudflare Workers native, edge-first
  • AI-assisted development compounds velocity
  • Velocity matters here — Rust is for correctness

The Boundary

TypeScript generates receipts and stores them. Rust verifies them. That's the entire interface. Both sides speak the Receipt JSON schema.
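A minimal sketch of what that shared schema might look like in TypeScript. The four deterministic fields mirror what GLACIS attests today (input hash, output hash, policy, timestamp); the exact field names and the signature field are illustrative assumptions, not the published GLACIS schema.

// Illustrative Receipt shape shared by the TypeScript generator and the Rust
// verifier. Field names and the signature field are assumptions, not the
// actual GLACIS schema.
interface Receipt {
  inputHash: string;   // SHA-256 of the request payload, hex-encoded
  outputHash: string;  // SHA-256 of the provider response, hex-encoded
  policy: string;      // identifier of the policy applied at attestation time
  timestamp: string;   // RFC 3339 timestamp of the attested call
  signature: string;   // detached signature over the fields above (assumed)
}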

Question for Vu

Where do you see the Rust surface area growing? Should policy evaluation move into Rust for auditability?

Integration

The Sidecar Model

The unlock: GLACIS doesn't require customers to change their AI integration. The sidecar proxies existing API calls, attests them, and passes through the response.

[App] → [GLACIS Sidecar] → [OpenAI/Anthropic] → [GLACIS Sidecar] → [App]
                ↓
        [Transparency Log]
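A minimal sketch of the sidecar as a Cloudflare Worker, assuming the upstream provider URL and the transparency-log endpoint arrive as environment bindings; UPSTREAM_URL and LOG_URL are hypothetical names, not GLACIS configuration. It shows the pass-through shape: forward the call, hash request and response, record a receipt, and return the provider response unchanged.

// Sketch of the sidecar pass-through, not the production GLACIS Worker.
export default {
  async fetch(request: Request, env: { UPSTREAM_URL: string; LOG_URL: string }): Promise<Response> {
    // Capture the request bytes for hashing without consuming the body we forward.
    const inputBytes = await request.clone().arrayBuffer();

    // Pass the call through to the AI provider, re-targeted at the upstream URL.
    const upstream = await fetch(new Request(env.UPSTREAM_URL, request));
    const outputBytes = await upstream.clone().arrayBuffer();

    // Attest the call: hash input and output, record a receipt in the transparency log.
    const receipt = {
      inputHash: await sha256Hex(inputBytes),
      outputHash: await sha256Hex(outputBytes),
      policy: "default",
      timestamp: new Date().toISOString(),
    };
    await fetch(env.LOG_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(receipt),
    });

    // Return the provider response to the caller untouched.
    return upstream;
  },
};

async function sha256Hex(data: ArrayBuffer): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", data);
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}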

Why This Matters

  • Zero code change for AI vendors
  • Works with any provider (Anthropic, OpenAI, Azure, Bedrock, local)

Deployment Options

  • Cloudflare Worker (AI gateway)
  • Docker container (on-prem, air-gapped)
  • Lambda layer (AWS native)
  • Kubernetes sidecar (enterprise)

Question for Vu

What deployment model do you see AI vendors actually using? Are they Cloudflare-native, or do we need the container story immediately?

Evolution

Where ML Fits

Today

GLACIS attests deterministic facts:

  • Input hash
  • Output hash
  • Policy applied
  • Timestamp
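Because these facts are deterministic, anyone holding the original input and output can re-derive them and compare against the receipt. A sketch of that check in TypeScript, using the illustrative field names from the Receipt shape above:

// Re-derive the deterministic facts and compare them to the receipt.
// Field names follow the illustrative Receipt shape, not a confirmed schema.
async function sha256Hex(data: ArrayBuffer): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", data);
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

async function verifyDeterministicFacts(
  receipt: { inputHash: string; outputHash: string },
  input: ArrayBuffer,
  output: ArrayBuffer,
): Promise<boolean> {
  // The same bytes always hash to the same value, so a mismatch means the
  // receipt does not describe this input/output pair.
  return (
    (await sha256Hex(input)) === receipt.inputHash &&
    (await sha256Hex(output)) === receipt.outputHash
  );
}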

Tomorrow

What if attestation includes why the AI behaved a certain way?

  • Policy scoring at inference (safety, toxicity, PII)
  • Embedding-based classification (without revealing content)
  • Anomaly detection on inference patterns
  • Model fingerprinting (which model version produced the output)
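Purely as an illustration of the question below, an attestation payload extended this way might carry fields like the following. None of these exist today; every name is hypothetical.

// Hypothetical extension of the receipt if analysis moved into the attestation
// itself. Every field below is illustrative, not part of GLACIS today.
interface ExtendedReceipt {
  inputHash: string;
  outputHash: string;
  policy: string;
  timestamp: string;
  scores?: { safety: number; toxicity: number; pii: number }; // policy scoring at inference
  embeddingClass?: string;    // classification derived from embeddings, content not revealed
  anomalyScore?: number;      // deviation from the caller's usual inference patterns
  modelFingerprint?: string;  // which model version produced the output
}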

Strategic Question

Is GLACIS logging infrastructure, or does it become an analysis layer?

Question for Vu

You've thought deeply about knowledge systems. Where does ML fit in compliance infrastructure — in the attestation itself, or as a separate service consuming the log?

Discussion

What I Want From This Conversation

1. Sidecar Pattern-Match

Does "zero integration change" actually matter to AI vendors, or do they want deeper integration?

2. ML Layer Timing

Logging vs. analysis — and when that decision matters.

3. Cloudflare Bet

Is it right to go deep on Cloudflare, or should we be infrastructure-agnostic earlier?

4. What's Missing

What would you need to see before pointing an AI2 portfolio company at us?