Mozilla Ventures Introduction
Cryptographic proof that AI controls actually executed. Switzerland in the wire — neutral, platform-agnostic, zero data egress.
The Problem
Every AI governance platform can tell you what should happen. None can prove what actually happened.
Governance platforms define policies, risk frameworks, compliance requirements. They answer what should happen.
Runtime infrastructure enforces controls, runs guardrails, captures telemetry. It determines what actually happens.
No cryptographic bridge between them. Governance says "compliant." Runtime says "controls ran." But there's no independently verifiable proof connecting the two.
Market Timing
EU AI Act: High-risk AI systems must demonstrate conformity with essential requirements. Documentation alone won't suffice.
Colorado AI Act: First US state law requiring deployers to implement risk management with documented evidence.
Litigation: Class actions against ambient scribes (Sharp HealthCare). Vendors can't prove controls executed.
The Pattern: Regulations are moving from "have policies" to "prove enforcement." Point-in-time audits become continuous attestation. Self-reported dashboards become third-party verifiable receipts.
The Solution
GLACIS sits in the inference path, executes your AI controls, and produces cryptographic receipts that third parties can verify — without ever accessing your data.
Run guardrails, PII scrubbing, consent checks, content filtering — as verifiable operations, not trust-me claims.
Produce cryptographic attestations bound to each inference. Third-party witness co-signs without seeing content.
Data never leaves customer boundary. Only commitments (hashes) export. Privacy-preserving by architecture.
Platform-agnostic attestation layer. Works with any AI stack, any governance tool, any cloud. Neutral infrastructure for trustworthy AI — aligned with Mozilla's vision of an open, interoperable internet.
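The receipt mechanism above can be sketched in a few lines. This is an illustrative model, not GLACIS's actual implementation: the function names are hypothetical, and an HMAC stands in for whatever asymmetric signature scheme the real system uses. The key point it demonstrates is that only hash commitments, never raw content, appear in the receipt.

```python
import hashlib
import hmac
import json

def make_receipt(prompt: bytes, output: bytes, controls: list, key: bytes) -> dict:
    # Commit to content via hashes; the raw data never leaves the
    # customer boundary -- only these commitments export.
    receipt = {
        "prompt_commitment": hashlib.sha256(prompt).hexdigest(),
        "output_commitment": hashlib.sha256(output).hexdigest(),
        "controls_executed": controls,  # e.g. ["pii_scrub", "consent_check"]
    }
    # Bind the receipt to this inference with a signature.
    # (HMAC stands in for an asymmetric signature in this sketch.)
    payload = json.dumps(receipt, sort_keys=True).encode()
    receipt["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt: dict, key: bytes) -> bool:
    # A verifier can check integrity without ever seeing the content
    # behind the commitments.
    body = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])
```

Tampering with any field, such as the list of controls claimed to have run, invalidates the signature, which is what turns "trust-me claims" into verifiable operations.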
Architecture
MIT-licensed. Runs in your VPC. Full visibility into what's being attested. No vendor lock-in. A natural fit with Mozilla's open-source ethos.
Third-party witnesses co-sign attestations. Creates receipts no single party can forge. Enables parametric insurance triggers.
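Witness co-signing can be sketched as a dual-signature check. Again a hypothetical model with HMAC standing in for real asymmetric signatures: the witness signs only the commitment hashes, so it never sees content, and a valid receipt requires both signatures, so no single party can forge one.

```python
import hashlib
import hmac
import json

def _sign(body: dict, key: bytes) -> str:
    payload = json.dumps(body, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def cosign_receipt(commitments: dict, operator_key: bytes, witness_key: bytes) -> dict:
    # Both parties sign the same commitments. The witness only ever
    # handles hashes, never the underlying content.
    return {
        "body": commitments,
        "operator_sig": _sign(commitments, operator_key),
        "witness_sig": _sign(commitments, witness_key),
    }

def verify_cosigned(receipt: dict, operator_key: bytes, witness_key: bytes) -> bool:
    # Forging a receipt would require compromising BOTH keys --
    # this is what makes it usable as an insurance trigger.
    op_ok = hmac.compare_digest(
        receipt["operator_sig"], _sign(receipt["body"], operator_key))
    wit_ok = hmac.compare_digest(
        receipt["witness_sig"], _sign(receipt["body"], witness_key))
    return op_ok and wit_ok
```

Because the co-signed receipt is independently checkable, a third party such as an insurer can automate payout conditions against it without access to either party's data.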
Integration
GRC platforms define what trustworthy AI should look like. GLACIS proves what actually happened. Together, they close the evidence loop.
The Integration: GRC exports policy requirements → GLACIS configures runtime controls → Attestation receipts flow back to dashboards. Customers get end-to-end proof, not just checkboxes.
Mozilla AI Opportunity
Mozilla AI's new LLM could ship with native GLACIS attestation — every inference automatically produces a verifiable receipt. A differentiator closed models can't match.
OpenAI and Anthropic can't credibly offer "trust-but-verify" — they are the party you'd verify against. Mozilla + GLACIS creates an open trust stack that closed providers cannot replicate.
Traction
All Inbound, Zero Marketing Spend. Design partners found us through LinkedIn content, HLTH networking, and word of mouth. The pull is real.
40k+ visits/day
Colorado-based. Facing Colorado AI Act deadline.
Consent attestation • PHI proof • Guardrail evidence
Tenant-bounded AI with trade secrets, pre-release drug data.
Query attestation • Tenant isolation • Trade secret fencing
Also in pipeline
Team
Co-Founder & CEO
SwiftKey → 1 in 4 smartphone users
Founding exec, $250M Microsoft exit. Chief Strategy at Vektor Medical—secured reimbursement for AI device. Cambridge Law.
Co-Founder & CMO
Cognoa → First FDA De Novo for AI diagnostics
Medical Director at Cognoa. Navigated FDA authorization for AI that diagnoses autism in children.
CTO
Microsoft Azure → $2B product line
Engineer turned product leader. Led Azure's enterprise platform business. Personal relationships with top 50 Azure customer CEOs.
Advisors
Paul Allen's AI institute
$250K infrastructure credits
Filed Nov 2025
The Ask
Mission Alignment
Trustworthy AI as infrastructure, not theater. Open standards over proprietary lock-in.
Mozilla AI Integration
Native attestation could differentiate Mozilla's LLM in the enterprise market.
Portfolio Synergies
Complements existing AI governance investments. Evidence layer + policy layer = complete stack.
Network Effects
Mozilla's convening power could accelerate adoption of open attestation standards.
Let's build the evidence layer for trustworthy AI — together.