Mozilla Ventures Introduction
Cryptographic proof that AI controls actually executed. Switzerland in the wire — neutral, platform-agnostic, zero data egress.
The Problem
"There's a massive gap between AI ops and GRC, and nobody wanted to bridge that chasm."
— Navrina Singh, CEO Credo AI
Governance platforms define policies, risk frameworks, compliance requirements. They answer what should happen.
Runtime infrastructure enforces controls, runs guardrails, captures telemetry. It determines what actually happens.
No cryptographic bridge between them. Governance says "compliant." Runtime says "controls ran." But there's no independently verifiable proof connecting the two.
Market Timing
High-risk AI systems must demonstrate conformity with essential requirements. Documentation alone won't suffice.
First US state law requiring deployers to implement risk management with documented evidence.
Class actions against ambient scribes (Sharp HealthCare). Vendors can't prove controls executed.
The Pattern: Regulations are moving from "have policies" to "prove enforcement." Point-in-time audits become continuous attestation. Self-reported dashboards become third-party verifiable receipts.
The Solution
GLACIS sits in the inference path, executes your AI controls, and produces cryptographic receipts that third parties can verify — without ever accessing your data.
Run guardrails, PII scrubbing, consent checks, content filtering — as verifiable operations, not trust-me claims.
Produce cryptographic attestations bound to each inference. Third-party witness co-signs without seeing content.
Data never leaves the customer boundary. Only commitments (hashes) export. Privacy-preserving by architecture (see the sketch after this list).
Platform-agnostic attestation layer. Works with any AI stack, any governance tool, any cloud. Neutral infrastructure for trustworthy AI — aligned with Mozilla's vision of an open, interoperable internet.
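To make the receipt idea concrete, here is a minimal Python sketch under stated assumptions: the function names and receipt fields are illustrative, not the GLACIS API. The point it shows is that only hash commitments and control verdicts are exportable; prompts and responses never leave the boundary.

```python
# Illustrative only: `run_controls` and the receipt fields are assumptions,
# not the GLACIS API. Only SHA-256 commitments of the prompt and response
# (plus control verdicts) appear in the exportable receipt.
import hashlib
import json
import time

def commit(data: bytes) -> str:
    """Hash commitment: proves what the data was without revealing it."""
    return hashlib.sha256(data).hexdigest()

def run_controls(prompt: str) -> dict:
    """Stand-in for in-path controls (PII scrubbing, consent checks, content filtering)."""
    return {"pii_scrub": "pass", "consent_check": "pass", "content_filter": "pass"}

def attestation_receipt(prompt: str, response: str) -> dict:
    """Receipt bound to a single inference; contains hashes and verdicts, never content."""
    return {
        "ts": int(time.time()),
        "prompt_commitment": commit(prompt.encode()),
        "response_commitment": commit(response.encode()),
        "controls": run_controls(prompt),
    }

receipt = attestation_receipt("patient note ...", "summary ...")
print(json.dumps(receipt, indent=2))  # safe to export: a verifier checks hashes, not data
```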
Architecture
MIT-licensed. Runs in your VPC. Full visibility into what's being attested. No vendor lock-in, in keeping with Mozilla's open-source ethos.
Third-party witnesses co-sign attestations. Creates receipts no single party can forge. Enables parametric insurance triggers.
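A rough sketch of the co-signing flow, assuming a simple dual-signature scheme. HMAC stands in for the asymmetric signatures a production witness protocol would use, and every key and field name below is hypothetical.

```python
# Minimal co-signing sketch: the witness signs only commitments and control
# verdicts, never content. HMAC is a stdlib stand-in for asymmetric
# signatures (e.g., Ed25519); keys and fields are assumptions.
import hashlib
import hmac
import json

CUSTOMER_KEY = b"customer-signing-key"    # stays inside the customer boundary
WITNESS_KEY = b"independent-witness-key"  # held by the third-party witness

def _sign(key: bytes, obj: dict) -> str:
    """Deterministic MAC over a canonical JSON encoding of the receipt fields."""
    return hmac.new(key, json.dumps(obj, sort_keys=True).encode(), hashlib.sha256).hexdigest()

def customer_sign(receipt: dict) -> dict:
    return {**receipt, "customer_sig": _sign(CUSTOMER_KEY, receipt)}

def witness_cosign(receipt: dict) -> dict:
    # Witness co-signs the customer-signed receipt without seeing any content.
    return {**receipt, "witness_sig": _sign(WITNESS_KEY, receipt)}

def verify(receipt: dict) -> bool:
    """Valid only if both parties signed: neither side can forge a receipt alone."""
    body = {k: v for k, v in receipt.items() if k not in ("customer_sig", "witness_sig")}
    customer_ok = hmac.compare_digest(receipt["customer_sig"], _sign(CUSTOMER_KEY, body))
    cosigned = {k: v for k, v in receipt.items() if k != "witness_sig"}
    witness_ok = hmac.compare_digest(receipt["witness_sig"], _sign(WITNESS_KEY, cosigned))
    return customer_ok and witness_ok

receipt = witness_cosign(customer_sign({"prompt_commitment": "ab12...", "controls": {"pii_scrub": "pass"}}))
assert verify(receipt)
```

Because verification requires both signatures over the same commitments, forging a receipt would take collusion between the deployer and the independent witness, which is what makes the receipts usable as triggers for third parties such as insurers.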
Partnership
Credo defines what trustworthy AI looks like. GLACIS proves that it happened.
The Integration: Credo exports policy requirements → GLACIS configures runtime controls → Attestation receipts flow back to Credo dashboards. Customers get end-to-end proof, not just checkboxes.
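One way the loop could look in data terms; every field name below is an assumption for illustration, not Credo's or GLACIS's actual schema.

```python
# Hypothetical shape of the Credo -> GLACIS -> Credo loop.
policy_export = {                      # governance side: what SHOULD happen
    "policy_id": "phi-redaction-v2",
    "required_controls": ["pii_scrub", "consent_check"],
}

runtime_config = {                     # GLACIS side: controls wired into the inference path
    name: {"enabled": True, "on_fail": "block"}
    for name in policy_export["required_controls"]
}

evidence_update = {                    # flows back to the governance dashboard
    "policy_id": policy_export["policy_id"],
    "status": "enforced",
    "receipt_digest": "sha256:...",    # placeholder for the co-signed receipt hash
}
```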
Mozilla AI Opportunity
Mozilla AI's new LLM could ship with native GLACIS attestation — every inference automatically produces a verifiable receipt. A differentiator closed models can't match.
OpenAI and Anthropic can't credibly offer "trust-but-verify" — they are the party you'd verify against. Mozilla + GLACIS creates an open trust stack that closed providers cannot replicate.
Traction
All Inbound, Zero Marketing Spend. Design partners found us through LinkedIn content, HLTH networking, and word of mouth. The pull is real.
Team
CEO & Co-Founder
Ex-Stripe (Risk Infra), Ex-AWS (Security). Built fraud systems processing $100B+. Deep platform infrastructure experience.
CMO & Co-Founder
Practicing physician. Clinical AI researcher. Understands healthcare compliance from the inside. Author of AI governance frameworks.
Paul Allen's AI institute: selected from 1,200+ applicants.
$250K credits + infrastructure partnership for edge deployment.
IP Portfolio: 4 patent families, 70+ claims filed with Fenwick & West (Nov 2025). Covers non-egress attestation, self-stabilizing control, insurance risk pricing, and statistical sampling protocols.
The Ask
Mission Alignment
Trustworthy AI as infrastructure, not theater. Open standards over proprietary lock-in.
Mozilla AI Integration
Native attestation could differentiate Mozilla's LLM in the enterprise market.
Credo Partnership
Navrina's introduction validates the "ops layer" thesis. Integration creates mutual value.
Network Effects
Mozilla's convening power could accelerate adoption of open attestation standards.
Let's build the evidence layer for trustworthy AI — together.