Via Navrina Singh | AI2 Incubator

Mozilla Ventures Introduction

The Evidence Layer
for Trustworthy AI

Cryptographic proof that AI controls actually executed. Switzerland in the wire — neutral, platform-agnostic, zero data egress.

Open-Source Sidecar
Commercial Witness Network

The Problem

The evidence gap Navrina identified

"There's a massive gap between AI ops and GRC, and nobody wanted to bridge that chasm."

— Navrina Singh, CEO of Credo AI

GRC Layer (The "What")

Governance platforms define policies, risk frameworks, compliance requirements. They answer what should happen.

Ops Layer (The "How")

Runtime infrastructure enforces controls, runs guardrails, captures telemetry. It determines what actually happens.

⚠️

The Chasm

No cryptographic bridge between them. Governance says "compliant." Runtime says "controls ran." But there's no independently verifiable proof connecting the two.

Market Timing

Regulatory forcing functions are here

AUG 2025

EU AI Act

High-risk AI systems must demonstrate conformity with essential requirements. Documentation alone won't suffice.

FEB 2026

Colorado AI Act

First US state law requiring deployers to implement risk management with documented evidence.

ONGOING

Healthcare AI Litigation

Class actions against ambient scribes (Sharp HealthCare). Vendors can't prove controls executed.

The Pattern: Regulations are moving from "have policies" to "prove enforcement." Point-in-time audits become continuous attestation. Self-reported dashboards become third-party verifiable receipts.

The Solution

Runtime attestation, not more dashboards

GLACIS sits in the inference path, executes your AI controls, and produces cryptographic receipts that third parties can verify — without ever accessing your data.

Execute Controls

Run guardrails, PII scrubbing, consent checks, content filtering — as verifiable operations, not trust-me claims.

Generate Proof

Produce cryptographic attestations bound to each inference. Third-party witness co-signs without seeing content.

Zero Egress

Data never leaves the customer boundary; only commitments (hashes) are exported. Privacy-preserving by architecture.
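
A minimal sketch of the zero-egress commitment, with hypothetical field names (this is not GLACIS's actual receipt schema): the payload is hashed inside the customer boundary, and only the digest plus control results ever leave.

import hashlib
import time

def make_commitment(payload: bytes, control_results: dict) -> dict:
    """Bind a digest of the inference payload to the controls that ran on it."""
    return {
        # The SHA-256 digest is safe to export; the payload itself never leaves.
        "payload_digest": hashlib.sha256(payload).hexdigest(),
        "controls": control_results,   # e.g. {"pii_scrub": "pass"}
        "timestamp": time.time(),
    }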

🧭

Switzerland in the Wire

Platform-agnostic attestation layer. Works with any AI stack, any governance tool, any cloud. Neutral infrastructure for trustworthy AI — aligned with Mozilla's vision of an open, interoperable internet.

Architecture

Edge-native, cryptographic, zero-egress

💻 Your App (AI request) → 🛡 GLACIS Sidecar (execute + attest) → 🤖 LLM Provider (inference) → 👁 Witness Network (co-signs receipts) → 📜 Transparency Log (immutable record)
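
A rough sketch of that flow, with stubbed, illustrative function names (none of this is GLACIS's actual API): the sidecar executes controls, forwards the request, and emits a hash-only receipt.

import hashlib

def run_guardrails(prompt):
    # Stub: PII scrubbing, consent checks, and content filtering run here.
    return prompt, {"pii_scrub": "pass", "consent_check": "pass"}

def call_llm(prompt):
    # Stub: provider-agnostic; any model API sits behind the sidecar.
    return "model output for: " + prompt

def witness_cosign(receipt_digest):
    # Stub: an independent witness signs the digest only, never the content.
    return "witness-sig-over-" + receipt_digest

TRANSPARENCY_LOG = []  # stand-in for an append-only, immutable log

def handle_request(prompt: str) -> str:
    scrubbed, controls = run_guardrails(prompt)       # 1. execute controls
    response = call_llm(scrubbed)                     # 2. inference
    receipt = {                                       # 3. hashes only, no data
        "request_digest": hashlib.sha256(scrubbed.encode()).hexdigest(),
        "response_digest": hashlib.sha256(response.encode()).hexdigest(),
        "controls": controls,
    }
    digest = hashlib.sha256(repr(sorted(receipt.items())).encode()).hexdigest()
    receipt["witness_sig"] = witness_cosign(digest)   # 4. co-sign the digest
    TRANSPARENCY_LOG.append(receipt)                  # 5. immutable record
    return response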

Open-Source Sidecar

MIT-licensed. Runs in your VPC. Full visibility into what's being attested. No vendor lock-in, in keeping with Mozilla's open-source ethos.

Commercial Witness Network

Third-party witnesses co-sign attestations. Creates receipts no single party can forge. Enables parametric insurance triggers.
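
Why no single party can forge a receipt, in a minimal sketch (assumes Ed25519 via Python's cryptography package; key handling is illustrative only): verification demands valid signatures over the same digest from both the operator and the independent witness.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

operator_key = Ed25519PrivateKey.generate()   # runs inside the customer VPC
witness_key = Ed25519PrivateKey.generate()    # held by an independent witness

digest = b"sha256-digest-of-a-receipt"        # only the hash is ever signed
signatures = (operator_key.sign(digest), witness_key.sign(digest))

def verify_receipt(digest: bytes, sigs: tuple) -> bool:
    """A receipt stands only if operator AND witness signed the same digest."""
    try:
        operator_key.public_key().verify(sigs[0], digest)
        witness_key.public_key().verify(sigs[1], digest)
        return True
    except InvalidSignature:
        return False

print(verify_receipt(digest, signatures))     # True only with both signatures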

Partnership

Credo + GLACIS: Complement, not compete

Credo defines what trustworthy AI looks like. GLACIS proves that it happened.

CREDO AI (define trust requirements) → GLACIS (attest at runtime) → COMPLIANCE (evidence dashboard)

Credo's Role

  • AI governance policy definition
  • Risk assessment frameworks
  • Compliance requirement mapping
  • Audit-ready documentation

GLACIS's Role

  • Runtime control execution
  • Cryptographic attestation generation
  • Third-party witnessed receipts
  • Evidence that policies actually ran

The Integration: Credo exports policy requirements → GLACIS configures runtime controls → Attestation receipts flow back to Credo dashboards. Customers get end-to-end proof, not just checkboxes.
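
Neither Credo's export format nor GLACIS's control configuration is public, so the following is purely a hypothetical shape for that handoff: governance requirements map to the runtime controls the sidecar will execute and attest.

# Hypothetical requirement names and control IDs, for illustration only.
POLICY_TO_CONTROLS = {
    "phi_handling":     ["pii_scrub"],
    "consent_required": ["consent_check"],
    "content_safety":   ["toxicity_filter"],
}

def configure_controls(exported_requirements: list[str]) -> list[str]:
    """Translate exported governance requirements into runtime controls."""
    controls: list[str] = []
    for requirement in exported_requirements:
        controls.extend(POLICY_TO_CONTROLS.get(requirement, []))
    return controls

# configure_controls(["phi_handling", "consent_required"])
# -> ["pii_scrub", "consent_check"]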

Mozilla AI Opportunity

Native attestation as competitive moat

The Idea

Mozilla AI's new LLM could ship with native GLACIS attestation — every inference automatically produces a verifiable receipt. A differentiator closed models can't match.

For Mozilla AI

  • Trustworthy AI as default, not add-on
  • Enterprise-ready from day one
  • Regulatory compliance built in
  • Aligns with Mozilla's mission

For Enterprise Buyers

  • Open-source model with proof layer
  • No vendor lock-in concerns
  • Audit trail without sending data out
  • Defense against AI liability claims

🔥

Mozilla's Unfair Advantage

OpenAI and Anthropic can't credibly offer "trust-but-verify" — they are the party you'd verify against. Mozilla + GLACIS creates an open trust stack that closed providers cannot replicate.

Traction

Design partners & validation

All Inbound, Zero Marketing Spend. Design partners found us through LinkedIn content, HLTH networking, and word of mouth. The pull is real.

  • 4 Design Partners
  • 70+ Patent Claims Filed
  • 3+ Health System Conversations
  • 1 GRC Integration

Design Partners (Committed)

  • PraxisPro: LOI Signed
  • nVoq: Pilot → $25K ARR
  • Prompt Opinion Tech: Kickoff Jan
  • deepc: Committed

Active Conversations

  • Mayo Clinic — Platform CIO
  • Kaiser Permanente — AI Governance
  • MultiCare — Health System

Team

Built for this moment

Joe Braidwood

CEO & Co-Founder

Ex-Stripe (Risk Infra), Ex-AWS (Security). Built fraud systems processing $100B+. Deep platform infrastructure experience.

Dr. Jennifer Shannon

CMO & Co-Founder

Practicing physician. Clinical AI researcher. Understands healthcare compliance from the inside. Author of AI governance frameworks.

AI2 Incubator

Paul Allen's AI institute. Selected from 1,200+ applicants.

Cloudflare Launchpad

$250K credits + infrastructure partnership for edge deployment.

IP Portfolio: 4 patent families, 70+ claims filed with Fenwick & West (Nov 2025). Covers non-egress attestation, self-stabilizing control, insurance risk pricing, and statistical sampling protocols.

The Ask

$2M Pre-Seed

18-Month Milestones

  • 10 paying customers
  • $500K–$1M ARR
  • Evidence format accepted by 2–3 health systems
  • GLACIS Attestation Profile 1.0 (open spec)
  • Mozilla AI integration pilot

Use of Funds

  • 60% Engineering: Evidence Pack, verifier UX, integrations
  • 30% GTM: Convert design partners, health system relationships
  • 10% Operations: SOC 2, legal, infrastructure

Why Mozilla Ventures

Mission Alignment

Trustworthy AI as infrastructure, not theater. Open standards over proprietary lock-in.

Mozilla AI Integration

Native attestation could differentiate Mozilla's LLM in the enterprise market.

Credo Partnership

Navrina's introduction validates the "ops layer" thesis. Integration creates mutual value.

Network Effects

Mozilla's convening power could accelerate adoption of open attestation standards.

Let's build the evidence layer for trustworthy AI — together.