AI2 Incubator

Mozilla Ventures Introduction

The Evidence Layer
for Trustworthy AI

Cryptographic proof that AI controls actually executed. Switzerland in the wire — neutral, platform-agnostic, zero data egress.

Open-Source Sidecar
Commercial Witness Network

The Problem

The AI evidence gap

Every AI governance platform can tell you what should happen. None can prove what actually happened.

GRC Layer (The "What")

Governance platforms define policies, risk frameworks, compliance requirements. They answer what should happen.

Ops Layer (The "How")

Runtime infrastructure enforces controls, runs guardrails, captures telemetry. It determines what actually happens.

The Chasm

No cryptographic bridge between them. Governance says "compliant." Runtime says "controls ran." But there's no independently verifiable proof connecting the two.

Market Timing

Regulatory forcing functions are here

AUG 2025

EU AI Act

High-risk AI systems must demonstrate conformity with essential requirements. Documentation alone won't suffice.

FEB 2026

Colorado AI Act

First US state law requiring deployers to implement risk management with documented evidence.

ONGOING

Healthcare AI Litigation

Class actions against ambient scribes (Sharp HealthCare). Vendors can't prove controls executed.

The Pattern: Regulations are moving from "have policies" to "prove enforcement." Point-in-time audits become continuous attestation. Self-reported dashboards become third-party verifiable receipts.

The Solution

Runtime attestation, not more dashboards

GLACIS sits in the inference path, executes your AI controls, and produces cryptographic receipts that third parties can verify — without ever accessing your data.

Execute Controls

Run guardrails, PII scrubbing, consent checks, content filtering — as verifiable operations, not trust-me claims.

Generate Proof

Produce cryptographic attestations bound to each inference. Third-party witness co-signs without seeing content.

Zero Egress

Data never leaves customer boundary. Only commitments (hashes) export. Privacy-preserving by architecture.
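The zero-egress claim can be sketched as a salted hash commitment: only the digest crosses the customer boundary, while the salt and payload stay inside, so the customer alone can later open the commitment for an auditor. Function names and the salting scheme below are illustrative, not GLACIS's published design.

```python
import hashlib
import os

def make_commitment(payload: bytes) -> dict:
    """Commit to an inference payload without exporting it.

    A salted SHA-256 digest binds a receipt to the exact bytes
    processed while revealing nothing about their content.
    """
    salt = os.urandom(16)  # blinds the hash against dictionary attacks
    digest = hashlib.sha256(salt + payload).hexdigest()
    # Only `commitment` is exported; the salt stays in the customer VPC.
    return {"commitment": digest, "salt": salt.hex()}

def verify_opening(payload: bytes, salt_hex: str, commitment: str) -> bool:
    """Auditor check: does this payload match the exported commitment?"""
    digest = hashlib.sha256(bytes.fromhex(salt_hex) + payload).hexdigest()
    return digest == commitment

receipt = make_commitment(b'{"control": "pii_scrub", "result": "pass"}')
assert verify_opening(b'{"control": "pii_scrub", "result": "pass"}',
                      receipt["salt"], receipt["commitment"])
```

The salt matters: guardrail results are low-entropy, so an unsalted hash could be reversed by brute force, defeating the privacy guarantee.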

Switzerland in the Wire

Platform-agnostic attestation layer. Works with any AI stack, any governance tool, any cloud. Neutral infrastructure for trustworthy AI — aligned with Mozilla's vision of an open, interoperable internet.

Architecture

Edge-native, cryptographic, zero-egress

Your App
AI Request
GLACIS Sidecar
Execute + Attest
LLM Provider
Inference
Witness Network
Co-sign Receipts
Transparency Log
Immutable Record
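The co-signing step can be illustrated with a minimal sketch. HMAC stands in for real digital signatures here (a production witness network would use asymmetric keys such as Ed25519 so verifiers need only public keys); key names and the receipt layout are assumptions for illustration.

```python
import hashlib
import hmac

# Stand-in symmetric demo keys; a real deployment would use
# asymmetric signatures so no verifier holds signing material.
SIDECAR_KEY = b"sidecar-demo-key"
WITNESS_KEY = b"witness-demo-key"

def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def attest(commitment: str) -> dict:
    """Sidecar attests to a commitment; witness co-signs blind.

    The witness sees only the commitment and the sidecar's
    signature, never the underlying data.
    """
    sidecar_sig = sign(SIDECAR_KEY, commitment.encode())
    witness_sig = sign(WITNESS_KEY, (commitment + sidecar_sig).encode())
    return {"commitment": commitment,
            "sidecar_sig": sidecar_sig,
            "witness_sig": witness_sig}

def verify(receipt: dict) -> bool:
    """Valid only if both parties signed; neither alone can forge it."""
    ok_sidecar = hmac.compare_digest(
        receipt["sidecar_sig"],
        sign(SIDECAR_KEY, receipt["commitment"].encode()))
    ok_witness = hmac.compare_digest(
        receipt["witness_sig"],
        sign(WITNESS_KEY, (receipt["commitment"] + receipt["sidecar_sig"]).encode()))
    return ok_sidecar and ok_witness
```

Because the witness signs over the sidecar's signature, tampering with either signature, or with the commitment itself, invalidates the receipt.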

Open-Source Sidecar

MIT-licensed. Runs in your VPC. Full visibility into what's being attested. No vendor lock-in, consistent with Mozilla's open-source principles.

Commercial Witness Network

Third-party witnesses co-sign attestations. Creates receipts no single party can forge. Enables parametric insurance triggers.

Integration

Completing the AI governance stack

GRC platforms define what trustworthy AI looks like. GLACIS proves that it happened. Together, they close the evidence loop.

GRC LAYER
Define Trust Requirements
GLACIS
Attest at Runtime
EVIDENCE
Verifiable Proof

GRC Platforms (Credo AI, etc.)

  • AI governance policy definition
  • Risk assessment frameworks
  • Compliance requirement mapping
  • Audit-ready documentation

GLACIS (Evidence Layer)

  • Runtime control execution
  • Cryptographic attestation generation
  • Third-party witnessed receipts
  • Evidence that policies actually ran

The Integration: GRC exports policy requirements → GLACIS configures runtime controls → Attestation receipts flow back to dashboards. Customers get end-to-end proof, not just checkboxes.
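The export-and-configure handshake could be as simple as a mapping table, sketched below with hypothetical policy IDs and control names; no such schema is published, so every identifier here is an assumption.

```python
# Hypothetical mapping from exported GRC requirement IDs to
# sidecar runtime controls; illustrative, not a GLACIS schema.
POLICY_TO_CONTROL = {
    "PHI-01: no PHI in prompts":
        {"control": "pii_scrub", "mode": "redact"},
    "CON-02: verified patient consent":
        {"control": "consent_check", "mode": "block_on_fail"},
    "SAF-03: clinical guardrails":
        {"control": "guardrail_eval", "mode": "log_and_flag"},
}

def configure_controls(requirements: list[str]) -> list[dict]:
    """Translate exported GRC policy IDs into a runtime control plan.

    Unknown requirements are skipped here; a real integration
    would surface them as a coverage gap on the GRC dashboard.
    """
    return [POLICY_TO_CONTROL[r] for r in requirements
            if r in POLICY_TO_CONTROL]
```

After each inference, the receipts for these controls would flow back tagged with the originating policy ID, closing the loop between requirement and evidence.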

Mozilla AI Opportunity

Native attestation as competitive moat

The Idea

Mozilla AI's new LLM could ship with native GLACIS attestation — every inference automatically produces a verifiable receipt. A differentiator closed models can't match.

For Mozilla AI

  • Trustworthy AI as default, not add-on
  • Enterprise-ready from day one
  • Regulatory compliance built in
  • Aligns with Mozilla's mission

For Enterprise Buyers

  • Open-source model with proof layer
  • No vendor lock-in concerns
  • Audit trail without sending data out
  • Defense against AI liability claims

Mozilla's Unfair Advantage

OpenAI and Anthropic can't credibly offer "trust-but-verify" — they are the party you'd verify against. Mozilla + GLACIS creates an open trust stack that closed providers cannot replicate.

Traction

Strong pull. Just getting started.

All Inbound, Zero Marketing Spend. Design partners found us through LinkedIn content, HLTH networking, and word of mouth. The pull is real.

Design Partner
nVoq · Ambient AI for home care

40k+ visits/day

Colorado-based. Facing Colorado AI Act deadline.
Consent attestation • PHI proof • Guardrail evidence

Design Partner
Praxis Pro · Pharma sales training AI

Tenant-bounded AI with trade secrets, pre-release drug data.
Query attestation • Tenant isolation • Trade secret fencing

Also in pipeline

Prompt Opinion
Mayo Platform
deepc
4 Design Partners
35+ Patent Claims
3+ Health Systems

Team

FDA Authorized. Enterprise Deployed. We've lived this problem.

Joe Braidwood

Co-Founder & CEO

SwiftKey → 1 in 4 smartphone users

Founding exec, $250M Microsoft exit. Chief Strategy Officer at Vektor Medical, where he secured reimbursement for an AI device. Cambridge Law.

Dr. Jennifer Shannon

Co-Founder & CMO

Cognoa → First FDA De Novo for AI diagnostics

Medical Director at Cognoa. Navigated FDA authorization for AI that diagnoses autism in children.

Rohit Tatachar

CTO

Microsoft Azure → $2B product line

Engineer turned product leader. Led Azure's enterprise platform business. Personal relationships with CEOs of Azure's top 50 customers.

Advisors

Selvan Senthivel · Chief Technologist, GE Healthcare
Nakis Urfi, JD, MPH · CCO, Cantex
Dávid Márton · Harvard AI Research

AI2 Incubator

Paul Allen's AI institute

Cloudflare Launchpad

$250K infrastructure credits

35+ Patent Claims

Filed Nov 2025

The Ask

$2M Pre-Seed

18-Month Milestones

  • 10 paying customers
  • $500K–$1M ARR
  • Evidence format accepted by 2–3 health systems
  • GLACIS Attestation Profile 1.0 (open spec)
  • Mozilla AI integration pilot

Use of Funds

  • 60% Engineering: Evidence Pack, verifier UX, integrations
  • 30% GTM: Convert design partners, health system relationships
  • 10% Operations: SOC 2, legal, infrastructure

Why Mozilla Ventures

Mission Alignment

Trustworthy AI as infrastructure, not theater. Open standards over proprietary lock-in.

Mozilla AI Integration

Native attestation could differentiate Mozilla's LLM in the enterprise market.

Portfolio Synergies

Complements existing AI governance investments. Evidence layer + policy layer = complete stack.

Network Effects

Mozilla's convening power could accelerate adoption of open attestation standards.

Let's build the evidence layer for trustworthy AI — together.