Confidential January 2026 Diligence Q&A

SemperVirens Diligence
Responses

Prepared for: Colin Tobias, Amanda Paolino | SemperVirens

Prepared by: Joe Braidwood, CEO


Executive Context

These responses incorporate learnings from JPM Healthcare Conference (~20 validated customer conversations), active design partner engagements, competitive intelligence from our incoming CPO (ex-Azure Principal PM), and direct partnership discussions with Credo AI leadership.

Q1 Core Problem

What exact question does GLACIS answer that no existing tool answers well today? Is it solely the attestation problem?

The question we answer:

"Can you prove your AI controls actually ran—not that they exist, but that they executed?"

This isn't just an "attestation problem"—it's the gap between governance theater and governance reality.

| Category | Players | What They Do | What They Can't Do |
|---|---|---|---|
| Guardrails | NeMo, Guardrails AI | Execute controls | Prove they executed |
| GRC Platforms | Credo AI, Vanta | Document policies | Prove they ran |
| Observability | Datadog, Arthur AI | Record what happened | Prevent modification |

GLACIS Uniquely Provides

Cryptographic receipts that are:

  1. Tamper-evident — Any modification is mathematically detectable
  2. Third-party verifiable — Auditors verify without trusting GLACIS
  3. Zero-egress — Sensitive data never leaves customer infrastructure
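To make the tamper-evidence claim concrete, here is a minimal sketch of a hash-chained receipt log. This is an illustration, not the actual GLACIS receipt format: the `make_receipt` and `verify_chain` helpers are hypothetical. Each receipt commits to a SHA-256 hash of the control output (zero-egress: only the hash is recorded) and to the previous receipt's hash, so modifying any entry breaks every subsequent link and is mathematically detectable by any third party holding the chain.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first receipt in a chain

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_receipt(control_output: dict, prev_receipt_hash: str) -> dict:
    """Hypothetical receipt: commits to a payload hash and the prior receipt."""
    # Zero-egress: only the hash of the sensitive payload is recorded,
    # never the payload itself.
    payload_hash = sha256_hex(json.dumps(control_output, sort_keys=True).encode())
    body = {"payload_hash": payload_hash, "prev": prev_receipt_hash}
    body["receipt_hash"] = sha256_hex(json.dumps(
        {"payload_hash": payload_hash, "prev": prev_receipt_hash},
        sort_keys=True).encode())
    return body

def verify_chain(receipts: list) -> bool:
    """Third-party check: recompute every hash; any edit breaks the chain."""
    prev = GENESIS
    for r in receipts:
        expected = sha256_hex(json.dumps(
            {"payload_hash": r["payload_hash"], "prev": r["prev"]},
            sort_keys=True).encode())
        if r["prev"] != prev or r["receipt_hash"] != expected:
            return False
        prev = r["receipt_hash"]
    return True
```

A production design would add asymmetric signatures and a public transparency log on top of the chain; the sketch shows only the tamper-evidence mechanism.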

The insurance extension: Attestation receipts become parametric triggers for AI liability coverage. Your LP bench (UPMC, Cigna, Guardian, MetLife) would be the first carriers to price risk based on verifiable AI governance, not self-reported compliance.

Q2 First Use Case

What is the first healthcare use case where GLACIS is clearly non-optional?

Platform operators deploying multiple third-party AI models.

The clearest example is DeepC, whom we met at JPM. They operate an AI model marketplace for hospitals—70+ vendor models across radiology, diagnostics, and administrative functions.

"Zero post-deployment visibility into vendor AI behavior."
— DeepC CEO, JPM Conference

| Driver | Impact |
|---|---|
| EU AI Act | Creates platform liability for AI they don't control |
| Hospital customers | Demand transparency they can't deliver |
| Auditors | Ask for proof that vendor self-reporting can't satisfy |
| Single incident | Exposes platform to liability across all customers |

The broader pattern: Any healthcare organization deploying AI from multiple vendors faces the same governance gap. Platform operators feel this most acutely because liability concentrates there.

Q3 Market Size

Do you believe the digital health AI vendor market alone could be a standalone business?

The digital health AI vendor market alone could reach $50-100M ARR at maturity, but we see it as the wedge, not the ceiling.

Why Vendors Are a Good Start

  • Clear pain (blocked by hospital security reviews)
  • Near-term revenue ($25K-$100K ACV range)
  • Validates technology with real PHI handling

Why It's Not the Final Market

  • 100 vendors × $50K = $5M ARR
  • 10 platforms × $500K = $5M ARR (better unit economics)
  • Insurance carriers = $100M+ ARR potential

Our market progression: Vendors → Platforms → Insurance Carriers. Each stage expands TAM by approximately 10x.

Q4 Champions & Objections

Who inside a health system champions GLACIS? What objections have been hardest to overcome?

Champions

| Role | Why They Champion |
|---|---|
| CMIO | Owns clinical AI safety, accountable for patient outcomes |
| VP of AI/ML | Blocked by governance, wants to ship faster |
| Chief Compliance Officer | Facing board AI questions without data to answer |

Hardest Objection

"Do you have HITRUST?"

Mayo Clinic wants to pilot but is blocked on this certification. It's a $50-150K investment with a 6-12 month timeline. We're planning certification post-CPO signing, which unlocks the budget.

Tactical workaround: Emphasize zero-egress architecture—GLACIS never sees PHI, only SHA-256 hashes. Some health systems accept this as a lower-risk pilot path.

Q5 Proof Consumers

Where does the proof get consumed? Who is the primary "user"?

| Consumer | Use Case | Timing | Frequency |
|---|---|---|---|
| Internal compliance | Continuous monitoring, board reporting | Ongoing | Daily |
| Customer security review | Vendor due diligence for enterprise sales | Deal-driven | Per deal |
| External auditors | HIPAA, SOC 2, ISO 42001 evidence | Annual | Periodic |
| Litigators | Discovery, duty of care defense | Event-driven | Rare but high-stakes |

Near-Term Driver

Customer security review. Our design partners are blocked by their customers' security teams asking for governance proof.

Long-Term High-Value

Insurance carriers. Attestation receipts become parametric triggers for coverage—directly relevant to your LP bench.

Q6 Integration Depth

Is value realized with "wrap OpenAI calls" only, or do they need deep hooks?

Both, with a clear progression:

Phase 1: Wrap API Calls (Now)

pip install glacis  # 3 lines of code
  • Wraps OpenAI, Anthropic, Gemini calls
  • Generates attestation receipts for inference
  • Value: Prove guardrails ran without code changes
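The "wrap API calls" idea can be sketched in a few lines. This is a self-contained illustration of what such a wrapper does conceptually, not the actual `glacis` SDK API: the `attest` decorator and the stand-in client are hypothetical. The wrapper intercepts each inference call and emits a hash-only receipt, so no prompt or response text leaves the process.

```python
import hashlib
import time
from typing import Callable

def attest(call: Callable[[str], str]) -> Callable[[str], str]:
    """Hypothetical wrapper: emit an attestation receipt per inference.

    Receipts carry only SHA-256 hashes of the prompt and response,
    never the text itself (zero-egress)."""
    def wrapped(prompt: str) -> str:
        response = call(prompt)
        wrapped.receipts.append({
            "ts": time.time(),
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        })
        return response
    wrapped.receipts = []
    return wrapped

# Stand-in for an OpenAI/Anthropic/Gemini client call:
model = attest(lambda prompt: f"echo: {prompt}")
model("summarize this discharge note")
print(len(model.receipts))  # one receipt per inference
```

The point of the sketch: attestation attaches at the call boundary, so existing application code keeps its shape and only the client construction changes.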

Phase 2: App-Level Controls (Q2 2026)

  • PHI detection and redaction in the hot path
  • Consent workflow attestation
  • Value: Prove controls executed, not just configured

Phase 3: Agentic Enforcement (Q3+ 2026)

  • SOP enforcement in agentic workflows
  • Multi-step workflow attestation
  • Value: Governance for AI that acts autonomously

Q7 Open Standards

If AI attestation receipts become an open standard, does GLACIS become commodity or win as the network anchor?

Our thesis: We win as the network/trust anchor.

The analogy: Certificate Transparency (RFC 6962) for HTTPS.

| Element | HTTPS World | GLACIS World |
|---|---|---|
| Spec | Open (RFC 6962) | Open (GLACIS Attestation Profile 1.0) |
| Log operator | Google (canonical) | GLACIS (canonical) |
| Network effect | Browsers trust Google's log | Verifiers trust GLACIS log |
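The mechanism behind the Certificate Transparency analogy is the Merkle inclusion proof: a verifier can confirm a receipt is in the log by recomputing a handful of hashes, without trusting the log operator. Below is a minimal sketch in the RFC 6962 style (0x00/0x01 leaf/node prefixes); the helper names are ours, and real CT logs add signed tree heads and consistency proofs on top.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def _leaf_level(leaves):
    return [h(b"\x00" + leaf) for leaf in leaves]  # RFC 6962 leaf prefix

def merkle_root(leaves):
    level = _leaf_level(leaves)
    while len(level) > 1:
        if len(level) % 2:             # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    level, proof = _leaf_level(leaves), []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-left)
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Auditor-side check: O(log n) hashes, no trust in the log operator."""
    node = h(b"\x00" + leaf)
    for sibling, is_left in proof:
        node = h(b"\x01" + sibling + node) if is_left else h(b"\x01" + node + sibling)
    return node == root
```

The network-effect claim follows from this shape: the log operator publishes one root, and any party can independently verify any receipt against it.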

What's Open vs. What Stays Proprietary

Open (Ecosystem Adoption)

  • Receipt format specification

Proprietary (Network Effects)

  • Canonical log operation
  • Enforcement controls
  • Insurance signal derivation

Q8 Feature Risk

How do you avoid becoming a feature of other platforms?

Three Structural Defenses

1. Patent Portfolio

  • 35+ claims filed (November 2025, Fenwick & West)
  • Co-epoch binding for cryptographic attestation
  • Statistical Safety Signal Protocol (S3P)
  • Insurance parametric triggers from attestation data

2. Transparency Log Network Effects

  • More publishers = more valuable aggregate signal
  • Cross-customer threat intelligence
  • Insurance carriers value network data, not single-customer receipts

3. Position in Value Chain

Credo AI does "what policies should exist" (governance definition).
We do "prove policies executed" (runtime attestation).

The dynamic: We don't want to be a feature of Credo AI. We want to be the attestation layer that Credo AI requires to make their governance claims credible.

Q9 Competitive Landscape

Where do you fit in the competitive landscape?

We occupy a new layer that doesn't exist in current competitive maps:

AI Governance Stack (top to bottom)

| Layer | Players | Function |
|---|---|---|
| Policy | Credo AI, ServiceNow, Archer | "What should happen": define policies, risk frameworks |
| Safety | CalypsoAI, NeMo Guardrails | "Prevent bad things": model testing, prompt filtering |
| Observability | Datadog, Arthur AI, Arize | "Record what happened": monitoring, drift detection |
| Evidence | GLACIS (unique) | "Prove what happened": cryptographic attestation |
| Infrastructure | AWS, Azure, GCP AI services | "Run the models": compute, inference endpoints |

The evidence layer doesn't exist today. Competitors record (logs) but don't prove (cryptographic receipts); they execute (guardrails) but can't produce third-party-verifiable evidence that they did.

Q10 Top Competitive Threat

Which competitor worries you most if they executed perfectly over 24 months?

Anthropic.

Not CalypsoAI. Not Credo AI. Not Datadog.

| Why Anthropic Worries Us | Why They Might Not Execute |
|---|---|
| Constitutional AI already creates internal governance proofs | Conflict of interest (auditing own models) |
| Actively pursuing healthcare and regulated industries | Multi-model world needs vendor-agnostic solution |
| "Claude Enterprise with built-in attestation" is compelling bundle | Model company, not infrastructure company |
| $8B+ funding means they can build anything | |

Second Most Worrying

Datadog pivoting from LLM Observability into LLM Attestation.

Least Worrying (Despite Size)

Hyperscalers. They're locked into single-cloud solutions and can't be the neutral third-party verifier enterprises need.

Appendix A: Supporting Evidence

Design Partners

| Partner | Status | Evidence Category |
|---|---|---|
| nVoq | Compliance team in diligence | PHI/Data Loss Prevention |
| DeepC | Verbal commit (JPM) | Third-Party AI Governance |
| PraxisPro | LOI signed | Real-Time Decision Support |
| Mayo Clinic | Wants to pilot (blocked on HITRUST) | Regulatory Evidence |

Patent Portfolio

  • Family A: Integrated Non-Egress Attestation (co-epoch binding)
  • Family B: Self-Stabilizing Control (verified receipts only)
  • Family C: Insurance Risk Pricing (parametric triggers)
  • Family D: Statistical Safety Signal Protocol (S3P)

JPM Conference Validation

  • ~20 meetings in 2.5 days
  • 100% validation that regulatory compliance is a concrete blocker
  • Key quote (DeepC): "Zero post-deployment visibility into vendor AI behavior"
  • Key quote (FDA consultant): "Companies blocked from approval without proof"