New: Voluntary AI Safety Just Died — Here’s What Replaces It  →

Every AI vendor says their systems are safe.

Prove it.

Your policies say what should happen. GLACIS proves what actually did — cryptographic evidence of every AI decision, third-party witnessed, zero data egress. The independent proof your auditors, buyers, and regulators will accept.

Works alongside Vanta, Drata & existing GRC · See how →
SeaHealthTech panel: From Promises to Proof in Healthcare, AI House, Seattle
Watch the panel (58 min)
Featured · February 2026 · Seattle

“Your AI Needs an Alibi”

Washington State’s Chief Privacy Officer told a room full of healthcare builders what’s coming. A CMIO overseeing 12 hospitals asked us to build exactly what we’re building. A Medicare payer described our product without knowing it exists.

“We have to get certification from vendors that they’re using an AI governance program like NIST AI Risk Management Framework… In healthcare, you’re almost always walking into the high-risk space.”

KATY RUCKLE — Chief Privacy Officer, Washington State

State Chief Privacy Officer · CMIO, MultiCare · Head of Emerging Tech, SCAN · Health Tech Attorney
Read the full recap

The problem

You’re grading your own homework

Your AI vendor says their controls are working. Your documentation says the right policies are in place. But when an auditor, regulator, or plaintiff’s attorney asks for proof — actual evidence that controls executed on a specific interaction — nobody has it.

Documentation Isn’t Proof

Policies and procedures describe what should happen. They don’t verify what did happen when the model ran.

Observability Isn’t Evidence

Dashboards and logs are useful for debugging. But self-maintained records lack the independence needed for regulatory defense and liability protection.

Independent Evidence That Controls Ran

GLACIS enforces your governance policy on every AI interaction and generates third-party witnessed evidence of every decision. Like a flight data recorder — neither the pilot nor the airline controls it.

This is already happening

When Sharp HealthCare faced a class action over their AI scribe in November 2025, the core question was evidence: could they demonstrate what the AI actually did? Meanwhile, Colorado’s AI Act creates a safe harbor for organizations that can prove reasonable care — but only with evidence of control execution, not just policies on paper.

Better together

GRC Platforms + Runtime Evidence

GRC platforms like Vanta and Drata prove you have AI policies. GLACIS proves you followed them. Together, your compliance story is complete — and it fits in your existing budget.

| Capability | GRC Platforms | GLACIS |
| --- | --- | --- |
| Policy documentation | Documents what you say you do | Also documents policies |
| SOC 2 / ISO 27001 | Core strength | Maps to these + AI frameworks |
| AI-specific frameworks (NIST AI RMF, ISO 42001) | Limited or manual | Native mapping, automated evidence |
| Runtime evidence | Not in scope | Cryptographic proof per inference |
| Real-time monitoring | Point-in-time audits | Continuous attestation |
| Third-party witnessed proof | Internal audit logs | Independent witness network |
| Zero-egress architecture | Not applicable | Only hashes cross trust boundary |
| Colorado safe harbor activation | Policies necessary, not sufficient | Evidence of NIST AI RMF adherence activates safe harbor |
| Survives cross-examination | Policy documentation | Tamper-proof cryptographic evidence |

How it works

Every AI Decision, Proved

GLACIS evaluates every AI request against your governance policy — permit, deny, escalate, or flag. Every decision generates independently witnessed evidence. Your data never leaves your environment. Only cryptographic proof crosses the trust boundary.

You define the policy
GLACIS enforces it
Every decision is witnessed
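As a purely illustrative sketch, the permit / deny / escalate / flag flow described above might look like the following in plain Python. The policy fields, thresholds, and function names here are invented for the example; they are not the GLACIS SDK or policy language:

```python
import hashlib
import json
from dataclasses import dataclass

# Illustrative only: Policy, evaluate, and receipt are invented names
# for this sketch, not the GLACIS SDK API.

@dataclass
class Policy:
    require_phi_scrub: bool = True   # deny anything with unscrubbed PHI
    min_safety_score: float = 0.9    # below this, a human reviews it

def evaluate(policy: Policy, request: dict) -> str:
    """Return one of: 'permit', 'deny', 'escalate', 'flag'."""
    if policy.require_phi_scrub and not request.get("phi_scrubbed"):
        return "deny"
    score = request.get("safety_score", 0.0)
    if score < policy.min_safety_score:
        # Borderline outputs go to a reviewer; clear failures are blocked
        return "escalate" if score >= 0.7 else "deny"
    if request.get("model_version") != request.get("approved_version"):
        return "flag"  # permit, but record the policy deviation
    return "permit"

def receipt(decision: str, request: dict) -> str:
    """Only this hash needs to cross the trust boundary, not the data."""
    payload = json.dumps({"decision": decision, "request": request},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()
```

The key property the sketch shows: the decision is computed locally against a declarative policy, and the only artifact that leaves the environment is a fixed-length digest.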

See it in action

Watch Governance Enforced in Real Time

[Interactive demo: a Clinical Decision Support request enters the AI pipeline and passes through PHI Detection, Safety Check, and Model v3.2 before a Policy Decision is reached; a third-party witness attests the decision and appends it to the evidence chain (e.g., Chain Entry #47,832).]

Integration

Add proof in 5 lines of code

Install the Python SDK, wrap your AI calls, and every prompt, response, tool call, and policy decision gets sealed with a tamper-proof receipt — witnessed by our live attestation service.

Free Governance Assessment
pip install glacis · SDK available now
1. Define your posture: declarative policies
2. We enforce & witness: every decision attested
3. You get evidence: third-party verified
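The "tamper-proof receipt" claim rests on a standard construction: a hash chain, where every entry commits to the hash of the entry before it, so editing any past record invalidates everything after it. A minimal stdlib sketch of that technique (not the GLACIS wire format, whose field names are unknown here):

```python
import hashlib
import json

# Textbook hash chain, stdlib only. Field names ("prev", "event",
# "hash") are illustrative, not the actual GLACIS receipt format.

GENESIS = "0" * 64

def seal(prev_hash: str, event: dict) -> dict:
    """Append one entry; its hash commits to the previous entry's hash."""
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev": prev_hash, "event": event, "hash": entry_hash}

def verify(chain: list) -> bool:
    """Recompute every link; any edit to any past entry breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain, prev = [], GENESIS
for decision in ("permit", "permit", "escalate"):
    entry = seal(prev, {"decision": decision})
    chain.append(entry)
    prev = entry["hash"]
```

A chain you maintain yourself only proves internal consistency; having an independent witness counter-sign each `hash` is what turns it into the third-party evidence described above.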

Zero Egress: data stays local

Inline Enforcement: shadow to enforce

Tamper-Proof: crypto signatures

~5ms: zero slowdown

“We would never accept this for any other critical system. Financial systems have audit trails. Medical devices have mandated records. Aircraft have flight recorders. AI systems need the same level of verifiable evidence.”

Joe Braidwood
CEO, GLACIS · Previously SwiftKey (acquired by Microsoft)


FAQ

Common questions

We already have SOC 2 / are working toward HITRUST

Great — those cover IT controls. AI-specific assurance addresses model behavior, decision audit trails, and content safety risks that SOC 2 and HITRUST don’t cover. They’re complementary.

How is this different from our existing documentation?

Documentation describes what should happen. GLACIS provides cryptographic proof of what actually happened — third-party witnessed evidence that your controls executed, not just that policies exist. Beyond evidence, GLACIS also enforces your governance policies at runtime — it doesn’t just prove what happened, it ensures the right thing happens in the first place.

What industries do you work with?

We work with AI teams in regulated industries, including healthcare, financial services, insurance, and other enterprise sectors. The common thread is needing to prove AI controls work, not just that policies exist.

What if we’re not ready for a full attestation program?

That’s fine. We offer focused engagements for teams who need to unblock deals now. Start with what you need, expand later.

Does GLACIS just monitor, or does it actually enforce?

Both. You define your governance policies declaratively — which controls to enforce, at what confidence thresholds, with what failure modes. The GLACIS arbiter evaluates every AI request against your active policy and makes real-time permit/deny decisions. Every enforcement decision is independently attested. You start in shadow mode (observe only) and transition to enforcement when ready. The transition itself is attested.
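The shadow-to-enforce distinction can be made concrete in a few lines of Python. The function and field names below are invented for the illustration, not the GLACIS SDK:

```python
# Illustrative sketch of shadow vs. enforce mode. Every decision is
# logged in both modes; only enforce mode actually blocks requests.

def apply_decision(mode: str, decision: str, request_id: str, log: list) -> bool:
    """Return True if the request is allowed through.

    Shadow mode records every decision but blocks nothing; enforce
    mode turns a 'deny' into an actual block."""
    log.append({"request": request_id, "decision": decision, "mode": mode})
    if mode == "shadow":
        return True              # observe only
    return decision != "deny"    # enforce

log = []
shadow_allowed = apply_decision("shadow", "deny", "req-1", log)    # recorded, not blocked
enforce_allowed = apply_decision("enforce", "deny", "req-2", log)  # recorded and blocked
```

Because both modes emit the same log entries, the evidence trail is identical before and after the cutover, which is what makes an attested transition meaningful.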