
Healthcare · Finance · Insurance · AI Labs

The Execution & Evidence Layer for AI

GLACIS enforces your AI governance policies at runtime and creates cryptographic evidence of every decision. Define controls. Deploy in minutes. Prove compliance to any third party — with zero data egress.

SDK available · pip install glacis
Witness & compliance services in beta
SeaHealthTech panel: From Promises to Proof in Healthcare, AI House, Seattle
Watch Panel 58 min
Featured February 2026 · Seattle

“Your AI Needs an Alibi”

Washington State’s Chief Privacy Officer told a room full of healthcare builders what’s coming. A CMIO overseeing 12 hospitals asked us to build exactly what we’re building. A Medicare payer described our product without knowing it exists.

“We have to get certification from vendors that they’re using an AI governance program like NIST AI Risk Management Framework… In healthcare, you’re almost always walking into the high-risk space.”

KATY RUCKLE — Chief Privacy Officer, Washington State

State Chief Privacy Officer · CMIO, MultiCare · Head of Emerging Tech, SCAN · Health Tech Attorney
Read the full recap

The challenge

The Proof Gap in AI

Today, there’s no standard way to enforce AI governance controls inline at the point of inference AND prove they executed. Policies describe intent. Dashboards show configuration. But neither enforces at runtime with independent evidence.

Documentation Isn’t Proof

Policies and procedures describe what should happen. They don’t verify what did happen when the model ran.

Observability Isn’t Evidence

Dashboards and logs are useful for debugging. But self-maintained records lack the independence needed for compliance and liability.

Inline Enforcement + Independent Attestation

GLACIS sits in the request path, evaluates every interaction against your governance policy, and generates witnessed evidence of every decision. Enforcement and evidence, inseparable.

Why this matters now

When Sharp HealthCare faced a class action over their AI scribe in November 2025, the core issue was evidence: they needed to demonstrate what the AI actually did. As AI systems take on more responsibility in regulated industries, the ability to prove control execution — not just assert it — becomes essential.

How it works

Zero-Egress Enforcement & Evidence

The GLACIS arbiter sits inline in your AI request path. Every request is evaluated against your active governance policy — permit, deny, escalate, or flag. Every enforcement decision generates a cryptographic receipt, hashed locally and anchored to an independent witness network. Sensitive payloads never leave your environment.

You define the policy
GLACIS enforces it
Every decision is witnessed
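The receipt-and-anchor flow above can be sketched in a few lines. Everything here is illustrative: the record fields, the chaining scheme, and the genesis value are assumptions, not the actual GLACIS implementation. The point is that only a hash of each decision ever needs to leave your environment.

```python
import hashlib
import json

def receipt(decision: dict, prev_hash: str) -> dict:
    """Hash a decision record locally and chain it to the previous receipt."""
    payload_hash = hashlib.sha256(
        json.dumps(decision, sort_keys=True).encode()
    ).hexdigest()
    # Chain each receipt to its predecessor so past entries can't be
    # rewritten without breaking every later hash.
    entry_hash = hashlib.sha256((prev_hash + payload_hash).encode()).hexdigest()
    # Only `entry_hash` would be anchored to the witness network; the
    # decision payload itself never leaves the environment.
    return {"payload_hash": payload_hash, "prev": prev_hash, "entry": entry_hash}

GENESIS = "0" * 64
r1 = receipt({"action": "permit", "policy": "phi-v1"}, GENESIS)
r2 = receipt({"action": "deny", "policy": "phi-v1"}, r1["entry"])
assert r2["prev"] == r1["entry"]  # tampering with r1 would break the chain
```

Because each entry commits to the one before it, an auditor holding only the anchored hashes can detect any after-the-fact edit to the decision log.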

See it in action

Watch Governance Enforced in Real Time

In the demo, a clinical decision support request flows through the AI pipeline (Input → PHI Detection → Safety Check → Model v3.2 → Policy Decision → Output), and each decision is attested by a third-party witness as a new entry on the evidence chain (e.g., Chain Entry #47,832).

Integration

Add proof in 5 lines of code

Install the Python SDK, wrap your AI calls, and every prompt, response, tool call, and policy decision gets sealed with a tamper-proof receipt — witnessed by our live attestation service.
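The wrap-your-calls pattern might look something like the sketch below. This is a toy illustration only: `sealed` and `model_call` are hypothetical names standing in for the pattern, not the published glacis SDK API.

```python
import functools
import hashlib
import json

def sealed(fn):
    """Toy decorator: records a tamper-evident receipt for every call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        record = {"fn": fn.__name__, "args": repr(args), "out": repr(result)}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        wrapper.receipts.append(digest)  # in GLACIS, this would be witnessed
        return result
    wrapper.receipts = []
    return wrapper

@sealed
def model_call(prompt: str) -> str:
    return f"response to: {prompt}"  # stand-in for a real model call

model_call("summarize this discharge note")
assert len(model_call.receipts) == 1
```

The application code stays almost untouched; the receipts accumulate as a side effect of the wrapper, which is what makes a few-line integration plausible.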

Talk to Sales
pip install glacis · SDK available now
Define your posture: declarative policies
We enforce & witness: every decision attested
You get evidence: third-party verified

Zero egress: data stays local

Inline enforcement: shadow to enforce

Tamper-proof: cryptographic signatures

~5 ms overhead: negligible slowdown

"We would never accept this for any other critical system. Financial systems have audit trails. Medical devices have mandated records. Aircraft have flight recorders. AI systems need the same level of verifiable evidence."

Joe Braidwood
CEO, GLACIS · Previously SwiftKey (acquired by Microsoft)


FAQ

Common questions

We already have SOC 2 / are working toward HITRUST

Great — those cover IT controls. AI-specific assurance addresses model behavior, decision audit trails, and content safety risks that SOC 2 and HITRUST don’t cover. They’re complementary.

How is this different from our existing documentation?

Documentation describes what should happen. GLACIS provides cryptographic proof of what actually happened — third-party witnessed evidence that your controls executed, not just that policies exist. Beyond evidence, GLACIS also enforces your governance policies at runtime — it doesn’t just prove what happened, it ensures the right thing happens in the first place.

What industries do you work with?

We work with AI teams in regulated industries including healthcare, financial services, insurance, and AI labs. The common thread is needing to prove AI controls work, not just that policies exist.

What if we’re not ready for a full compliance program?

That’s fine. We offer focused engagements for teams who need to unblock deals now. Start with what you need, expand later.

Does GLACIS just monitor, or does it actually enforce?

Both. You define your governance policies declaratively — which controls to enforce, at what confidence thresholds, with what failure modes. The GLACIS arbiter evaluates every AI request against your active policy and makes real-time permit/deny decisions. Every enforcement decision is independently attested. You start in shadow mode (observe only) and transition to enforcement when ready. The transition itself is attested.
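A minimal sketch of the shadow-to-enforce behavior described above. The policy shape, field names, and threshold are assumptions for illustration, not the GLACIS policy schema.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    max_risk: float          # confidence threshold from the declarative policy
    mode: str = "shadow"     # "shadow" observes only; "enforce" blocks

def evaluate(policy: Policy, risk_score: float) -> str:
    """Return the enforcement decision for one AI request."""
    violation = risk_score > policy.max_risk
    if not violation:
        return "permit"
    # In shadow mode the violation is recorded (and attested) but not blocked.
    return "flag" if policy.mode == "shadow" else "deny"

p = Policy(max_risk=0.7)
assert evaluate(p, 0.9) == "flag"    # shadow: observe only
p.mode = "enforce"                   # the transition itself would be attested
assert evaluate(p, 0.9) == "deny"
assert evaluate(p, 0.2) == "permit"
```

Because the same evaluation runs in both modes, flipping to enforcement changes only what happens on a violation, so shadow-mode data directly predicts enforcement behavior.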

Pango celebrating

Learn more about GLACIS

We’d love to hear about what you’re building.

Get in Touch