Colorado AI Act Jun 30, 2026 | EU AI Act Aug 2, 2026 | California ADMT Jan 1, 2026

Make every AI control independently verifiable.

Trust infrastructure for AI.

Built on ATLAS, the open standard for AI governance verification

GLACIS is the canonical implementation of ATLAS — an open standard for attestable AI governance. It turns controls into independently verifiable receipts. Start with an Evidence Pack for procurement and audit, then expand into continuous runtime verification as you scale.

Land with an Evidence Pack. Expand into continuous verification alongside Vanta, Drata, and your existing GRC. See how it fits →
SeaHealthTech panel: From Promises to Proof in Healthcare, AI House, Seattle
Watch the panel (58 min)
Featured February 2026 · Seattle

“Your AI Needs an Alibi”

Washington State’s Chief Privacy Officer told a room full of healthcare builders what’s coming. A CMIO overseeing 12 hospitals asked us to build exactly what we’re building. A Medicare payer described our product without knowing it existed.

“We have to get certification from vendors that they’re using an AI governance program like NIST AI Risk Management Framework… In healthcare, you’re almost always walking into the high-risk space.”

KATY RUCKLE — Chief Privacy Officer, Washington State

Panelists: State Chief Privacy Officer · CMIO, MultiCare · Head of Emerging Tech, SCAN · Health Tech Attorney
Read the full recap

Why verification matters

Independent verification is the missing layer

You already have policies, controls, and good intentions. GLACIS adds the structural property AI has been missing: third parties can verify that governance actually executed, without trusting the operator’s own logs.

From Policies to Proof

Your policies describe what should happen. GLACIS verifies what did happen — at runtime, on every AI interaction.

Independent, Not Self-Attested

Dashboards and logs are great for operations. GLACIS adds the independence layer — third-party witnessed evidence that stands up in regulatory and legal contexts.

Tamper-Proof Receipts Proving Controls Ran

GLACIS enforces controls at the inference and tool boundary and emits third-party witnessed receipts for every consequential decision. Like a flight data recorder — neither the operator nor the auditor has to trust your app logs.
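The flight-recorder analogy can be made concrete with an append-only receipt chain, where each entry commits to the hash of the previous one so past receipts cannot be silently rewritten. This is a minimal sketch: the field names and chaining scheme are illustrative assumptions, not GLACIS's actual receipt format.

```python
import hashlib
import json

def make_receipt(prev_hash: str, model_version: str, controls: list,
                 outcome: str, timestamp: str) -> dict:
    """Build one receipt entry that commits to the previous entry's hash.

    Field names here are hypothetical, chosen to mirror the receipt
    contents described in the text (model version, controls applied,
    policy outcome, timestamp).
    """
    body = {
        "prev": prev_hash,            # links this entry to the chain so far
        "model_version": model_version,
        "controls_applied": controls,
        "policy_outcome": outcome,    # permit / deny / escalate / flag
        "timestamp": timestamp,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "entry_hash": digest}

# Genesis entry, then a second entry chained to it.
r1 = make_receipt("0" * 64, "v3.2", ["phi_detection", "safety_check"],
                  "permit", "2026-02-01T12:00:00Z")
r2 = make_receipt(r1["entry_hash"], "v3.2", ["phi_detection"],
                  "deny", "2026-02-01T12:00:05Z")

# Tampering with r1 after the fact breaks the link stored in r2.
assert r2["prev"] == r1["entry_hash"]
```

In a witnessed design, a third party would additionally sign each `entry_hash`; the hash linking alone is what makes retroactive edits detectable without trusting the operator's logs.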

The reasonable-care advantage

Colorado’s AI Act creates a rebuttable presumption of reasonable care for organizations that demonstrate NIST AI RMF or ISO 42001 compliance with verifiable evidence. The EU AI Act rewards the same posture. GLACIS gives you that evidence automatically — turning compliance from a cost center into a competitive advantage.

Better together

GRC Platforms + Runtime Proof

GRC platforms like Vanta and Drata document what you say you do. GLACIS adds independently verifiable, per-inference runtime receipts with third-party witnessing. Together, your compliance story is complete.

| Capability | GRC Platforms | GLACIS |
| --- | --- | --- |
| Policy documentation | Documents what you say you do | Documents policies AND proves they executed at inference time |
| SOC 2 / ISO 27001 | Core strength | Maps to these + AI frameworks |
| AI-specific frameworks (NIST AI RMF, ISO 42001) | Limited or manual | ISO 42001 · NIST AI RMF · EU AI Act · Colorado AI Act — native mapping with automated evidence |
| Runtime evidence | First-party compliance evidence | Signed receipt per AI decision — model version, controls applied, policy outcome, timestamp |
| Real-time monitoring | Periodic evidence collection | Continuous attestation |
| Third-party witnessed proof | First-party evidence | Independently witnessed — per-inference runtime receipts with third-party signing, verifiable without trusting your app logs |
| Zero-egress architecture | Not applicable | Only hashes cross the trust boundary |
| Colorado reasonable-care defense | Policies necessary, not sufficient | Evidence of NIST AI RMF adherence supports your showing of reasonable care |
| Independent verifiability | Policy documentation | Cryptographic evidence structured for audit and regulatory review |
| Customer data visibility | Platform-dependent | We never see your prompts, responses, or PHI — only cryptographic commitments. Zero-egress means plaintext content never leaves your environment. |

How it works

Every AI Decision, Proven

GLACIS evaluates AI requests against your governance policy — permit, deny, escalate, or flag. Each decision on configured controls generates independently witnessed evidence. Plaintext content never leaves your environment. Only cryptographic proof crosses the trust boundary.

1 You define the policy
2 GLACIS enforces it
3 Every decision is witnessed
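The permit/deny/escalate/flag loop in the steps above can be sketched in outline. Everything here is an illustrative assumption: the control names, the stand-in `score_phi` detector, and the threshold values are invented for the example, not GLACIS's actual API.

```python
def score_phi(text: str) -> float:
    """Stand-in PHI detector for the sketch: confidence that PHI is present."""
    return 0.9 if "SSN" in text else 0.0

# Hypothetical declared policy: (control name, scorer, threshold, action on trigger).
POLICY = [
    ("phi_detection", score_phi, 0.8, "deny"),
]

def arbitrate(request_text: str) -> str:
    """Return permit, deny, escalate, or flag for one AI request."""
    for name, scorer, threshold, action in POLICY:
        if scorer(request_text) >= threshold:
            return action
    return "permit"

assert arbitrate("What is the dosing guideline?") == "permit"
assert arbitrate("Patient SSN is 123-45-6789") == "deny"
```

The point of the sketch is the shape of the decision, not the detector: each configured control scores the request, thresholds map scores to actions, and every outcome becomes a witnessed receipt.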

You See Everything. We See Nothing.

What your team sees

  • Full request and response content
  • Policy evaluations with pass/fail details
  • Complete audit trail, searchable by date, system, or outcome
  • Dashboards for quality assurance and improvement
  • All stored in your environment

What leaves your environment

  • Only HMAC’d commitments (hashed, not reversible)
  • Zero PHI. Zero request content. Zero response content.
  • Cryptographic proof that controls executed — nothing more.
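The zero-egress claim rests on a basic property of keyed hashes: a commitment reveals nothing readable about the plaintext, yet anyone holding the key and the original content can recompute it and confirm a match. A minimal sketch with Python's standard library (the key handling is illustrative; real key management is not shown):

```python
import hmac
import hashlib

# The key stays inside your environment; only the commitment crosses the boundary.
key = b"tenant-secret-key"  # illustrative placeholder, not a real key scheme

def commit(content: str) -> str:
    """HMAC-SHA256 commitment: deterministic, not reversible without key + content."""
    return hmac.new(key, content.encode(), hashlib.sha256).hexdigest()

prompt = "Patient presents with chest pain..."
commitment = commit(prompt)

# The commitment leaks no plaintext, yet later recomputation proves
# exactly which content was processed.
assert prompt not in commitment
assert commit(prompt) == commitment           # verifiable by recomputation
assert commit(prompt + " ") != commitment     # any change is detectable
```

This is why PHI, prompts, and responses never need to leave your environment: the verifier checks commitments, not content.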

Evidence generation adds less than 50ms per inference. Storage costs less than your current observability stack.

See it in action

Watch Governance Enforced in Real Time

An AI request arrives...

[Interactive demo: an AI request enters a Clinical Decision Support pipeline — PHI Detection → Safety Check → Model v3.2 → Policy Decision → Output — with each step attested by a third-party witness and appended as a new chain entry.]

How It Works

From policy to proof in three steps

Define your governance posture. GLACIS enforces it at runtime and seals every decision with a tamper-proof, third-party-verifiable receipt — evidence your auditors, regulators, and buyers can trust.

1 Define your posture: declarative policies
2 We enforce and witness every decision: attested
3 You get evidence: third-party verified

  • Zero Egress: data stays local
  • Inline enforcement: shadow to enforce
  • Tamper-proof: crypto signatures
  • <50ms: total overhead (p95)

“We would never accept this for any other critical system. Financial systems have audit trails. Medical devices have mandated records. Aircraft have flight recorders. AI systems need the same level of verifiable evidence.”

Joe Braidwood
CEO, GLACIS · Previously SwiftKey (acquired by Microsoft)


FAQ

Common questions

We already have SOC 2 / are working toward HITRUST

Great — those cover IT controls. AI-specific assurance addresses model behavior, decision audit trails, and content safety risks that SOC 2 and HITRUST don’t cover. They’re complementary.

How is this different from our existing documentation?

Documentation describes what should happen. GLACIS provides independent proof of what actually happened — verifier-ready receipts that your controls executed, not just that policies exist. Beyond evidence, GLACIS also enforces runtime controls — it doesn’t just prove what happened, it ensures the right thing happens in the first place.

What industries do you work with?

We work with AI teams in regulated industries including healthcare, financial services, insurance, and enterprise. The common thread is needing to prove AI controls work, not just that policies exist.

What if we’re not ready for a full attestation program?

That’s fine. We offer focused engagements for teams who need to unblock deals now. Start with what you need, expand later.

Does GLACIS just monitor, or does it actually enforce?

Both. You define controls declaratively — which guardrails to enforce, at what confidence thresholds, with what failure modes. The GLACIS arbiter evaluates every AI request against your active policy and makes real-time permit/deny decisions. Every enforcement decision is independently attested. You start in shadow mode (observe only) and transition to enforcement when ready. The transition itself is attested.
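A declarative control definition of the kind described above, including the shadow-to-enforce transition, might be sketched like this. The schema, field names, and values are hypothetical, not GLACIS's actual configuration format.

```python
# Hypothetical declarative policy: guardrails, confidence thresholds,
# failure modes, and a policy-wide mode (shadow vs. enforce).
policy = {
    "mode": "shadow",  # observe only; flip to "enforce" when ready
    "controls": [
        {"name": "phi_detection", "threshold": 0.8,
         "on_trigger": "deny", "on_error": "escalate"},
        {"name": "safety_check", "threshold": 0.6,
         "on_trigger": "flag", "on_error": "permit"},
    ],
}

def effective_action(control: dict, triggered: bool, mode: str) -> str:
    """In shadow mode every decision is recorded but nothing is blocked."""
    action = control["on_trigger"] if triggered else "permit"
    return action if mode == "enforce" else f"shadow:{action}"

# Same trigger, different mode: shadow records the would-be outcome,
# enforce actually applies it.
assert effective_action(policy["controls"][0], True, "shadow") == "shadow:deny"
assert effective_action(policy["controls"][0], True, "enforce") == "deny"
```

The one-line mode flip is what makes the transition itself easy to attest: the policy document changes, and that change is a witnessed event like any other decision.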