OVERT 1.0 — the open standard for AI runtime trust — is live. Read the standard →
Colorado AI Act Jun 30, 2026 | EU AI Act Aug 2, 2026 | California ADMT Jan 1, 2026

Visibility → Enforcement → Proof

Your AI is in production.
Do you know what it’s doing?

Agents are deploying faster than teams can monitor them. GLACIS gives you behavioral benchmarks in minutes, runtime enforcement when you’re ready, and cryptographic proof that your controls actually ran.

Free, open source. pip install glacis-autoredteam

terminal
# Point at any AI system. Get behavioral benchmarks.
$ pip install glacis-autoredteam
$ auto-redteam scan --target https://api.example.com/v1/chat
Scanning model endpoint...
Toxicity probe passed (0.02 / 0.15 threshold)
Hallucination check passed (0.04 / 0.10 threshold)
PII leak probe warning (0.08 / 0.05 threshold)
Prompt injection passed (0.01 / 0.10 threshold)
Jailbreak resistance passed (0.03 / 0.15 threshold)
—————————————————————
4/5 passed · 1 warning · Score: 87/100
Report: ./report.html

The journey

Visibility first.
Enforcement second.
Proof third.

Start with a free scan of any AI system. When you’re ready, add runtime enforcement. Proof accumulates automatically while you work.

1 Explore

auto-redteam

Free · Open Source

Point at any AI system. Get behavioral benchmarks in minutes. Toxicity, hallucination, jailbreak resistance, PII leakage, prompt injection — all version-pinned and repeatable.

pip install glacis-autoredteam
auto-redteam.com
2 Monitor & Control

arbiter

SaaS Subscription

Sidecar runtime + cloud control plane. Continuous enforcement of governance policies across your entire AI fleet. Drift detection, shadow-to-enforce mode, real-time dashboards.

  • Fleet-wide visibility
  • Permit / deny / escalate
  • Purpose-built SLM (zero egress)
Learn more
3 Witness

witness network

Enterprise

Cryptographic receipts for every consequential AI decision. Third-party witnessed, tamper-evident, independently verifiable. Compliance for free — proof accumulates while you use the other tools.

  • Tamper-proof audit trail
  • Independent verification
  • Zero-egress architecture
Learn more

Featured in

How it works

Nobody else spans visibilityenforcement → proof

Eval tools stop at benchmarks. GRC platforms stop at dashboards. GLACIS connects the entire journey — from your first scan to a cryptographic receipt that stands up in regulatory review.

auto-redteam arbiter witness

One tool, three stages

Scan your AI today. Add enforcement when you’re ready. Evidence accumulates automatically. No manual stitching, no separate tools, no context loss.

YOUR ENVIRONMENT SLM your data NO EGRESS

Purpose-built SLM

Our specialized small language model runs locally, producing version-pinned scores. Your prompts, responses, and PHI never leave your environment.

rcpt rcpt rcpt rcpt rcpt +chain

Evidence that compounds

Every interaction generates a cryptographic receipt. Over time, you build an auditable evidence chain — so when compliance becomes a requirement, your posture is already built.

VISIBILITY ENFORCEMENT PROOF promptfoo drata / vanta GLACIS

Beyond dashboards

Eval tools give you a snapshot. Compliance platforms give you a checkbox. GLACIS gives you independently verifiable, per-inference evidence that your controls actually ran.

Zero Egress

Data stays local

Shadow → Enforce

Go live when ready

Tamper-proof

Crypto signatures

<50ms

Total overhead (p95)

Why now

Agentic sprawl meets regulatory convergence

AI agents are deploying at a pace that outstrips every governance mechanism built for the last generation of software. The regulatory response is accelerating at the same speed.

The Security Problem

The OpenClaw moment

OpenClaw hit 247k GitHub stars and 12% of its marketplace was immediately compromised. Agent ecosystems are growing faster than anyone can vet them. If you can’t see what your AI is doing, you can’t secure it.

Every compromised agent tool is a potential supply chain attack on every system that calls it. Visibility isn’t optional — it’s the prerequisite.

The Compliance Problem

Three deadlines, one summer

Colorado AI Act — June 2026

Rebuttable presumption of reasonable care for deployers who demonstrate NIST AI RMF compliance with verifiable evidence.

EU AI Act — August 2026

High-risk AI obligations go live. Articles 12 and 14 require logging, human oversight, and verifiable records.

California ADMT — Already in effect

Automated decision-making technology disclosures and opt-out requirements.

The window is closing.

Organizations that build visibility now will have enforcement and proof by the time deadlines hit. Organizations that wait will be scrambling.

See it in action

Watch Governance Enforced in Real Time

An AI request arrives...

AI Pipeline
Clinical Decision Support
Input
PHI Detection
Pending
Safety Check
Pending
Model v3.2
Pending
Policy Decision
Pending
Output
Waiting...
Attestation
Third-Party Witness
Standing by
Chain Entry #47,832
Waiting...

“We would never accept this for any other critical system. Financial systems have audit trails. Medical devices have mandated records. Aircraft have flight recorders. AI systems need the same level of verifiable evidence.”

Joe Braidwood
Joe Braidwood
CEO, GLACIS · Previously SwiftKey (acquired by Microsoft)

Who we help

Visibility and proof for every stakeholder in your AI pipeline

FAQ

Common questions

What is auto-redteam?

It’s our free, open-source behavioral benchmarking tool. Point it at any AI endpoint and get repeatable scores for toxicity, hallucination, prompt injection, PII leakage, and more. Version-pinned, so your results are comparable across runs and models.

We already have SOC 2 / are working toward HITRUST

Great — those cover IT controls. AI-specific assurance addresses model behavior, decision audit trails, and content safety risks that SOC 2 and HITRUST don’t cover. They’re complementary.

How is this different from Promptfoo or other eval tools?

Eval tools give you a snapshot. GLACIS gives you the full journey: behavioral benchmarks (auto-redteam) feed into runtime enforcement (Arbiter), which generates cryptographic proof (OVERT). Nobody else connects visibility to enforcement to independently verifiable evidence.

Does GLACIS just monitor, or does it actually enforce?

Both. You define controls declaratively — which guardrails to enforce, at what confidence thresholds, with what failure modes. The Arbiter evaluates every AI request against your active policy and makes real-time permit/deny decisions. You start in shadow mode (observe only) and transition to enforcement when ready. The transition itself is attested.

What if we’re not ready for enforcement yet?

Start with auto-redteam — it’s free and gives you immediate visibility. When you’re ready to enforce, add Arbiter. When you need proof for auditors or regulators, turn on OVERT. Each layer works independently, but they’re stronger together.