Do you know what your AI is actually doing?

See it. Stop the bad stuff. Prove every decision. Proof builds itself.

AI systems are deployed on trust. Trust is not evidence.

Your AI is in production. It handles real decisions for real people. But when someone asks what it did last Tuesday at 2:14 PM and whether that was within policy — you don't have an answer. You have logs. Logs are not evidence.

Every security questionnaire, every audit, every regulatory inquiry comes down to the same question: can you prove it?

Not "do you monitor it." Not "do you have dashboards." Can you produce a cryptographic, tamper-evident record of what your AI did and whether it stayed in bounds? Right now, you can't. Nobody can.

Three layers. Each makes the next possible.

1
Visibility
autoredteam
What is my AI doing?

Automated behavioral assessment. Point autoredteam at any AI system and get a map of what it actually does under stress — drift, deviation, edge cases, failure modes. Open source. Free. pip install glacis-autoredteam.

2
Enforcement
Enforce
What am I doing about it?

Runtime guards that sit in the inference path. Permit, deny, or escalate at the point of execution. Deterministic rules for what is known. Model-based judgment for what is ambiguous. Drift detection. Policy controls. Not after the fact — before it reaches users.
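That decision flow can be sketched in a few lines. This is an illustrative sketch only, not the Enforce API: the names (`Decision`, `guard`, `ambiguity_score`, `BLOCKED_PATTERNS`) are hypothetical, and the real product runs in the inference path rather than as a standalone function.

```python
# Hypothetical sketch of a permit/deny/escalate guard.
# Not the Enforce API; all names here are illustrative.
from enum import Enum

class Decision(Enum):
    PERMIT = "permit"
    DENY = "deny"
    ESCALATE = "escalate"

# Deterministic rules: patterns that are known-bad, full stop.
BLOCKED_PATTERNS = ("ssn:", "credit_card:")

def ambiguity_score(output: str) -> float:
    # Placeholder for model-based judgment on ambiguous content.
    return 0.0

def guard(output: str) -> Decision:
    # Deterministic rules fire first: known violations are denied
    # at the point of execution, before the output reaches a user.
    if any(p in output.lower() for p in BLOCKED_PATTERNS):
        return Decision.DENY
    # What the rules can't settle goes to model-based judgment;
    # high-uncertainty outputs escalate for review.
    if ambiguity_score(output) > 0.8:
        return Decision.ESCALATE
    return Decision.PERMIT

print(guard("Your order has shipped."))   # Decision.PERMIT
print(guard("SSN: 123-45-6789"))          # Decision.DENY
```

The ordering is the point: cheap deterministic checks handle the known cases, and the costlier model-based judgment only runs on what remains ambiguous.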

3
Proof
Notary
Can I prove what happened?

Cryptographic attestation for every decision. Notary generates tamper-evident, signed records that prove what your AI did, what policy was applied, and what the outcome was. Not logs. Evidence. Evidence that holds up when someone asks.
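The idea behind tamper evidence can be shown with a toy hash chain. This is a conceptual sketch under assumed names, not Notary's actual record format or signing scheme: each record signs its payload plus the previous record's digest, so altering any earlier entry invalidates every signature after it.

```python
# Conceptual sketch of a tamper-evident attestation chain.
# Not Notary's record format; key, fields, and helpers are illustrative.
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stand-in for a real signing key

def attest(prev_digest: str, payload: dict) -> dict:
    # Bind this record to its predecessor by signing the previous digest
    # together with the decision payload.
    body = json.dumps({"prev": prev_digest, **payload}, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def digest(record: dict) -> str:
    return hashlib.sha256(record["body"].encode()).hexdigest()

chain = []
prev = "genesis"
for payload in [{"decision": "permit", "policy": "pii-v1"},
                {"decision": "deny", "policy": "pii-v1"}]:
    rec = attest(prev, payload)
    chain.append(rec)
    prev = digest(rec)

# Verification: recompute each signature. Tampering with record 0
# changes its body, so its signature (and the chain) no longer checks out.
for rec in chain:
    expect = hmac.new(SECRET, rec["body"].encode(),
                      hashlib.sha256).hexdigest()
    assert hmac.compare_digest(rec["sig"], expect)
print("chain verified")
```

A log line can be edited after the fact; a signed, chained record cannot be edited without the edit being detectable. That is the difference between logs and evidence.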


One score for AI trustworthiness.

Every interaction that flows through Enforce and Notary feeds a single governance metric: the Glacis Score. A number from 0 to 1000. Derived from real production traffic, not self-assessments.

Glacis Score: 847 (on a scale of 0 to 1000)

Think FICO for AI. Your board gets a number. Your customers get a number. Your regulator gets a number. Backed by cryptographic proof, not a vendor's word.

Proof builds itself. Every interaction attested. Every attestation improves the score. The evidence chain compounds without anyone doing extra work.

Start with a free assessment. See your own results.

$ pip install glacis-autoredteam
$ autoredteam assess --target your-api-endpoint

# Behavioral assessment running...
✓ 847 probes completed
✓ 12 policy violations detected
✓ 3 drift patterns identified
✓ Report generated: ./glacis-report.html

# You now know what your AI is doing.
# The question is: what are you going to do about it?

autoredteam surfaces the risks. Enforce stops the bad outputs. Notary proves every decision was monitored and enforced. The evidence chain closes itself.

Compliance is not the headline. Compliance is the side effect of running your AI through an enforcement layer that generates cryptographic proof. You get operational control today. The audit trail builds itself.

Built on Overt.

The Overt standard is the open methodology for AI governance assessment. Five frameworks. 169 controls. Published at overt.is. The Glacis Score is the product that operationalizes it — turning a methodology into a measurable, verifiable number derived from production traffic.


Your AI is making decisions right now.

See what it's doing. Stop the bad stuff. Prove every interaction. Starting at $49/mo.