OVERT 1.0 · public standard · published 25 Mar 2026
Royalty-free · maintained at glacis.io

OVERT 1.0

Observable Verification Evidence for Runtime Trust.

An open standard for independently verifiable runtime evidence across AI systems. Observable. Attested. Content-safe.

Version: 1.0
Status: Public
Published: 2026-03-25
Steward: Glacis Tech.
License: Royalty-free
Cite as — Glacis Technologies. OVERT 1.0. overt.is, 2026.
Art. I · Executive Summary

§The evidence gap, and how this standard closes it.

AI governance frameworks tell organizations what controls should exist. They generally do not specify how to produce independent, tamper-evident proof that those controls actually executed — on a given interaction, under a given configuration, at a given time. That gap leaves regulators with documentation instead of evidence, auditors with narratives instead of cryptographic receipts, and incident responders reconstructing events from operator-controlled logs.

OVERT closes that gap. It is an open standard for observable verification evidence at the AI runtime boundary. It specifies how to produce independently verifiable records that declared governance policies, security controls, and oversight actions executed — without exporting protected content from the operator’s environment.

Where existing standards define objectives and management processes, OVERT operates one layer beneath: at the runtime boundary where AI systems actually process requests, execute tool calls, and produce outputs. It defines what a conformant runtime control system must prove, what an independent attestation provider must verify, and what a qualified assessor must examine when a conformance claim is made.

The standard applies to any AI system deployed in a setting where governance claims must be verifiable — healthcare, financial services, insurance, employment, federal procurement, and autonomous agentic systems where AI agents execute tool calls and make decisions without step-by-step human oversight.

Art. II · Design Principles

§Six commitments the standard enforces on every conformant implementation.

  1. Attestation by construction: Controls produce cryptographic proof as a byproduct of execution, not as a separate documentation exercise.
  2. Privacy by architecture: Protected content never leaves the operator’s environment. Only hashes and signed receipts cross trust boundaries.
  3. Independence by structure: The entity attesting to governance is structurally independent of the entity being governed. Self-attestation is not compliant.
  4. Statistical rigor by default: Safety claims carry confidence intervals, sample sizes, and auditor-reproducible methodologies. Unquantified assertions are not attestation artifacts.
  5. Open by design: Royalty-free patent covenant for all conformant implementations. Multiple protocol profiles are permitted.
  6. Security-supporting evidence: The attestation architecture occupies the same inline position that security detection requires, producing security-supporting evidence within the attested scope.
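Principles 1 and 2 can be illustrated with a minimal sketch: the control hashes the protected content inside the operator's environment, and only the digest plus a signature over the receipt body cross the trust boundary. Everything below — the field names, the HMAC choice, the key handling — is a hypothetical illustration, not a schema or algorithm mandated by OVERT.

```python
import hashlib
import hmac
import json
import time

# Placeholder secret: in practice this key would be held by the
# independent attestation provider, not hard-coded. Illustration only.
ATTESTOR_KEY = b"demo-key-held-by-independent-attestor"

def make_receipt(protected_content: bytes, policy_id: str) -> dict:
    """Produce a content-safe receipt: only the hash leaves the boundary."""
    body = {
        "policy_id": policy_id,                                   # which declared control executed
        "content_sha256": hashlib.sha256(protected_content).hexdigest(),  # hash, never the content
        "timestamp": int(time.time()),                            # temporal binding (illustrative)
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(ATTESTOR_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_receipt(receipt: dict) -> bool:
    """An auditor re-checks the signature without ever seeing the content."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ATTESTOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

receipt = make_receipt(b"patient record ...", policy_id="pii-redaction-v2")
assert verify_receipt(receipt)
assert "patient" not in json.dumps(receipt)  # no protected content egressed
```

The proof is generated in the same code path that executes the control — attestation by construction — and the receipt is auditable by a third party who never receives the underlying content.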
Art. III · Scope

§What OVERT covers.

Part · Description
Foundations · Attestation assurance levels (AAL-1 through AAL-4), trust architecture, threat model, cross-boundary attestation protocol
Governance Domains · Six domains — Govern, Identify, Protect, Attest, Measure, Respond — each with normative requirements for evidence generation
Agentic AI Controls · Tool-call governance, MCP server trust, multi-agent system controls, capability-based access, human-in-the-loop attestation, persistent state governance, delegation chains, behavioral drift detection
Architecture · Non-egress attestation, temporal binding, statistical safety measurement, third-party auditability, legal preservation
Conformance · Maturity levels, scope designators, protocol profile registry, independent attestation providers (IAPs), qualified assessor program
Crosswalks · NIST AI RMF, ISO 42001, EU AI Act, OWASP, NIST SP 800-53, FedRAMP, OMB M-25-21/M-25-22, DASF v3.0
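The Architecture part's "statistical safety measurement" requirement — safety claims carrying confidence intervals and sample sizes — can be sketched with a standard estimator. The Wilson score interval used here is my illustrative choice; OVERT requires quantified, auditor-reproducible claims but the standard's text above does not mandate a specific interval method.

```python
import math

def wilson_interval(passes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a safety pass rate over n sampled
    interactions (z = 1.96 gives an approximate 95% interval)."""
    if n == 0:
        return (0.0, 1.0)  # no samples: no claim stronger than [0, 1]
    p = passes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# A claim like "99.5% of 5,000 sampled interactions passed the control"
# becomes an interval an auditor can reproduce from the same sample.
lo, hi = wilson_interval(passes=4975, n=5000)
print(f"pass rate 99.50%, 95% CI [{lo:.4f}, {hi:.4f}]")
```

Reporting the interval and sample size together is what turns "the model is safe" into an attestation artifact a qualified assessor can recompute.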
Art. IV · Resources

§Normative texts and machine-readable feeds.

Normative texts

Machine-readable feeds

  • latest.json · Current version metadata.
  • feed.json · Polling feed with all versions.
  • versions.json · Complete version index.
  • latest.md · Canonical Markdown for the latest release.
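A polling client would fetch latest.json and compare it against the version it last saw. Since the feed schema itself is not reproduced in this summary, the field names in the sample payload below are assumptions for illustration; only the file names and the version/date values come from this page.

```python
import json

# Hypothetical payload: the real latest.json schema is defined by the
# standard's feed documents. These field names are assumed, not normative.
SAMPLE_LATEST = """
{
  "name": "OVERT",
  "version": "1.0",
  "published": "2026-03-25",
  "markdown": "latest.md"
}
"""

def parse_latest(raw: str) -> dict:
    """Validate the minimal fields a polling client would rely on."""
    meta = json.loads(raw)
    for field in ("name", "version", "published"):
        if field not in meta:
            raise ValueError(f"feed missing required field: {field}")
    return meta

meta = parse_latest(SAMPLE_LATEST)
print(meta["version"])  # → 1.0
```

A real client would fetch the document over HTTPS and fall back to feed.json or versions.json when it needs the full release history.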