OVERT Standard / v 1.0 · Published
Open Standard · Runtime Evidence

OVERT 1.0

Observable Verification Evidence for Runtime Trust — an open standard for independently verifiable runtime evidence across AI systems. Observable. Attested. Content‑safe.

Version: 1.0
Status: Published
Published: 25 March 2026
Identifier: overt.is/1.0
Steward: Glacis Technologies
§1

Executive Summary

AI governance frameworks tell organizations what controls should exist. They generally do not specify how to produce independent, tamper‑evident proof that those controls actually executed — on a given interaction, under a given configuration, at a given time. That gap leaves regulators with documentation instead of evidence, auditors with narratives instead of cryptographic receipts, and incident responders reconstructing events from operator‑controlled logs.

OVERT closes that gap. It is an open standard for observable verification evidence at the AI runtime boundary. It specifies how to produce independently verifiable records that declared governance policies, security controls, and oversight actions executed — without exporting protected content from the operator’s environment.

Where existing standards define objectives and management processes, OVERT operates one layer beneath: at the runtime boundary where AI systems actually process requests, execute tool calls, and produce outputs. It defines what a conformant runtime control system must prove, what an independent attestation provider must verify, and what a qualified assessor must examine when a conformance claim is made.

The standard applies to any AI system deployed in a setting where governance claims must be verifiable — healthcare, financial services, insurance, employment, federal procurement, and autonomous agentic systems where AI agents execute tool calls and make decisions without step‑by‑step human oversight.

§2

Design Principles

  1. Attestation by construction: Controls produce cryptographic proof as a byproduct of execution, not as a separate documentation exercise.
  2. Privacy by architecture: Protected content never leaves the operator’s environment. Only hashes and signed receipts cross trust boundaries.
  3. Independence by structure: The entity attesting to governance is structurally independent of the entity being governed. Self‑attestation is not compliant.
  4. Statistical rigor by default: Safety claims carry confidence intervals, sample sizes, and auditor‑reproducible methodologies. Unquantified assertions are not attestation artifacts.
  5. Open by design: A royalty‑free patent covenant covers all conformant implementations. Multiple protocol profiles are permitted.
  6. Security‑supporting evidence: The attestation architecture occupies the same inline position that security detection requires, so it produces security‑supporting evidence within the attested scope.
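The privacy-by-architecture principle can be sketched in a few lines: the runtime hashes the protected content, and only the hash plus a signed receipt crosses the trust boundary. This is an illustrative sketch, not a normative OVERT protocol profile; the key, field names, and HMAC scheme are assumptions (a real deployment would use asymmetric signatures from an independent attestation provider).

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key for illustration only. A conformant
# implementation would use asymmetric signatures held by an
# independent attestation provider (IAP), not a shared secret.
ATTESTATION_KEY = b"demo-key-not-for-production"

def make_receipt(protected_content: bytes, policy_id: str) -> dict:
    """Emit a content-safe receipt: only the hash crosses the boundary."""
    receipt = {
        "policy_id": policy_id,
        "content_sha256": hashlib.sha256(protected_content).hexdigest(),
        "timestamp": int(time.time()),
    }
    payload = json.dumps(receipt, sort_keys=True).encode()
    receipt["signature"] = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt: dict) -> bool:
    """Verifier recomputes the signature without ever seeing the content."""
    unsigned = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

r = make_receipt(b"patient record 1234", "policy/phi-redaction/v2")
assert verify_receipt(r)
assert "patient" not in json.dumps(r)  # protected content never leaves
```

Note how verification needs only the receipt itself: the protected content stays inside the operator's environment, while any tampering with the receipt's fields invalidates the signature.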
§3

Scope & Contents

  1. Part I, Foundations: Attestation assurance levels (AAL‑1 through AAL‑4), trust architecture, threat model, cross‑boundary attestation protocol. (§§ 1–4)
  2. Part II, Governance Domains: Six domains — Govern, Identify, Protect, Attest, Measure, Respond — each with normative requirements for evidence generation. (§§ 5–10)
  3. Part III, Agentic AI Controls: Tool‑call governance, MCP server trust, multi‑agent system controls, capability‑based access, human‑in‑the‑loop attestation, persistent state governance, delegation chains, behavioral drift detection. (§§ 11–18)
  4. Part IV, Architecture: Non‑egress attestation, temporal binding, statistical safety measurement, third‑party auditability, legal preservation. (§§ 19–23)
  5. Part V, Conformance: Maturity levels, scope designators, protocol profile registry, independent attestation providers (IAPs), qualified assessor program. (§§ 24–28)
  6. Part VI, Crosswalks: NIST AI RMF · ISO 42001 · EU AI Act · OWASP · NIST SP 800‑53 · FedRAMP · OMB M‑25‑21 / M‑25‑22 · DASF v3.0. (Annex A)
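Part IV's statistical safety measurement, like design principle 4, requires that safety claims carry confidence intervals and sample sizes rather than unquantified assertions. As an illustration only (OVERT does not mandate this particular estimator), a Wilson score interval turns a pass count over a sample into an auditor-reproducible bound; the evaluation numbers below are hypothetical.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    if n <= 0:
        raise ValueError("sample size must be positive")
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical evaluation: 4,980 of 5,000 sampled interactions passed
lo, hi = wilson_interval(4980, 5000)
print(f"pass rate 99.60%, 95% CI [{lo:.4f}, {hi:.4f}]")
```

An attestation artifact in this spirit would record the sample size, the interval, and the sampling methodology, so an auditor can recompute the same bound from the same receipts.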