Open Standard · Version 1.0
Published 2026-03-25 · Royalty-free patent covenant
Document: OVERT/1.0/2026-03-25
Status: Published · Review Open
Maintainer: Glacis Technologies, Inc.
License: Royalty-free patent covenant

OVERT 1.0

Observable Verification Evidence for Runtime Trust — an open standard for independently verifiable runtime evidence across AI systems.

OVERT defines how a conformant AI runtime produces tamper-evident, independently verifiable proof that declared governance policies, security controls, and oversight actions executed — without exporting protected content from the operator’s environment.

Observable · Attested · Content-safe
1 Executive Summary · The evidence gap

AI governance frameworks tell organizations what controls should exist. They generally do not specify how to produce independent, tamper-evident proof that those controls actually executed — on a given interaction, under a given configuration, at a given time. That gap leaves regulators with documentation instead of evidence, auditors with narratives instead of cryptographic receipts, and incident responders reconstructing events from operator-controlled logs.

OVERT closes that gap. It is an open standard for observable verification evidence at the AI runtime boundary. It specifies how to produce independently verifiable records that declared governance policies, security controls, and oversight actions executed — without exporting protected content from the operator’s environment.
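The non-egress pattern can be sketched in a few lines: only a content hash enters the receipt, so the receipt can be verified across a trust boundary without the protected content ever leaving the operator's environment. The field names and the HMAC stand-in below are illustrative assumptions, not OVERT's normative encoding; a real deployment would use an asymmetric signature held by an independent attestation provider.

```python
import hashlib
import hmac
import json

# Placeholder signing key -- stands in for the attestor's real key material.
ATTESTOR_KEY = b"demo-key"

def make_receipt(content: bytes, policy_id: str, timestamp: str) -> dict:
    """Produce a tamper-evident receipt. Only the SHA-256 of the content
    appears in the receipt; the content itself never crosses the boundary."""
    digest = hashlib.sha256(content).hexdigest()
    body = {"content_sha256": digest, "policy_id": policy_id, "ts": timestamp}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(ATTESTOR_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_receipt(receipt: dict) -> bool:
    """Verify the signature without any access to the protected content."""
    body = {k: v for k, v in receipt.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ATTESTOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])

receipt = make_receipt(b"protected clinical note", "policy-7",
                       "2026-03-25T00:00:00Z")
assert verify_receipt(receipt)
receipt["policy_id"] = "policy-8"   # any tampering breaks verification
assert not verify_receipt(receipt)
```

A verifier holding only the receipt can confirm that a declared policy was bound to a specific piece of content at a specific time, which is the shape of evidence the standard targets.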

Where existing standards define objectives and management processes, OVERT operates one layer beneath: at the runtime boundary where AI systems actually process requests, execute tool calls, and produce outputs. It defines what a conformant runtime control system must prove, what an independent attestation provider must verify, and what a qualified assessor must examine when a conformance claim is made.

The standard applies to any AI system deployed in a setting where governance claims must be verifiable — healthcare, financial services, insurance, employment, federal procurement, and autonomous agentic systems where AI agents execute tool calls and make decisions without step-by-step human oversight.

[Figure: Attestation topology — cross-boundary evidence flow]
2 Design Principles · Six invariants
  1. Attestation by Construction: Controls produce cryptographic proof as a byproduct of execution, not as a separate documentation exercise.
  2. Privacy by Architecture: Protected content never leaves the operator’s environment. Only hashes and signed receipts cross trust boundaries.
  3. Independence by Structure: The entity attesting to governance is structurally independent of the entity being governed. Self-attestation is not conformant.
  4. Statistical Rigor by Default: Safety claims carry confidence intervals, sample sizes, and auditor-reproducible methodologies. Unquantified assertions are not attestation artifacts.
  5. Open by Design: Royalty-free patent covenant for all conformant implementations. Multiple protocol profiles are permitted.
  6. Security-Supporting Evidence: The attestation architecture occupies the same inline position that security detection requires, producing security-supporting evidence within the attested scope.
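Principle 4 asks that a safety claim carry a confidence interval and a sample size rather than a bare percentage. One standard, auditor-reproducible way to do that for a pass rate is the Wilson score interval; OVERT does not mandate this particular estimator, so the sketch below is one admissible choice, not the normative method.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%).
    Attaches the confidence bounds and implied sample size that an
    unquantified pass-rate assertion lacks."""
    if n <= 0:
        raise ValueError("sample size must be positive")
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# e.g. a control observed to execute on 9,874 of 10,000 sampled interactions
lo, hi = wilson_interval(9_874, 10_000)
print(f"pass rate 0.9874, 95% CI [{lo:.4f}, {hi:.4f}]")
```

An attestation artifact would then record the rate together with the interval and n, so an auditor can rerun the same computation over the same sample and reproduce the claim.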
3 What OVERT Covers · Part index
Foundations: Attestation assurance levels (AAL-1 through AAL-4), trust architecture, threat model, cross-boundary attestation protocol.
Governance Domains: Six domains — Govern, Identify, Protect, Attest, Measure, Respond — each with normative requirements for evidence generation.
Agentic AI Controls: Tool-call governance, MCP server trust, multi-agent system controls, capability-based access, human-in-the-loop attestation, persistent state governance, delegation chains, behavioral drift detection.
Architecture: Non-egress attestation, temporal binding, statistical safety measurement, third-party auditability, legal preservation.
Conformance: Maturity levels, scope designators, protocol profile registry, independent attestation providers (IAPs), qualified assessor program.
Crosswalks: NIST AI RMF, ISO 42001, EU AI Act, OWASP, NIST SP 800-53, FedRAMP, OMB M-25-21/M-25-22, DASF v3.0.
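Temporal binding, listed under the Architecture part, is typically achieved by having each evidence record commit to the hash of its predecessor, so that reordering, deleting, or rewriting an entry is detectable from the chain alone. The sketch below assumes a simple SHA-256 hash chain with hypothetical field names; the normative encoding lives in the Architecture part of the standard.

```python
import hashlib
import json

def chain_append(prev_hash: str, event: dict) -> tuple[str, dict]:
    """Append an event to the chain: the record commits to its
    predecessor's hash, then the record itself is hashed."""
    record = {"prev": prev_hash, "event": event}
    h = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return h, record

def chain_verify(genesis: str, records: list[dict]) -> bool:
    """Walk the chain from a known genesis hash; any tampering,
    deletion, or reordering makes a link fail to match."""
    h = genesis
    for rec in records:
        if rec["prev"] != h:
            return False
        h = hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
    return True

genesis = "0" * 64
h1, rec1 = chain_append(genesis, {"action": "tool_call", "tool": "search"})
h2, rec2 = chain_append(h1, {"action": "model_output"})
assert chain_verify(genesis, [rec1, rec2])
rec1["event"]["tool"] = "exec"      # tampering breaks the chain
assert not chain_verify(genesis, [rec1, rec2])
```

Because each link depends on everything before it, an auditor who trusts only the most recent signed hash can still detect retroactive edits anywhere in the history.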
4 Artifacts & Resources · Canonical sources
4.1 Machine-Readable Feed · Signed endpoints