Introduction
AI governance frameworks tell organizations what controls should exist. They generally do not specify how to produce independent, tamper‑evident proof that those controls actually executed — on a given interaction, under a given configuration, at a given time. That gap leaves regulators with documentation instead of evidence, auditors with narratives instead of cryptographic receipts, and incident responders reconstructing events from operator‑controlled logs.
Where existing standards define objectives and management processes, OVERT operates one layer beneath: at the runtime boundary where AI systems actually process requests, execute tool calls, and produce outputs. It defines what a conformant runtime control system must prove, what an independent attestation provider must verify, and what a qualified assessor must examine when a conformance claim is made.
The standard applies to any AI system deployed in a setting where governance claims must be verifiable — healthcare, financial services, insurance, employment, federal procurement, and autonomous agentic systems where AI agents execute tool calls and make decisions without step‑by‑step human oversight.
Design Principles
- Attestation by construction: Controls produce cryptographic proof as a byproduct of execution, not as a separate documentation exercise.
- Privacy by architecture: Protected content never leaves the operator’s environment; only hashes and signed receipts cross trust boundaries.
- Independence by structure: The entity attesting to governance is structurally independent of the entity being governed; self‑attestation is not conformant.
- Statistical rigor by default: Safety claims carry confidence intervals, sample sizes, and auditor‑reproducible methodologies; unquantified assertions are not attestation artifacts.
- Open by design: A royalty‑free patent covenant covers all conformant implementations; multiple protocol profiles are permitted.
- Security‑supporting evidence: The attestation architecture occupies the same inline position that security detection requires, producing security‑supporting evidence within the attested scope.
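The first two principles can be sketched together: a control hashes the protected content locally and signs only the digest, so a verifiable receipt crosses the trust boundary while the content itself never does. A minimal illustration in Python — the field names, the control identifier, and the use of HMAC‑SHA256 as a stand‑in for a real asymmetric signature are all assumptions for this sketch, not part of the standard:

```python
import hashlib
import hmac
import json
import time

def make_receipt(content: bytes, control_id: str, signing_key: bytes) -> dict:
    """Sketch of a non-egress attestation receipt.

    Only a hash of the protected content, plus a signature over the
    receipt, leaves the operator's environment -- never the content.
    Field names and the HMAC stand-in are illustrative assumptions.
    """
    payload = {
        "control_id": control_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
    }
    # Sign a canonical serialization so the receipt is tamper-evident.
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(signing_key, canonical, hashlib.sha256).hexdigest()
    return payload

receipt = make_receipt(b"patient record ...", "PROTECT-7", b"demo-key")
# The serialized receipt carries no protected content:
assert "patient" not in json.dumps(receipt)
```

A real deployment would replace the HMAC with an asymmetric signature so a third party can verify the receipt without holding the operator’s key.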
Scope & Contents
- Part I, Foundations: Attestation assurance levels (AAL‑1 through AAL‑4), trust architecture, threat model, cross‑boundary attestation protocol. (§ 1–4)
- Part II, Governance Domains: Six domains (Govern, Identify, Protect, Attest, Measure, Respond), each with normative requirements for evidence generation. (§ 5–10)
- Part III, Agentic AI Controls: Tool‑call governance, MCP server trust, multi‑agent system controls, capability‑based access, human‑in‑the‑loop attestation, persistent state governance, delegation chains, behavioral drift detection. (§ 11–18)
- Part IV, Architecture: Non‑egress attestation, temporal binding, statistical safety measurement, third‑party auditability, legal preservation. (§ 19–23)
- Part V, Conformance: Maturity levels, scope designators, protocol profile registry, independent attestation providers (IAPs), qualified assessor program. (§ 24–28)
- Part VI, Crosswalks: NIST AI RMF, ISO/IEC 42001, EU AI Act, OWASP, NIST SP 800‑53, FedRAMP, OMB M‑25‑21 / M‑25‑22, DASF v3.0. (Annex A)
Key Resources
- Standard: OVERT 1.0 specification (PDF)
- Standard source: specification text (Markdown)
- IPR Policy: patent covenant, disclosures & licensing (HTML)
- Review feedback: overt‑[email protected]
Machine‑Readable Feed
- latest.json: current version metadata (JSON)
- feed.json: polling feed with all versions (JSON)
- versions.json: complete version index (JSON)
- latest.md: canonical Markdown for the latest release
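Consumers can poll these files to detect new releases. A sketch in Python, assuming the feed is served from some base URL; the `FEED_BASE` value and the `"version"` field name are assumptions for illustration — consult latest.json itself for the actual schema:

```python
import json
from urllib.request import urlopen

FEED_BASE = "https://example.org/overt"  # hypothetical base URL

def fetch_latest(base: str = FEED_BASE) -> dict:
    """Fetch and parse latest.json from the feed."""
    with urlopen(f"{base}/latest.json") as resp:
        return json.load(resp)

def needs_update(local_version: str, latest_metadata: dict) -> bool:
    """True when the feed advertises a release newer than ours.

    The "version" key is a hypothetical field name; check the
    published schema before relying on it.
    """
    remote = latest_metadata.get("version")
    return bool(remote) and remote != local_version

# Hypothetical payload standing in for a fetched latest.json:
print(needs_update("1.0", {"version": "1.1"}))  # → True
```

Since versions.json indexes every release, the same comparison can also drive a full mirror of historical versions rather than just an update check.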