OVERT 1.0

Observable Verification Evidence for Runtime Trust

An open standard for independently verifiable runtime evidence across AI systems. Observable. Attested. Content-safe.

Executive Summary

AI governance frameworks tell organizations what controls should exist. They generally do not specify how to produce independent, tamper-evident proof that those controls actually executed — on a given interaction, under a given configuration, at a given time. That gap leaves regulators with documentation instead of evidence, auditors with narratives instead of cryptographic receipts, and incident responders reconstructing events from operator-controlled logs.

OVERT closes that gap. It is an open standard for observable verification evidence at the AI runtime boundary. It specifies how to produce independently verifiable records that declared governance policies, security controls, and oversight actions executed — without exporting protected content from the operator’s environment.
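The non-egress idea can be sketched in a few lines: the runtime emits a signed record that binds a policy decision to a hash of the interaction content, so a verifier can confirm that the declared control executed without the protected content ever leaving the operator's environment. Everything below (field names, HMAC signing, the shared `SIGNING_KEY`) is an illustrative assumption, not part of the standard:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative; a real deployment would use an attestation provider's key


def make_evidence_record(content: bytes, policy_id: str, decision: str) -> dict:
    """Bind a policy decision to a specific interaction via a content hash,
    so the record is verifiable without exporting the content itself."""
    record = {
        "policy_id": policy_id,  # which declared control executed
        "decision": decision,    # e.g. "allow" / "block"
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_evidence_record(record: dict) -> bool:
    """A verifier checks the signature over the record fields alone;
    no access to the original content is required."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Note the design point: the only content-derived field is the hash, so the record itself is safe to hand to an auditor, while any later dispute about which content was processed can be settled by re-hashing inside the operator's environment.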

Where existing standards define objectives and management processes, OVERT operates one layer beneath: at the runtime boundary where AI systems actually process requests, execute tool calls, and produce outputs. It defines what a conformant runtime control system must prove, what an independent attestation provider must verify, and what a qualified assessor must examine when a conformance claim is made.

The standard applies to any AI system deployed in a setting where governance claims must be verifiable — healthcare, financial services, insurance, employment, federal procurement, and autonomous agentic systems where AI agents execute tool calls and make decisions without step-by-step human oversight.

Design Principles

What OVERT Covers

Foundations: Attestation assurance levels (AAL-1 through AAL-4), trust architecture, threat model, cross-boundary attestation protocol

Governance Domains: Six domains — Govern, Identify, Protect, Attest, Measure, Respond — each with normative requirements for evidence generation

Agentic AI Controls: Tool-call governance, MCP server trust, multi-agent system controls, capability-based access, human-in-the-loop attestation, persistent state governance, delegation chains, behavioral drift detection

Architecture: Non-egress attestation, temporal binding, statistical safety measurement, third-party auditability, legal preservation

Conformance: Maturity levels, scope designators, protocol profile registry, independent attestation providers (IAPs), qualified assessor program

Crosswalks: NIST AI RMF, ISO 42001, EU AI Act, OWASP, NIST SP 800-53, FedRAMP, OMB M-25-21/M-25-22, DASF v3.0
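As an illustration of the temporal-binding and third-party-auditability ideas listed under Architecture, the sketch below hash-chains evidence records so that deleting, reordering, or altering any record breaks the chain. The record layout and the all-zero genesis value are assumptions for the example, not normative OVERT requirements:

```python
import hashlib
import json


def chain_records(records: list[dict]) -> list[dict]:
    """Temporal binding: each evidence record carries the hash of its
    predecessor, making the sequence tamper-evident as a whole."""
    prev_hash = "0" * 64  # assumed genesis convention
    chained = []
    for rec in records:
        entry = dict(rec, prev_hash=prev_hash)
        # Hash covers the record body plus the predecessor link.
        prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        chained.append(dict(entry, entry_hash=prev_hash))
    return chained


def verify_chain(chained: list[dict]) -> bool:
    """An auditor replays the chain: every link and every hash must match."""
    prev_hash = "0" * 64
    for entry in chained:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body.get("prev_hash") != prev_hash:
            return False  # chain broken: record deleted or reordered
        prev_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if prev_hash != entry["entry_hash"]:
            return False  # record body was altered after the fact
    return True
```

Because each hash depends on every record before it, an operator cannot quietly rewrite history: any edit forces recomputation of the whole suffix, which is detectable the moment any earlier chain head has been shared with an independent attestation provider.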

Key Resources

Machine-Readable Feed