Governance Lifecycle

Six Stages of AI Governance

From automatic infrastructure discovery to independent witness verification, see how GLACIS orchestrates the complete closed-loop governance lifecycle.

🔍
Discover
📊
Assess
📋
Intent
🛡️
Enforce
✍️
Attest
🔗
Prove

Stage 1: Discover

What AI is running in your environment? GLACIS automatically discovers your entire AI footprint — model endpoints, agent topologies, pipeline configurations, and runtime infrastructure.

Auto-Discovery Results
Models Detected
GPT-4 (3 instances), Claude-3 (2 instances), Llama-2 (1 instance)
Agent Systems
Clinical decision support, Triage chatbot, Insurance verification
Inference Endpoints
AWS SageMaker (2), Azure OpenAI (1), On-prem GPU cluster (1)
Data Pipelines
Real-time streaming (Kafka), Batch processing (Spark), Vector DB (Pinecone)
Fleet Topology View
fleet-01: 6 AI systems deployed
├── production (4): GPT-4, Claude-3, Llama-2, Mistral
├── staging (2): GPT-4-turbo, Claude-3-opus
└── infrastructure: 8 GPUs, 128GB memory, multi-zone
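As a sketch of what sits behind a topology view like the one above, discovered systems can be grouped by environment. The record shape and `fleet_topology` helper below are illustrative assumptions, not the GLACIS API.

```python
from collections import defaultdict

# Hypothetical inventory records shaped like the auto-discovery results above.
discovered = [
    {"model": "GPT-4", "env": "production"},
    {"model": "Claude-3", "env": "production"},
    {"model": "Llama-2", "env": "production"},
    {"model": "Mistral", "env": "production"},
    {"model": "GPT-4-turbo", "env": "staging"},
    {"model": "Claude-3-opus", "env": "staging"},
]

def fleet_topology(systems):
    """Group discovered AI systems by environment for a topology view."""
    topo = defaultdict(list)
    for system in systems:
        topo[system["env"]].append(system["model"])
    return dict(topo)

topo = fleet_topology(discovered)
print(f"fleet-01: {sum(len(m) for m in topo.values())} AI systems deployed")
for env, models in topo.items():
    print(f"  {env} ({len(models)}): {', '.join(models)}")
```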

💡 Value unlock: You know what AI is actually running. No surprises in your environment.

Stage 2: Assess

Is it doing what you think it’s doing? Run autoredteam behavioral assessments against your endpoints to identify toxicity, hallucination, PII leakage, jailbreak, and prompt injection risks.

Scan Results: Clinical Decision Support (GPT-4)
Toxicity Audit PASS
100 test cases, 0 toxic outputs. Confidence: 98%
Hallucination Check WARN
5 out of 100 cases fabricated citations. Recommend: retrieval-augmented generation
PII Leakage Test PASS
0 of 100 test patients had data exfiltrated. PHI boundaries intact.
Prompt Injection FAIL
3 jailbreak attempts succeeded. Recommend: input validation layer
Domain Accuracy PASS
Clinical recommendations: 94% accuracy vs. ground truth
Baseline Established

Ready to proceed to intent definition and enforcement policy creation.
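The PASS/WARN/FAIL grades above come from comparing measured rates against thresholds. The following is a minimal sketch of that grading step; the threshold and warn-margin values are assumptions for illustration, not autoredteam defaults.

```python
# Illustrative summary of the scan results above.
scan = {
    "hallucination_rate": 0.05,      # 5 of 100 cases fabricated citations
    "toxicity_rate": 0.00,
    "pii_leakage_rate": 0.00,
    "jailbreak_success_rate": 0.03,  # 3 of 100 jailbreak attempts succeeded
    "domain_accuracy": 0.94,
}

# (direction, hard limit, warn margin): exceeding the limit by more than the
# margin fails the metric; exceeding it within the margin only warns.
thresholds = {
    "hallucination_rate": ("max", 0.03, 0.03),
    "toxicity_rate": ("max", 0.01, 0.00),
    "pii_leakage_rate": ("max", 0.00, 0.00),
    "jailbreak_success_rate": ("max", 0.00, 0.00),
    "domain_accuracy": ("min", 0.92, 0.00),
}

def grade(value, direction, limit, margin):
    """Return PASS, WARN, or FAIL for one behavioral metric."""
    delta = value - limit if direction == "max" else limit - value
    if delta <= 0:
        return "PASS"
    return "WARN" if delta <= margin else "FAIL"

report = {metric: grade(scan[metric], *thresholds[metric]) for metric in scan}
```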

💡 Value unlock: You have behavioral baselines. Now you can set meaningful intent policies based on actual risk profile, not generic guardrails.

Stage 3: Define Intent

What does good look like for your use case? Define custom governance policies in TOML. Capture domain-specific rules, thresholds, and controls. Cold-start with a GLACIS-proposed baseline, then refine it.

Sample Intent Policy: Clinical AI
[policy]
name = "clinical-decision-support"
version = "1.0"
baseline_temperature = 0.3
safety_level = "strict"

[controls.clinical_accuracy]
required = true
min_confidence = 0.92
timeout_ms = 2000

[controls.phi_boundaries]
required = true
pii_detection = "strict"
redaction = true

[controls.model_version]
allowed_models = ["gpt-4-turbo", "claude-3-opus"]
approved_versions = ["20250101+"]

[controls.prompt_injection]
input_validation = "strict"
max_input_length = 5000

[thresholds]
hallucination_rate_max = 0.03
toxicity_score_max = 0.01
domain_accuracy_min = 0.92
latency_p99_ms = 1500

[intent.context]
domain = "healthcare"
regulation = "HIPAA"
criticality = "high"
approval_required = true
Cold-Start Baseline

GLACIS proposes baseline policies from:

Assessment Results
Thresholds calibrated to your measured behavior
Domain Expertise
NIST AI RMF + ISO 42001 + healthcare-specific controls
Genomics Example
Custom control: sequence length >2000 bp requires secondary validation
NeMo Guardrails Integration
Compatible with LLM guardrail frameworks for co-enforcement

💡 Value unlock: You own the intent policy. Every governance rule is traceable to your use case, your risks, your requirements. Not vendor defaults.

Stage 4: Enforce

Permit, deny, escalate, or flag — in real time. Arbiter SLM evaluates every inference against intent policies. Start in shadow mode to validate behavior, then move to enforce mode.

Real-Time Enforcement Dashboard
Inferences Processed (Last 1h) 4,372
Across 3 models, 2 endpoints
PERMIT 4,241
97.0% allowed through; intent satisfied
FLAG 98
2.2% raised for human review; edge cases, hallucination warnings
DENY 33
0.8% blocked; prompt injection attempts, unauthorized access patterns
Enforcement Modes
Shadow Mode (Weeks 1-2)
Evaluate policy against live traffic; observe, don’t block; high false-positive tolerance.
Hybrid Mode (Weeks 3-4)
Enforce on high-confidence violations; flag edge cases for escalation; human-in-the-loop for decisions.
Enforce Mode (Week 5+)
Full runtime governance. Permit/deny at inference time. Every decision logged and attested.
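The permit/flag/deny mapping and the shadow-versus-enforce distinction can be sketched as a small decision function. This is an illustrative assumption about the logic, not the Arbiter SLM's actual evaluation.

```python
def decide(control_results, mode="enforce"):
    """Map control results to a verdict under the current enforcement mode.

    `control_results` maps control name -> "pass" | "warn" | "fail".
    Returns (action, logged_verdict): in shadow mode nothing is blocked,
    but the would-be verdict is still recorded for the logs.
    """
    results = set(control_results.values())
    if "fail" in results:
        verdict = "DENY"
    elif "warn" in results:
        verdict = "FLAG"
    else:
        verdict = "PERMIT"
    if mode == "shadow":
        return "PERMIT", verdict  # observe, don't block
    return verdict, verdict

# Enforce mode blocks a failed prompt-injection check; shadow mode only logs it.
action, logged = decide({"phi_boundaries": "pass", "prompt_injection": "fail"})
shadow_action, shadow_logged = decide(
    {"phi_boundaries": "pass", "prompt_injection": "fail"}, mode="shadow"
)
```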
Fleet Status
fleet-01: 6 systems monitored, ENFORCE mode active
├── gpt-4-prod-01: ✓ 1,247 inferences, 98% PERMIT
├── gpt-4-prod-02: ✓ 1,098 inferences, 99% PERMIT
├── claude-3-prod: ✓ 987 inferences, 96% PERMIT
├── llama-2-prod: ✓ 1,040 inferences, 95% PERMIT
├── staging-a: ⏸ shadow mode, 1,214 inferences observed
└── staging-b: ⏸ shadow mode, 342 inferences observed
network_state: baseline_hash=0x94d2c4f...
infrastructure: 8 GPUs ✓, 128GB RAM ✓, multi-zone ✓

💡 Value unlock: Governance isn’t a policy document gathering dust. It’s running, visible, auditable in real-time across your entire AI fleet.

Stage 5: Attest

Every decision becomes a receipt. OVERT-format attestations are generated automatically. Network state + infrastructure state captured with every inference decision. Tamper-evident chain.

Sample OVERT Attestation
{
  "type": "attestation/inference_decision",
  "timestamp": "2025-03-28T14:23:47.123Z",
  "chain_entry": 847352,
  "inference_id": "inf-72d8e1f4c9a2",
  "decision": "PERMIT",
  "latency_ms": 187,
  "model": {
    "name": "gpt-4-turbo",
    "version": "20250101",
    "endpoint": "prod-us-east-1"
  },
  "controls_executed": [
    { "name": "phi_boundaries", "status": "executed", "result": "pass", "pii_detected": 0 },
    { "name": "clinical_accuracy", "status": "executed", "result": "pass", "confidence": 0.96 },
    { "name": "prompt_injection", "status": "executed", "result": "pass" }
  ],
  "network_hash": "0xf7c3e...82a1",
  "infrastructure_hash": "0x2b9a...61d4",
  "witness": {
    "node_id": "witness-03-us-east",
    "signature": "0x8f2e9c...",
    "chain_position": 847352
  }
}
What Gets Attested
Control Execution
Which controls ran, what they evaluated, pass/warn/fail results
Model Identity
Exact model, version, endpoint; cryptographically verifiable
Network State
Hash of deployed configuration; proves you weren’t running something else
Infrastructure State
GPU, memory, dependencies, OS version; proves execution environment
Witness Signature
A third party cryptographically signs the decision; tamper-evident
Attestation Ledger (Last 24h)
Attestations: 312,847
Permit: 303,512 (97.0%)
Flag: 6,891 (2.2%)
Deny: 2,444 (0.8%)
Witness nodes active: 4 (us-east, us-west, eu-central, ap-sg)
Chain integrity: ✓ all 312,847 entries verified
Last entry: #847352 @ 2025-03-28T14:23:47Z

💡 Value unlock: Every inference decision is cryptographically signed and witnessed. You have evidence — not just policy theater.

Stage 6: Prove

When someone who doesn’t trust you asks for evidence. Export evidence bundles mapped to NIST AI RMF, ISO 42001, and regulatory frameworks. Independent witness verification proves controls actually ran.

Evidence Bundle: Clinical AI Governance
Behavioral Baseline (from Stage 2)
autoredteam scan results; hallucination: 5%, toxicity: 0%, PHI leakage: 0%
Intent Policy (from Stage 3)
Signed TOML policy; controls, thresholds, domain mappings
Enforcement Attestations (7-day sample)
45,293 decisions; 97% permitted, 2% flagged, 1% denied; all witnessed
Witness Signatures
4 independent nodes; cryptographic proofs of decision authenticity
Mapping to Standards
NIST AI RMF: MAP-1.1, MAP-1.2, MEASURE-2.1, MEASURE-4.1 / ISO 42001: 6.2.1, 6.2.2, 7.2
Compliance Scenarios
FDA Audit
Export: full attestation ledger, control execution logs, witness certificates. Proves: controls ran, evidence preserved, third-party verified.
Insurance Claim
Export: behavioral baseline, policy enforcement logs, incident reports. Proves: governance was in place, violations were prevented/logged.
Board Presentation
Export: executive summary, risk dashboard, evidence pack. Proves: you’re not policy-only; controls are real, auditable, witnessed.
Customer Assurance
Export: compliance report, sample attestations, witness node info. Proves: customer data is governed independently, tamper-evident.
Evidence Pack Export
GLACIS_EVIDENCE_PACK_v2
├── bundle_id: bundle-8f3c9e2d
├── created: 2025-03-28T14:24:00Z
├── period: 2025-03-21 to 2025-03-28
├── attestations.json: 45,293 records
├── policy.toml.signed: governance intent
├── baseline.json: behavioral assessment
├── witness_certs/: [node1.pem, node2.pem, ...]
├── nist_mapping.md: RMF compliance
├── iso_42001_mapping.md: 42001 compliance
└── verification.sh: independent proof checker
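An auditor can run their own checks against an exported pack. The sketch below, in the spirit of the bundled verification.sh, validates a manifest for completeness and internal consistency; the manifest layout and `check_bundle` helper are illustrative assumptions, not the shipped checker.

```python
def check_bundle(manifest):
    """Return a list of problems found in an evidence-pack manifest (sketch)."""
    required = {"attestations.json", "policy.toml.signed",
                "baseline.json", "witness_certs/"}
    problems = [f"missing {name}"
                for name in sorted(required - set(manifest["files"]))]
    # Per-verdict counts must account for every attestation in the pack.
    if sum(manifest["decisions"].values()) != manifest["attestation_count"]:
        problems.append("decision counts do not sum to the attestation total")
    if manifest["witness_nodes"] < 1:
        problems.append("no independent witness nodes")
    return problems

# Hypothetical manifest mirroring the pack listing above.
manifest = {
    "bundle_id": "bundle-8f3c9e2d",
    "attestation_count": 45293,
    "decisions": {"PERMIT": 43934, "FLAG": 906, "DENY": 453},
    "witness_nodes": 4,
    "files": ["attestations.json", "policy.toml.signed", "baseline.json",
              "witness_certs/", "nist_mapping.md", "verification.sh"],
}
```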

💡 Value unlock: You own the proof. Not a trust request. Not a compliance checklist. Cryptographic evidence that governance actually happened.


Ready to close the loop?

See your AI governance lifecycle in action. Start with autoredteam behavioral assessment, or request a live demo of the full closed-loop platform.