Understanding the Two Approaches
AI governance tools fall into distinct categories based on what they verify and when they verify it. Understanding this distinction is crucial for building a complete compliance posture.
The Policy Documentation Approach (Credo AI)
Credo AI represents the policy-first approach to AI governance. The platform helps organizations:
- Document policies: Create and manage AI governance policies, acceptable use guidelines, and risk frameworks
- Manage workflows: Route AI projects through approval processes, risk assessments, and compliance reviews
- Generate reports: Produce governance artifacts for auditors, boards, and regulators
- Track assessments: Conduct periodic model evaluations and fairness tests
This approach works well for organizations needing to establish governance foundations, align stakeholders, and demonstrate that policies exist. It addresses the "what should we do?" question.
The Runtime Evidence Approach (GLACIS)
GLACIS takes a fundamentally different approach: instead of documenting what controls should exist, it proves those controls actually executed. The platform:
- Generates attestations: Creates cryptographic proof at inference time that specific controls ran
- Produces tamper-evident logs: Records that cannot be retroactively modified without detection
- Verifies execution: Confirms guardrails, filters, and safety mechanisms actually activated
- Maps to frameworks: Links runtime evidence to regulatory requirements (EU AI Act, NIST AI RMF, ISO 42001)
This approach addresses the "did our controls actually work?" question—something that periodic assessments and policy documentation cannot answer.
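As a concrete (and deliberately simplified) illustration, the sketch below shows what a per-inference attestation record could look like. It is a minimal sketch, not the GLACIS SDK: the function names and fields are hypothetical, and an HMAC stands in for the asymmetric signing a production system would use so that third parties can verify records without holding the key.

```python
import hashlib
import hmac
import json
import time

# Stand-in for a key held in a real key-management system.
SIGNING_KEY = b"replace-with-a-managed-secret"

def attest_inference(request_id: str, model_id: str, controls_run: list[str]) -> dict:
    """Build a signed record asserting which controls executed for one inference."""
    record = {
        "request_id": request_id,
        "model_id": model_id,
        "controls_run": controls_run,   # e.g. ["pii_filter", "toxicity_guardrail"]
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # HMAC here is illustrative; a production system would use an
    # asymmetric signature so auditors can verify independently.
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

attestation = attest_inference("req-8841", "clinical-llm-v3",
                               ["pii_filter", "toxicity_guardrail"])
```

The key property is that the record is produced at inference time, by the pipeline itself, rather than reconstructed later from policy documents.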
The Verification Vacuum
Consider a healthcare organization deploying an AI clinical decision support system. They might use Credo AI to:
- Document that the system has PII filtering policies
- Record that bias testing was performed before deployment
- Show that human oversight procedures are documented
But when an auditor, regulator, or plaintiff asks: "Did the PII filter actually run on the inference that exposed patient data?"—Credo AI cannot answer this question. It documents that a PII filter policy exists, not that it executed.
This is the verification vacuum: the gap between "we have controls" and "our controls ran."
The Compliance Blind Spot
Most AI governance platforms focus on pre-deployment documentation—policies, risk assessments, testing records. But regulations like the EU AI Act (Articles 12-13) and HIPAA require ongoing operational evidence. Documentation proves intent; attestation proves execution.
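To make "tamper-evident" concrete: the standard technique is a hash chain, in which each log entry commits to the hash of the entry before it, so editing or deleting any past record breaks every subsequent link. A minimal sketch of the general technique follows; it illustrates the idea, not any platform's actual log format.

```python
import hashlib
import json

GENESIS = "0" * 64

class HashChainedLog:
    """Append-only log where each entry commits to the previous entry's hash.

    Retroactively modifying any entry invalidates every later link, so
    tampering is detectable by re-walking the chain. Illustrative only.
    """

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = GENESIS

    def append(self, event: dict) -> None:
        entry = {"event": event, "prev_hash": self._last_hash}
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute every link; returns False if any entry was altered."""
        prev = GENESIS
        for entry in self.entries:
            body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```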
When to Use Each Platform
Choose Credo AI When You Need To:
Establish Governance Foundations
You’re building an AI governance program from scratch and need to define policies, risk categories, and approval workflows.
Align Stakeholders
Multiple teams (legal, engineering, compliance, business) need a shared view of AI governance requirements and responsibilities.
Manage Pre-Deployment Reviews
AI projects need to go through formal risk assessments and approvals before production deployment.
Generate Governance Reports
Board members, investors, or customers need documentation showing your governance posture and policy adherence.
Choose GLACIS When You Need To:
Prove Controls Executed
Regulators, auditors, or legal teams need evidence that specific controls ran on specific inferences.
Defend Against Liability
When something goes wrong, you need tamper-evident records proving your guardrails were active at the time of the incident.
Meet Operational Logging Requirements
Regulations require ongoing evidence of control execution, not just pre-deployment documentation (EU AI Act Article 12).
Close the Audit Gap
SOC 2, ISO 42001, or sector-specific audits need verifiable evidence—not just documented policies.
Using GLACIS and Credo AI Together
These platforms address different layers of the compliance stack. A mature AI governance program might use both:
Complementary Deployment Pattern
Policy Layer (Credo AI)
Define AI governance policies, acceptable use guidelines, and risk categories. Route new AI initiatives through approval workflows.
Assessment Layer (Credo AI)
Conduct pre-deployment risk assessments, fairness testing, and stakeholder sign-offs. Document the evaluation results.
Runtime Layer (GLACIS)
Deploy attestation infrastructure to generate cryptographic proof that the documented controls execute at inference time.
Evidence Layer (GLACIS)
Link runtime attestation records back to documented policies, creating an auditable chain from policy to execution.
This layered approach answers both questions auditors ask: "What controls do you have?" (Credo AI) and "Did those controls run?" (GLACIS).
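One way to picture that chain: each attestation record carries the identifier of the documented policy it enforces, so an auditor can walk from a policy clause to the signed records proving it ran. The sketch below is hypothetical; the identifiers and field names are illustrative, not either platform's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PolicyLink:
    """Ties one runtime attestation back to a documented policy clause.

    Identifiers are illustrative; a real deployment would use whatever IDs
    the policy platform and attestation layer actually expose.
    """
    policy_id: str              # e.g. a documented policy or control identifier
    control_name: str           # the runtime control implementing that policy
    attestation_id: str         # the signed record proving the control ran
    framework_refs: tuple[str, ...]  # regulatory clauses the evidence maps to

link = PolicyLink(
    policy_id="POL-PII-007",
    control_name="pii_filter",
    attestation_id="att-2024-08-8841",
    framework_refs=("EU AI Act Art. 12",),
)
```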
Regulatory Requirements Mapping
Different regulations emphasize different aspects of AI governance. Understanding which requirements each platform addresses helps build a complete compliance architecture.
Regulatory Coverage
| Regulation / Requirement | Credo AI Coverage | GLACIS Coverage |
|---|---|---|
| EU AI Act Art. 9 (Risk Management) | Strong - Policy documentation | Partial - Supports implementation |
| EU AI Act Art. 12 (Logging) | Limited - Assessment records | Strong - Runtime logging |
| EU AI Act Art. 17 (Quality Management) | Strong - QMS documentation | Partial - Supports verification |
| NIST AI RMF (Govern) | Strong - Policy framework | Partial |
| NIST AI RMF (Measure) | Partial - Assessment metrics | Strong - Runtime metrics |
| ISO 42001 (AI Management System) | Strong - AIMS documentation | Strong - Operational evidence |
| HIPAA (Technical Safeguards) | Limited - Policy-only | Strong - Execution proof |
| SOC 2 (Operating Effectiveness) | Limited - Design evidence | Strong - Operating evidence |
Technical Architecture Differences
Credo AI: Workflow-Centric
Credo AI operates as a governance workflow platform. It integrates with existing tools (Jira, ServiceNow, data catalogs) to create approval processes and documentation trails. Policy, legal, and compliance teams use the platform to manage AI initiatives.
- Integration model: Workflow connectors, API access for metadata
- Data flow: Project metadata and assessment results flow into the platform
- Output: Reports, dashboards, governance artifacts
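As a rough illustration of that metadata-in, reports-out flow, a record pushed through a workflow connector might look something like the following. This is a hypothetical shape, not Credo AI's actual API or schema:

```python
# Hypothetical payload shape only; not Credo AI's actual schema.
project_metadata = {
    "project": "clinical-decision-support",
    "risk_tier": "high",  # per the organization's internal risk framework
    "assessments": [
        {"type": "fairness_test", "status": "passed", "date": "2024-05-02"},
        {"type": "privacy_review", "status": "approved", "date": "2024-05-10"},
    ],
    "approvals": ["legal", "compliance", "security"],
}
# A workflow connector would forward records like this into the governance
# platform, where they feed dashboards, reviews, and audit reports.
```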
GLACIS: Pipeline-Centric
GLACIS operates within the AI inference pipeline itself. It wraps or intercepts inference calls to generate attestation records at the point of execution. Engineering and security teams deploy it as infrastructure.
- Integration model: SDK, proxy, or sidecar in the inference path
- Data flow: Inference requests pass through attestation layer
- Output: Cryptographically signed attestation records, audit logs
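A simplified picture of the in-process (SDK) variant: a wrapper runs each guardrail, calls the model, and hands an attestation off to the signing layer. The names below are illustrative, not the GLACIS SDK.

```python
import hashlib
from typing import Callable

def attested(model_call: Callable[[str], str],
             guardrails: list[Callable[[str], str]],
             emit_attestation: Callable[[dict], None]) -> Callable[[str], str]:
    """Wrap a model call so every request passes through guardrails and
    emits an attestation record. Illustrative pattern only."""
    def wrapped(prompt: str) -> str:
        executed = []
        for guard in guardrails:
            prompt = guard(prompt)          # e.g. redact PII before the model sees it
            executed.append(guard.__name__)
        response = model_call(prompt)
        emit_attestation({                  # hand off to the signing/logging layer
            "controls_run": executed,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        })
        return response
    return wrapped
```

Proxy and sidecar deployments intercept at the network layer instead of in-process, but follow the same pattern: the attestation is generated on the request path, not reconstructed afterward.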
Decision Framework
Use this framework to determine which platform—or combination—fits your needs:
Primary Need Assessment
If your primary challenge is:
"We don’t have documented AI governance policies or processes"
Start with Credo AI to establish governance foundations
If your primary challenge is:
"We have policies but can’t prove our controls actually run in production"
Start with GLACIS to generate runtime evidence
If your primary challenge is:
"We need both policy documentation AND operational proof for comprehensive compliance"
Deploy both platforms in complementary roles
An Honest Assessment
As GLACIS, we have an obvious interest in this comparison. So here’s an honest summary:
Credo AI does things GLACIS doesn’t: Policy template libraries, stakeholder collaboration workflows, pre-deployment risk assessment frameworks, governance dashboards for executives. If you need to build an AI governance program from scratch, Credo AI provides valuable structure.
GLACIS does things Credo AI doesn’t: Runtime attestation, cryptographic proof of control execution, tamper-evident logging, per-inference evidence generation. If you need to prove your controls actually work—not just that they’re documented—GLACIS fills a gap that policy platforms cannot address.
Neither platform is “better”—they solve different problems. A complete AI governance architecture likely needs both: policy infrastructure to define what should happen, and attestation infrastructure to prove what did happen.
Frequently Asked Questions
What is the main difference between GLACIS and Credo AI?
GLACIS provides runtime cryptographic attestation that proves AI controls actually executed at inference time. Credo AI provides policy documentation and governance workflow management. GLACIS generates tamper-evident proof; Credo AI manages governance processes and documentation.
Can GLACIS and Credo AI be used together?
Yes. Credo AI can document the policies and governance frameworks, while GLACIS proves those policies actually executed in production. They address different parts of the compliance stack: Credo AI handles policy management, GLACIS provides runtime verification.
Which platform is better for EU AI Act compliance?
The EU AI Act requires both policy documentation (Articles 17-18) and operational evidence (Articles 12-13). Credo AI addresses documentation and quality management requirements. GLACIS addresses runtime logging and evidence requirements. Complete compliance likely requires both approaches.
Is Credo AI a competitor to GLACIS?
Partially. Both platforms serve AI governance needs, but they address different layers. Credo AI competes in the policy management and GRC space. GLACIS competes in the runtime assurance and evidence generation space. Organizations with mature compliance needs often require capabilities from both layers.
What if I already use Credo AI?
GLACIS complements your existing Credo AI deployment. Your documented policies define what controls should run; GLACIS proves those controls actually executed. Think of it as closing the loop between policy definition and operational verification.