Compliance Guide • Updated December 2025

Colorado AI Act Compliance Guide

Complete guide to SB 24-205. Requirements for high-risk AI systems, compliance deadlines, and implementation roadmap.

18 min read • 5,500+ words
Joe Braidwood, CEO, GLACIS

Executive Summary

On May 17, 2024, Colorado Governor Jared Polis signed SB 24-205, making Colorado the first US state to enact comprehensive legislation regulating AI systems used in consequential decisions. Effective June 30, 2026 (postponed from the original February 1, 2026 date), the law establishes mandatory requirements for developers and deployers of high-risk AI systems.[1][2]

The Colorado AI Act targets algorithmic discrimination—AI-driven bias in employment, housing, credit, healthcare, education, insurance, government services, and legal services. Unlike the EU AI Act's prescriptive approach, Colorado requires "reasonable care" and provides an affirmative defense for organizations following recognized risk management frameworks like NIST AI RMF and ISO 42001.[3]

Key takeaway: Organizations have until June 30, 2026 to implement risk management policies, with impact assessments and consumer disclosure mechanisms required by February 1, 2027. The Colorado Attorney General has exclusive enforcement authority with penalties up to $20,000 per violation, but offers a 60-day cure period for good-faith compliance efforts.

  • Effective date: June 30, 2026[2]
  • Maximum penalty per violation: $20,000[1]
  • First state with a comprehensive AI law[3]
  • High-risk domains covered: 8[4]

What is the Colorado AI Act?

The Colorado Artificial Intelligence Act (SB 24-205), signed into law on May 17, 2024, represents the first comprehensive state-level AI regulation in the United States. While other states have passed targeted AI bills addressing specific use cases (like Illinois' biometric privacy law or New York City's automated employment decision tool law), Colorado's legislation establishes broad requirements governing AI systems across multiple high-stakes domains.[1][3]

The law is modeled conceptually on the EU AI Act but adapted to US legal frameworks. Rather than the EU's risk-tiered classification system with prohibited uses, limited-risk categories, and extensive compliance obligations, Colorado takes a more streamlined approach: it identifies "high-risk" AI systems based on their use in consequential decisions and requires both developers and deployers to exercise "reasonable care" to prevent algorithmic discrimination.[4]

Legislative History and Timeline

  • May 17, 2024: Governor Polis signs SB 24-205 into law
  • August 28, 2025: Governor Polis signs SB 25B-004, postponing implementation
  • June 30, 2026: New effective date for compliance requirements[2]
  • February 1, 2027: Deployer disclosure and impact assessment requirements take effect[4]

The five-month postponement from February 1, 2026 to June 30, 2026 gives organizations additional time to establish compliance infrastructure, but also reflects industry concerns about implementation feasibility raised during the initial comment period.

Scope & Applicability

The Colorado AI Act applies to any person or entity "doing business in Colorado" that develops or deploys high-risk AI systems. This broad jurisdictional language means that organizations headquartered outside Colorado must comply if they serve Colorado residents or make AI-driven decisions affecting them.[1]

Who Must Comply

The law establishes two distinct regulated parties with different obligations:

Developers

Persons doing business in Colorado who develop or substantially modify an AI system. This includes foundation model providers, algorithm developers, and companies that customize third-party AI systems beyond basic configuration.

Deployers

Persons doing business in Colorado who deploy a high-risk AI system. This includes employers using AI in hiring, lenders using AI in credit decisions, landlords using tenant screening tools, and healthcare providers using clinical AI.

Important note: An organization can be both a developer and a deployer. For example, a healthcare system that builds its own clinical decision support AI and deploys it internally must comply with both sets of requirements.

What Qualifies as High-Risk

An AI system becomes "high-risk" when it is deployed to make, or is a substantial factor in making, a consequential decision. The law defines consequential decisions as those with a "material legal or similarly significant effect" on consumers in eight domains:[4]

High-Risk AI Domains

| Domain | Examples | Risk Context |
|---|---|---|
| Education | Admissions scoring, academic tracking | Access to educational opportunities |
| Employment | Resume screening, interview scoring, promotion | Livelihood and career advancement |
| Financial Services | Credit scoring, loan approval, underwriting | Access to capital and financial products |
| Government Services | Benefits eligibility, fraud detection | Access to essential public services |
| Healthcare | Diagnosis assistance, treatment recommendations | Health outcomes and medical care |
| Housing | Tenant screening, rental approval | Access to housing and shelter |
| Insurance | Risk assessment, claims processing, pricing | Access to insurance coverage |
| Legal Services | Case outcome prediction, legal research tools | Access to justice and legal representation |

Notable Exemptions

The Colorado AI Act includes several important exemptions:

  • Narrow-purpose technologies (for example, calculators, spam filters, spell-checkers, and cybersecurity tools) when they are not a substantial factor in consequential decisions
  • Small deployers with fewer than 50 full-time employees, which are exempt from certain deployer obligations under specified conditions
  • AI systems already subject to equivalent or stricter oversight by a federal regulator

Key Definitions

Understanding the Colorado AI Act requires familiarity with four critical terms that structure the law's obligations:

Algorithmic Discrimination

The law defines algorithmic discrimination as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors an individual or group on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or any other classification protected under Colorado or federal law.

This definition is critically important because it establishes the harm the law seeks to prevent. Unlike general "bias" or "unfairness," algorithmic discrimination specifically refers to legally protected categories—tying AI governance to existing anti-discrimination law.

Key Legal Distinction

The Colorado AI Act does not prohibit all forms of AI bias or unfair outcomes—only those that result in unlawful discrimination against protected classes. An AI system could produce unequal outcomes based on non-protected characteristics (e.g., credit score, work history) without violating the law, as long as those outcomes don't create disparate impact on protected groups.

Consequential Decision

A consequential decision is any decision that has a material legal or similarly significant effect on the provision or denial to any consumer of education enrollment or opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, health-care services, housing, insurance, or a legal service.

The phrase "substantial factor in making" is deliberately broad. An AI system need not make the final decision autonomously to qualify as high-risk—it only needs to significantly influence the outcome. This captures AI systems where humans retain final decision authority but rely heavily on AI-generated recommendations.

Developer vs. Deployer

The law creates a two-party framework with distinct obligations:

Developer

A person doing business in Colorado who develops or substantially modifies an AI system. Key questions for determining developer status:

  • Did you design the algorithm or model architecture?
  • Did you train or fine-tune the model?
  • Did you materially alter how a third-party model makes decisions?

Deployer

A person doing business in Colorado who deploys a high-risk AI system. Key indicator: you use the AI system to make or assist in consequential decisions about Colorado consumers. This includes:

  • Employers using resume screening AI
  • Lenders using credit risk models
  • Healthcare providers using diagnostic AI
  • Landlords using tenant screening tools

Developer Requirements

Developers of high-risk AI systems must comply with five core obligations designed to ensure transparency, enable downstream risk management, and facilitate accountability. These requirements take effect June 30, 2026.[2]

1. Duty of Reasonable Care

Developers must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk AI system.[4]

This standard is deliberately flexible—"reasonable care" is a fact-specific inquiry that turns on factors such as the nature and intended uses of the system, the severity of the foreseeable discrimination risks, and the safeguards a prudent developer would adopt in the circumstances.

2. Documentation and Information Disclosure

Developers must make available to deployers (or other developers) documentation necessary to understand system behavior and assess discrimination risks. Required disclosures include:[4]

  • General statement describing reasonably foreseeable uses and known harmful or inappropriate uses
  • Documentation through artifacts such as model cards, dataset cards, or impact assessments necessary for deployers to complete their own assessments
  • Additional documentation reasonably necessary to help deployers understand system outputs and monitor for discrimination risks
  • Information enabling testing for algorithmic discrimination in specific deployment contexts

This language explicitly references model cards and dataset cards—documentation formats pioneered by researchers at Google and Microsoft to standardize AI transparency. Organizations can leverage existing model card frameworks (e.g., Mitchell et al. 2019) to satisfy these requirements.
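
For developers building this documentation from scratch, a minimal sketch of a machine-readable model card is shown below. The field names and example values are illustrative assumptions, not statutory requirements, and Mitchell et al.'s full template contains more sections.

```python
# A minimal model-card sketch (field names and values are illustrative, not
# mandated by SB 24-205; see Mitchell et al. 2019 for the complete template).
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]        # known harmful or inappropriate uses
    training_data_summary: str
    evaluation_data_summary: str
    fairness_metrics: dict[str, float]  # e.g. selection-rate ratios by group
    known_limitations: list[str]

card = ModelCard(
    model_name="resume-screener",
    version="2.1.0",
    intended_uses=["Rank applicants for recruiter review"],
    out_of_scope_uses=["Fully automated rejection without human review"],
    training_data_summary="1.2M historical applications, 2018-2024",
    evaluation_data_summary="Held-out 2024 applications, stratified by demographic group",
    fairness_metrics={"selection_rate_ratio_sex": 0.91, "selection_rate_ratio_race": 0.87},
    known_limitations=["Lower precision for non-US degree formats"],
)

# Serialize for sharing with deployers as part of the required disclosures.
print(json.dumps(asdict(card), indent=2))
```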

3. Public Disclosure of AI Systems

Developers must publicly disclose summaries of high-risk AI systems they offer. This creates a registry-like transparency mechanism allowing researchers, advocates, and regulators to understand the scope of high-risk AI deployment in Colorado.[4]

4. Discrimination Risk Reporting

Developers must disclose to the Colorado Attorney General and known deployers any known or reasonably foreseeable risks of algorithmic discrimination within 90 days after discovery or receipt of a credible report.[5]

This incident reporting obligation is analogous to data breach notification laws. It requires developers to:

  • Monitor for credible reports and internal findings of algorithmic discrimination
  • Evaluate whether a discovered risk is known or reasonably foreseeable
  • Notify the Colorado Attorney General and known deployers within 90 days of discovery or receipt of a credible report

5. Impact Assessment Support

Developers must provide deployers with sufficient information to conduct their own impact assessments. This creates a chain of accountability: developers build systems with transparency in mind, deployers assess context-specific risks, and both parties share responsibility for preventing algorithmic discrimination.[4]

Deployer Requirements

Deployers of high-risk AI systems face more extensive obligations than developers, reflecting their direct relationship with affected consumers. Deployer requirements take effect in two phases:[4]

1. Duty of Reasonable Care

Like developers, deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. For deployers, this means understanding how AI systems behave in their specific use context and implementing safeguards against biased outcomes.[4]

2. Risk Management Policy and Program

Deployers must implement a risk management policy and program governing the deployment of high-risk AI systems. The policy must specify and incorporate:[4]

Risk Management Policy Components

| Component | Description |
|---|---|
| Principles | Organizational values and commitments regarding AI fairness and non-discrimination |
| Processes | Workflows for identifying, documenting, and mitigating algorithmic discrimination risks |
| Personnel | Roles and responsibilities for AI governance, including executive accountability |
| Identification Methods | Testing, monitoring, and auditing procedures to detect discrimination |
| Documentation Requirements | Record-keeping for risk assessments, testing results, and mitigation actions |
| Mitigation Measures | Remediation strategies when discrimination is detected |

This requirement closely mirrors the NIST AI RMF Govern function and the ISO 42001 AI management system approach. Organizations that have implemented these frameworks will find significant overlap with Colorado's requirements.

3. Impact Assessments (Effective February 1, 2027)

Deployers must complete impact assessments for each high-risk AI system before deployment and annually thereafter. The impact assessment must document:[4]

  • The system's purpose, intended use cases, and deployment context
  • Known or reasonably foreseeable risks of algorithmic discrimination and the steps taken to mitigate them
  • The categories of data processed as inputs and the outputs the system produces
  • The metrics used to evaluate performance and the system's known limitations
  • Transparency measures, including consumer notices
  • Post-deployment monitoring and user safeguards

Impact assessments must be provided to the Colorado Attorney General upon request—they are not proactively submitted but must be available for regulatory inspection.
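
One lightweight way to operationalize this is a completeness check run before deployment and at each annual review. In the sketch below, the section names paraphrase the elements listed above and would need to be confirmed against Attorney General rulemaking.

```python
# A minimal completeness check for an impact-assessment record
# (section names are paraphrases of the statutory contents, not official headings).
REQUIRED_SECTIONS = [
    "purpose_and_intended_use",
    "discrimination_risks_and_mitigations",
    "data_categories_inputs_outputs",
    "performance_metrics_and_limitations",
    "transparency_and_consumer_notice",
    "post_deployment_monitoring",
]

def missing_sections(assessment: dict) -> list[str]:
    """Return the sections that are absent or empty (an empty list means complete)."""
    return [s for s in REQUIRED_SECTIONS if not assessment.get(s)]

draft = {
    "purpose_and_intended_use": "Rank rental applications for leasing agents",
    "discrimination_risks_and_mitigations": "Selection rates tested by race and sex; thresholds adjusted",
    "data_categories_inputs_outputs": "Credit history, eviction records -> applicant risk score",
}
print("Missing sections:", missing_sections(draft))  # flag gaps before sign-off
```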

4. Consumer Disclosures (Effective February 1, 2027)

Deployers must provide clear and conspicuous notice to consumers when a high-risk AI system is used to make or substantially inform a consequential decision about them. The notice must include:[4]

  • Purpose and nature of the AI system
  • Types of data collected and how it is used
  • Data sources feeding the AI system
  • Consumer rights including rights to opt out, correct data, and appeal decisions
  • Contact information for inquiries and appeals
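
A plain-language template covering these elements might look like the sketch below; the wording, field names, and contact address are illustrative placeholders rather than statutory language.

```python
# A sketch of a plain-language consumer notice (placeholder values throughout).
NOTICE_TEMPLATE = """\
We use an automated system ({system_name}) to help make decisions about {decision}.
It considers: {data_types}. This data comes from: {data_sources}.
You may opt out of profiling, ask us to correct your data, or appeal a decision.
Questions or appeals: {contact}."""

print(NOTICE_TEMPLATE.format(
    system_name="TenantScreen v3",
    decision="rental applications",
    data_types="credit history, income verification, prior rental history",
    data_sources="credit bureaus and your application",
    contact="privacy@example-property.com",
))
```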

5. Management and Oversight

Deployers must designate personnel responsible for implementing the risk management program and must ensure appropriate oversight of high-risk AI system deployment. This includes executive accountability—leadership must be informed of AI-related risks and mitigation efforts.[4]

Consumer Rights

The Colorado AI Act establishes three core rights for consumers affected by high-risk AI systems. These rights take effect February 1, 2027 and create enforceable obligations for deployers.[4]

Right to Meaningful Explanation

Consumers have the right to receive a statement disclosing:

  • The principal reason or reasons for the consequential decision
  • The degree to which the AI system contributed to the decision
  • The types of data processed in making the decision and their sources

This explanation must be provided in plain language—not technical jargon. For example, a job applicant rejected by an AI screening tool has the right to understand which factors (e.g., employment gaps, keyword matching, assessment scores) most influenced the rejection.
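
For a simple scoring model, one way to surface the most influential factors is to rank per-feature contributions, as in the hypothetical sketch below. The feature names and weights are invented, and attribution for more complex models would require techniques such as SHAP.

```python
# A sketch of extracting "principal reasons" from a linear scoring model
# (feature names and weights are hypothetical examples).
def principal_reasons(weights: dict[str, float], applicant: dict[str, float], top_n: int = 3):
    contributions = {f: weights[f] * applicant.get(f, 0.0) for f in weights}
    # The most negative contributions pushed the score toward rejection.
    return sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]

weights = {"employment_gap_months": -0.08, "keyword_match": 0.6, "assessment_score": 0.9}
applicant = {"employment_gap_months": 14, "keyword_match": 0.2, "assessment_score": 0.4}

for feature, contribution in principal_reasons(weights, applicant):
    print(f"{feature}: contribution {contribution:+.2f}")
```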

Right to Correct Data

Consumers may request correction of personal data used by the AI system if they believe it is inaccurate. The deployer must:

  • Provide a mechanism for consumers to submit corrections
  • Review the request and correct inaccurate personal data
  • Take the corrected data into account in any re-evaluation of the decision

Right to Appeal

Consumers have the right to appeal adverse consequential decisions. The deployer must:

  • Provide an opportunity to appeal the adverse decision
  • Allow for human review of the appeal where technically feasible
  • Communicate the outcome of the appeal to the consumer

Importantly, the "human review" requirement means a deployer cannot automatically defer to the AI system's original output during appeals. A qualified human must substantively evaluate the appeal and exercise independent judgment.

Opt-Out Rights Under Colorado Privacy Act

The Colorado AI Act integrates with the existing Colorado Privacy Act (CPA). Consumers have the right to opt out of the processing of personal data for profiling in furtherance of decisions that produce legal or similarly significant effects—which encompasses high-risk AI systems.[6]

Enforcement & Penalties

The Colorado AI Act grants the Attorney General exclusive enforcement authority. There is no private right of action—only the AG can bring enforcement actions for violations.[1]

Enforcement Mechanisms

Violations of the Colorado AI Act are treated as deceptive trade practices under the Colorado Consumer Protection Act. This classification subjects violators to:

  • Civil penalties of up to $20,000 per violation
  • Injunctive relief requiring changes to AI development or deployment practices
  • The Attorney General's investigative and enforcement tools under the Consumer Protection Act

The "per violation" structure means penalties can accumulate rapidly. If a deployer fails to provide required disclosures to 1,000 Colorado consumers, each instance could constitute a separate violation—creating potential exposure of $20 million.

60-Day Cure Period

The law includes an important affirmative defense for organizations that discover and cure violations before enforcement. If a developer or deployer:[5]

  • Discovers a violation through internal review, testing, or feedback from deployers or users, and
  • Cures the violation and notifies the Attorney General within the 60-day window, and
  • Otherwise complies with a recognized risk management framework

Then they have an affirmative defense against enforcement actions for that violation. This "self-reporting plus cure" mechanism incentivizes proactive compliance monitoring and rewards good-faith remediation efforts.

Framework Compliance Safe Harbor

Organizations that comply with a nationally or internationally recognized AI risk management framework designated by the Colorado Attorney General benefit from a rebuttable presumption of reasonable care. Frameworks explicitly mentioned include:[3]

  • NIST AI Risk Management Framework (AI RMF)
  • ISO/IEC 42001 (AI management systems)

This creates a powerful compliance pathway: implement NIST AI RMF or pursue ISO 42001 certification, document your implementation, and establish a rebuttable presumption that you exercised reasonable care to prevent algorithmic discrimination.

Rulemaking Authority

The Attorney General has authority to issue rules implementing the Colorado AI Act. Expected guidance includes:

  • The content and format of required documentation, disclosures, and consumer notices
  • Requirements for impact assessments and risk management programs
  • Additional frameworks that qualify for the reasonable-care presumption

Comparison to EU AI Act

The Colorado AI Act is often described as "US-style EU AI Act regulation," but meaningful differences exist. Here's a comparative analysis:

Colorado AI Act vs. EU AI Act

| Feature | Colorado AI Act | EU AI Act |
|---|---|---|
| Scope | High-risk AI in 8 consequential decision domains | Four risk tiers: prohibited, high-risk, limited-risk, minimal-risk |
| Standard of Care | "Reasonable care" to prevent algorithmic discrimination | Prescriptive technical and organizational requirements |
| Enforcement | State Attorney General only; no private right of action | National authorities; potential for private litigation under related EU frameworks such as the GDPR |
| Penalties | Up to $20,000 per violation | Up to €35M or 7% of global annual turnover for the most serious violations |
| Conformity Assessment | No third-party certification required | Third-party notified body assessment for certain high-risk systems |
| Focus Area | Algorithmic discrimination (protected-class bias) | Broader safety, transparency, and fundamental rights protection |
| Safe Harbor | Compliance with NIST AI RMF or ISO 42001 creates presumption of reasonable care | Voluntary harmonized standards provide presumption of conformity |

Key Similarities

  • Both single out AI used in high-stakes decisions (employment, credit, essential services) for heightened obligations
  • Both reward adherence to recognized standards: Colorado with a presumption of reasonable care, the EU with a presumption of conformity
  • Both require documentation, transparency to affected individuals, and ongoing risk management

Key Differences

  • Scope: Colorado regulates eight consequential-decision domains; the EU uses four risk tiers, including outright prohibitions
  • Standard: Colorado applies a flexible "reasonable care" duty; the EU imposes prescriptive technical and organizational requirements
  • Enforcement and penalties: Colorado caps penalties at $20,000 per violation with Attorney General-only enforcement; EU fines can reach €35M or 7% of global revenue
  • Certification: Colorado requires no third-party conformity assessment; the EU requires notified-body review for certain high-risk systems
  • Focus: Colorado targets algorithmic discrimination against protected classes; the EU addresses broader safety, transparency, and fundamental rights

Compliance Roadmap

Organizations have limited time before the June 30, 2026 deadline. Here's a practical implementation roadmap prioritizing evidence generation over documentation theater:

GLACIS Framework

Colorado AI Act Compliance Sprint

Step 1: Inventory & Risk Classification (Weeks 1-3)

Catalog all AI systems used in your organization. Classify each system against the eight high-risk domains. Prioritize systems making consequential decisions in employment, housing, credit, or healthcare. Document whether your organization acts as developer, deployer, or both for each system.
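
A minimal sketch of what an inventory record and first-pass classification might look like is below; the domain keys and system names are hypothetical, and edge cases still need legal review.

```python
# A first-pass inventory and high-risk classification sketch
# (domain keys mirror the eight statutory categories; entries are hypothetical).
HIGH_RISK_DOMAINS = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

def classify(system: dict) -> str:
    """Flag a system as high-risk if it is a substantial factor in a consequential
    decision within one of the eight statutory domains."""
    if system["domain"] in HIGH_RISK_DOMAINS and system["substantial_factor"]:
        return "high-risk"
    return "out-of-scope (document the rationale)"

inventory = [
    {"name": "resume-screener", "domain": "employment", "substantial_factor": True, "role": "deployer"},
    {"name": "spam-filter", "domain": "it_operations", "substantial_factor": False, "role": "deployer"},
]
for system in inventory:
    print(system["name"], "->", classify(system))
```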

Step 2: Framework Adoption (Weeks 4-8)

Adopt NIST AI RMF or pursue ISO 42001 certification to establish the rebuttable presumption of reasonable care. Map your current practices to framework requirements. Identify gaps in governance structure, testing processes, and documentation.

Step 3: Bias Testing & Evidence Generation (Weeks 9-14)

Implement algorithmic fairness testing for high-risk systems. Test for disparate impact across protected characteristics (race, gender, age, disability). Generate verifiable evidence of testing—not just internal reports but cryptographic attestations that testing occurred and results were reviewed. This addresses the core compliance question: can you prove you tested for discrimination?
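
As a concrete starting point, the sketch below applies the EEOC's four-fifths rule of thumb to selection rates by group. The 80% threshold and the counts are illustrative assumptions, not standards set by SB 24-205.

```python
# A sketch of a four-fifths-rule disparate-impact check on selection outcomes
# (the 0.8 threshold is the common EEOC rule of thumb; counts are illustrative).
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def disparate_impact_ratios(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8):
    rates = selection_rates(outcomes)
    reference = max(rates.values())  # compare each group to the most-selected group
    return {g: (rate / reference, rate / reference < threshold) for g, rate in rates.items()}

outcomes = {"group_a": (120, 400), "group_b": (70, 350)}
for group, (ratio, flagged) in disparate_impact_ratios(outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f}{' <- review' if flagged else ''}")
```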

Step 4: Documentation & Impact Assessments (Weeks 15-20)

Complete impact assessments for each high-risk system. Create model cards and dataset cards if you're a developer. Draft risk management policies specifying principles, processes, personnel, and mitigation measures. Prepare consumer disclosure templates.

Step 5: Operational Readiness (Weeks 21-24)

Train personnel on new AI governance requirements. Implement consumer disclosure mechanisms (e.g., website notices, application disclosures). Establish appeals and data correction processes. Create internal reporting workflows for discrimination risk discovery.

Step 6: Continuous Monitoring (Post-June 2026)

Deploy production monitoring for algorithmic discrimination indicators. Conduct annual impact assessment updates. Review and update risk management policies as AI systems evolve. Monitor Attorney General guidance for implementation clarifications.

Critical insight: Organizations that wait until Q1 2026 will find themselves unprepared. Algorithmic fairness testing takes months to implement properly—surface-level "bias checks" won't withstand regulatory scrutiny or enforcement actions.

Role-Specific Action Items

For Developers

  • Create model cards documenting intended uses, limitations, and bias testing results
  • Publish summaries of high-risk AI systems offered commercially
  • Establish 90-day discrimination risk reporting procedures
  • Provide deployer-facing documentation enabling context-specific testing

For Deployers

  • Draft and implement risk management policy by June 30, 2026
  • Complete impact assessments for each high-risk system by February 1, 2027
  • Implement consumer disclosure mechanisms by February 1, 2027
  • Establish consumer appeals process with human review capability
  • Designate executive-level accountability for AI governance

Frequently Asked Questions

Does the Colorado AI Act apply to companies headquartered outside Colorado?

Yes. The law applies to any person or entity "doing business in Colorado" that develops or deploys high-risk AI systems. If you serve Colorado residents, make employment decisions affecting Colorado workers, or deploy AI systems impacting Colorado consumers, you must comply—regardless of where your company is headquartered.

What if I'm both a developer and deployer of the same AI system?

You must comply with both sets of requirements. For example, a healthcare system that builds its own diagnostic AI must provide developer-level documentation (model cards, risk disclosures) and comply with deployer requirements (risk management policy, impact assessments, consumer disclosures). Many organizations fall into this dual category.

How does the Colorado AI Act interact with federal laws like Title VII or ECOA?

The Colorado AI Act is in addition to existing federal anti-discrimination laws, not a replacement. An AI system that violates Title VII (employment discrimination) or ECOA (credit discrimination) would also violate Colorado's algorithmic discrimination prohibition. Organizations must comply with both federal baseline requirements and Colorado's AI-specific obligations.

What constitutes “substantial modification” of an AI system?

The statute doesn't define "substantial modification" precisely—expect Attorney General guidance. Generally, basic configuration (setting thresholds, selecting features from a menu) likely doesn't trigger developer obligations, but fine-tuning models, retraining on proprietary datasets, or materially altering decision logic likely does.

Can I rely on vendor assertions that their AI system is compliant?

No. Deployers have independent obligations to exercise reasonable care and conduct impact assessments. While you can consider vendor documentation (and should demand it), you cannot outsource your compliance responsibility. If a vendor's AI system produces algorithmic discrimination in your deployment context, you face enforcement risk as the deployer.

How do I demonstrate “reasonable care” to prevent algorithmic discrimination?

The safest approach: implement NIST AI RMF or pursue ISO 42001 certification. These frameworks provide a rebuttable presumption of reasonable care. Document your testing for bias, monitoring procedures, and mitigation actions. Generate verifiable evidence—not just policies claiming you tested, but cryptographic proof that testing occurred.
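
One minimal pattern for that kind of evidence is hashing the test report and recording when the hash was generated, as in the sketch below. The artifact name is a placeholder, and a production setup would anchor the digest in a signed, append-only log rather than keeping it locally.

```python
# A sketch of a tamper-evident attestation that a fairness test ran
# (artifact name is a placeholder; production systems would sign and log the digest).
import hashlib, json, datetime

def attest(report_bytes: bytes, artifact_name: str) -> dict:
    """Return a record tying a test artifact's hash to a point in time."""
    return {
        "artifact": artifact_name,
        "sha256": hashlib.sha256(report_bytes).hexdigest(),
        "attested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Example: attest to the JSON report produced by a bias-testing run.
report = json.dumps({"test": "four_fifths_rule", "result": "pass"}).encode()
print(json.dumps(attest(report, "2026-05_bias_test.json"), indent=2))
```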

References

  [1] Colorado General Assembly. "SB24-205 Consumer Protections for Artificial Intelligence." leg.colorado.gov/bills/sb24-205
  [2] Akin Gump. "Colorado Postpones Implementation of Colorado AI Act, SB 24-205." akingump.com
  [3] National Association of Attorneys General. "A Deep Dive into Colorado's Artificial Intelligence Act." naag.org
  [4] Colorado General Assembly. "Senate Bill 24-205 (Enrolled)." Enrolled bill PDF.
  [5] TrustArc. "Complying With Colorado's AI Law: Your SB24-205 Compliance Guide." trustarc.com
  [6] Colorado Privacy Act integration provisions in SB 24-205.
  [7] European Union. "Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act)." eur-lex.europa.eu

Ready for Colorado AI Act Compliance?

Generate cryptographic evidence of algorithmic fairness testing. Our Evidence Pack demonstrates your AI controls work—mapped to NIST AI RMF, ISO 42001, and Colorado's reasonable care standard.

Build Your Compliance Evidence
