GLACIS·US state AI laws·Colorado·Updated 5 May 2026

The Colorado AI Act: the working playbook through the May 2026 inflection.

SB 24-205 in plain English — and the two events that reshaped the planning calendar this spring: the federal-court stay in xAI v. Weiser (April 28) and the Polis-backed replacement bill SB 26-189 (introduced May 1). What reasonable care looks like under the original Act, what changes if the disclosure-regime replacement passes, and why NIST AI RMF and ISO/IEC 42001 alignment is the most defensible posture across both paths.

By Joe Braidwood, CEO GLACIS·22 min read·Updated 5 May 2026

  • May 17, 2024: Gov. Polis signs SB 24-205
  • Aug 28, 2025: SB 25B-004 delays effective date to June 30, 2026
  • Apr 9, 2026: xAI files xAI v. Weiser in D. Colo.; DOJ later intervenes
  • Apr 28, 2026: Mag. J. Chung stays enforcement; AG announces no rulemaking until session closes
  • May 1, 2026: SB 26-189 introduced, the Polis-backed replacement bill
  • May 13, 2026: Colorado legislature adjourns sine die
  • Jun 30, 2026: SB 24-205 obligations enforceable if no replacement passes
  • Jan 1, 2027: SB 26-189 effective date if enacted
May 2026 update brief

Enforcement is stayed. On April 28, 2026, U.S. Magistrate Judge Cyrus Y. Chung (D. Colo., 1:26cv1515) granted a joint motion in xAI v. Weiser: AG Phil Weiser will not pursue alleged violations through the 14th day after the court rules on xAI’s preliminary-injunction motion. xAI filed April 9, 2026 alleging First Amendment, Commerce Clause, Due Process and Equal Protection violations; the U.S. Department of Justice has intervened on xAI’s side — the Trump administration’s first litigation move under the December 2025 preemption executive order.[CO5][CO6][CO7]

SB 26-189 is the Polis-backed replacement bill. Introduced the week of May 1, 2026, it is the product of Governor Polis’s AI Policy Work Group. Sponsors: Senate Majority Leader Robert Rodriguez, Senate President James Coleman, House Majority Leader Monica Duran, and Asst. House Majority Leader Jennifer Bacon. It cleared Senate Business, Labor & Technology 8–1 and is pending in Senate Appropriations as of May 4. The bill replaces the duty-of-care framework with a disclosure-and-consumer-rights regime: it drops mandatory impact assessments, drops the mandatory risk-management policy, drops the rebuttable-presumption framework, raises the threshold from “substantial factor” to “materially influence,” removes legal services from covered decisions, and pushes the effective date to January 1, 2027. Colorado adjourns sine die May 13.[CO8][CO9][CO10]

AG rulemaking is on hold. On April 24, 2026, the Colorado Attorney General’s Office stated it “does not intend to promulgate rules implementing SB24-205 or any legislation replacing or amending SB24-205 until the legislative session concludes,” and “does not intend to enforce SB24-205 or any legislation replacing or amending SB24-205 until after the rulemaking process has concluded.” The pre-rulemaking comment window at coag.gov/ai closed in 2024; the formal docket is paused.[CO3][CO5]

What this means for compliance planning. Two scenarios are live, and the planning answer is the same in both. If SB 26-189 passes, the Act is replaced by a disclosure-and-recordkeeping regime effective January 1, 2027 — you still need a verifiable record of what your AI systems did. If the bill stalls, SB 24-205 takes effect June 30, 2026 with risk-management policy, impact assessments, consumer disclosure and developer documentation intact — and NIST AI RMF / ISO 42001 alignment remains the most defensible path to the rebuttable presumption. Build the evidence layer now; let the statute settle around it.

Executive summary

On May 17, 2024, Colorado Governor Jared Polis signed SB 24-205, making Colorado the first US state to enact comprehensive legislation regulating AI systems used in consequential decisions. The Act’s effective date was postponed once already (from Feb 1, 2026 to June 30, 2026) by SB 25B-004, and as of May 2026 it sits at a second inflection: enforcement is stayed by federal-court order, and a Polis-backed replacement bill is in committee.[1][2][CO5][CO8]

As enacted, the Colorado AI Act targets algorithmic discrimination — AI-driven bias in employment, housing, credit, healthcare, education, insurance, government services, and legal services. It requires “reasonable care” and provides a rebuttable presumption for organizations following recognized risk-management frameworks like NIST AI RMF and ISO/IEC 42001.[3] The replacement, SB 26-189, would pivot to a disclosure-and-recordkeeping regime, drop mandatory impact assessments and the rebuttable-presumption framework, raise the “substantial factor” threshold to “materially influence,” remove legal services from coverage, and push the effective date to January 1, 2027.[CO8][CO9][CO10]

Key takeaway: Either statute leaves you needing the same artifact — a verifiable record of how your AI system behaved when it touched a consumer. Penalties under the original Act run up to $20,000 per violation under the Colorado Consumer Protection Act, with AG-exclusive enforcement and a 60-day cure period; SB 26-189 keeps that AG-exclusive structure but shifts remedies toward disclosure-failure penalties. Build the evidence layer now; let the statute settle around it.

  • Effective date: June 30, 2026[2]
  • Maximum penalty per violation: $20,000[1]
  • First US state with a comprehensive AI law[3]
  • High-risk domains covered: 8[4]


What is the Colorado AI Act?

The Colorado Artificial Intelligence Act (SB 24-205, codified in C.R.S. § 6-1-1701 et seq.) was signed by Governor Polis on May 17, 2024. It is the first comprehensive US state law regulating AI used in “consequential decisions” about people’s lives. Originally scheduled to take effect February 1, 2026, the effective date was postponed by SB 25B-004 to June 30, 2026.[CO1][CO2]

While other states have passed targeted AI bills addressing specific use cases (such as Illinois’s biometric privacy law or New York City’s automated employment decision tool law), Colorado’s legislation establishes broad requirements governing AI systems across multiple high-stakes domains.[1][3]

The law is modeled conceptually on the EU AI Act but adapted to US legal frameworks. Rather than the EU’s risk-tiered classification system with prohibited uses, limited-risk categories, and extensive compliance obligations, Colorado takes a more streamlined approach: it identifies "high-risk" AI systems based on their use in consequential decisions and requires both developers and deployers to exercise "reasonable care" to prevent algorithmic discrimination.[4]

Legislative and Litigation Timeline

  • May 17, 2024: Governor Polis signs SB 24-205 into law
  • August 28, 2025: Governor Polis signs SB 25B-004 during the August special session, postponing the effective date from Feb 1, 2026 to June 30, 2026[CO1][CO2]
  • December 11, 2025: President Trump signs “Eliminating State Law Obstruction of National AI Policy” executive order; SB 24-205 named as a priority target[CO4]
  • January 10, 2026: DOJ AI Litigation Task Force becomes operative[CO4]
  • March 17, 2026: Colorado AI Policy Work Group releases proposed framework to replace SB 24-205[CO9]
  • April 9, 2026: xAI files xAI v. Weiser (1:26cv1515, D. Colo.) seeking to enjoin SB 24-205 on First Amendment, Commerce Clause, Due Process and Equal Protection grounds[CO6][CO7]
  • April 24, 2026: Colorado AG files joint motion announcing it will not promulgate rules or enforce until session closes and rulemaking concludes; DOJ separately intervenes in support of xAI[CO5][CO7]
  • April 28, 2026: Mag. Judge Cyrus Y. Chung grants joint motion staying enforcement through the 14th day after the court rules on xAI’s preliminary-injunction motion[CO5]
  • May 1, 2026: SB 26-189 introduced in Colorado Senate; cleared Business, Labor & Technology 8–1; pending in Senate Appropriations[CO8][CO10]
  • May 13, 2026: Colorado General Assembly adjourns sine die. Final window for SB 26-189 to pass before SB 24-205’s June 30 trigger.
  • June 30, 2026: SB 24-205 substantive obligations enforceable if no replacement passes — subject to the federal-court stay[2][4]
  • January 1, 2027: SB 26-189 effective date if enacted[CO9]

The May 2026 picture is a regulatory window, not a substantive reset. Either the original duty-of-care Act takes effect on June 30 (modulated by the federal-court stay and any replacement legislation), or SB 26-189 supersedes it with a disclosure-and-recordkeeping regime effective January 1, 2027. The substantive obligations described in this guide remain the working planning baseline; the section on SB 26-189 below identifies which obligations would change if the replacement passes.

Scope and applicability

The Colorado AI Act applies to any person or entity "doing business in Colorado" that develops or deploys high-risk AI systems. This broad jurisdictional language means that organizations headquartered outside Colorado must comply if they serve Colorado residents or make AI-driven decisions affecting them.[1]

Who Must Comply

The law establishes two distinct regulated parties with different obligations:

Developers

Persons doing business in Colorado who develop or substantially modify an AI system. This includes foundation model providers, algorithm developers, and companies that customize third-party AI systems beyond basic configuration.

Deployers

Persons doing business in Colorado who deploy a high-risk AI system. This includes employers using AI in hiring, lenders using AI in credit decisions, landlords using tenant screening tools, and healthcare providers using clinical AI.

Important note: An organization can be both a developer and a deployer. For example, a healthcare system that builds its own clinical decision support AI and deploys it internally must comply with both sets of requirements.

What Qualifies as High-Risk

An AI system becomes "high-risk" when it is deployed to make, or is a substantial factor in making, a consequential decision. The law defines consequential decisions as those with a "material legal or similarly significant effect" on consumers in eight domains:[4]

High-Risk AI Domains

  • Education: admissions scoring, academic tracking (access to educational opportunities)
  • Employment: resume screening, interview scoring, promotion (livelihood and career advancement)
  • Financial services: credit scoring, loan approval, underwriting (access to capital and financial products)
  • Government services: benefits eligibility, fraud detection (access to essential public services)
  • Healthcare: diagnosis assistance, treatment recommendations (health outcomes and medical care)
  • Housing: tenant screening, rental approval (access to housing and shelter)
  • Insurance: risk assessment, claims processing, pricing (access to insurance coverage)
  • Legal services: case outcome prediction, legal research tools (access to justice and legal representation)

Notable Exemptions

The Colorado AI Act includes several important exemptions, most notably for small deployers meeting certain conditions and for systems already subject to comparable regulatory oversight.

Key definitions

Understanding the Colorado AI Act requires familiarity with four critical terms that structure the law’s obligations:

Algorithmic Discrimination

The law defines algorithmic discrimination as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors an individual or group on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or another classification protected under Colorado or federal law.

This definition is critically important because it establishes the harm the law seeks to prevent. Unlike general "bias" or "unfairness," algorithmic discrimination specifically refers to legally protected categories—tying AI governance to existing anti-discrimination law.

Key Legal Distinction

The Colorado AI Act does not prohibit all forms of AI bias or unfair outcomes—only those that result in unlawful discrimination against protected classes. An AI system could produce unequal outcomes based on non-protected characteristics (e.g., credit score, work history) without violating the law, as long as those outcomes don’t create disparate impact on protected groups.

Consequential Decision

A consequential decision is any decision that has a material legal or similarly significant effect on the provision or denial to any consumer of education enrollment or opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, health-care services, housing, insurance, or a legal service.

The phrase "substantial factor in making" is deliberately broad. An AI system need not make the final decision autonomously to qualify as high-risk—it only needs to significantly influence the outcome. This captures AI systems where humans retain final decision authority but rely heavily on AI-generated recommendations.

Developer vs. Deployer

The law creates a two-party framework with distinct obligations:

Developer

A person doing business in Colorado who develops or substantially modifies an AI system. Key questions for determining developer status:

  • Did you design the algorithm or model architecture?
  • Did you train or fine-tune the model?
  • Did you materially alter how a third-party model makes decisions?

Deployer

A person doing business in Colorado who deploys a high-risk AI system. Key indicator: you use the AI system to make or assist in consequential decisions about Colorado consumers. This includes:

  • Employers using resume screening AI
  • Lenders using credit risk models
  • Healthcare providers using diagnostic AI
  • Landlords using tenant screening tools

Developer requirements

Developers of high-risk AI systems must comply with five core obligations designed to ensure transparency, enable downstream risk management, and facilitate accountability. These requirements take effect June 30, 2026.[2]

1. Duty of Reasonable Care

Developers must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk AI system.[4]

This standard is deliberately flexible: “reasonable care” is a fact-specific inquiry rather than a prescriptive checklist, turning on the system’s intended uses and the foreseeability of discrimination risks.

2. Documentation and Information Disclosure

Developers must make available to deployers (or other developers) documentation necessary to understand system behavior and assess discrimination risks. Required disclosures include:[4]

  • General statement describing reasonably foreseeable uses and known harmful or inappropriate uses
  • Documentation through artifacts such as model cards, dataset cards, or impact assessments necessary for deployers to complete their own assessments
  • Additional documentation reasonably necessary to help deployers understand system outputs and monitor for discrimination risks
  • Information enabling testing for algorithmic discrimination in specific deployment contexts

This language explicitly references model cards and dataset cards—documentation formats pioneered by researchers at Google and Microsoft to standardize AI transparency. Organizations can leverage existing model card frameworks (e.g., Mitchell et al. 2019) to satisfy these requirements.
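For teams starting from scratch, a model card can be as simple as a structured record serialized alongside the system. The sketch below shows one possible shape; every field name and value is invented for illustration and is not drawn from the statute or the Mitchell et al. template:

```python
import json

# Illustrative model-card skeleton for developer-facing documentation.
# All field names and values are hypothetical shorthand, not statutory language.
model_card = {
    "system_name": "resume-screener-v2",  # hypothetical system
    "intended_uses": ["initial resume triage for exempt roles"],
    "known_inappropriate_uses": ["final hiring decisions without human review"],
    "training_data_summary": "2019-2024 applicant records, PII removed",
    "evaluation": {
        "fairness_metrics": ["selection-rate ratio by race and sex"],
        "last_tested": "2026-05-01",
    },
    "limitations": ["not validated for hourly or union roles"],
    "deployer_guidance": "re-test selection rates in your own applicant pool",
}

# Serialize next to the model artifact so deployers receive it with the system.
print(json.dumps(model_card, indent=2))
```

A record like this, versioned with each release, doubles as the deployer-facing documentation the Act requires and as input to the deployer’s own impact assessment.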

3. Public Disclosure of AI Systems

Developers must publicly disclose summaries of high-risk AI systems they offer. This creates a registry-like transparency mechanism allowing researchers, advocates, and regulators to understand the scope of high-risk AI deployment in Colorado.[4]

4. Discrimination Risk Reporting

Developers must disclose to the Colorado Attorney General and known deployers any known or reasonably foreseeable risks of algorithmic discrimination within 90 days after discovery or receipt of a credible report.[5]

This incident reporting obligation is analogous to data breach notification laws: it requires developers to monitor for discrimination risks, evaluate credible reports, and notify the Attorney General and known deployers within the 90-day window.

5. Impact Assessment Support

Developers must provide deployers with sufficient information to conduct their own impact assessments. This creates a chain of accountability: developers build systems with transparency in mind, deployers assess context-specific risks, and both parties share responsibility for preventing algorithmic discrimination.[4]

Deployer requirements

Deployers of high-risk AI systems face more extensive obligations than developers, reflecting their direct relationship with affected consumers. Under SB 25B-004, all deployer requirements, including impact assessments and consumer disclosures, take effect on a single date: June 30, 2026.[2][4]

1. Duty of Reasonable Care

Like developers, deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. For deployers, this means understanding how AI systems behave in their specific use context and implementing safeguards against biased outcomes.[4]

2. Risk Management Policy and Program

Deployers must implement a risk management policy and program governing the deployment of high-risk AI systems. The policy must specify and incorporate:[4]

Risk Management Policy Components

  • Principles: organizational values and commitments regarding AI fairness and non-discrimination
  • Processes: workflows for identifying, documenting, and mitigating algorithmic discrimination risks
  • Personnel: roles and responsibilities for AI governance, including executive accountability
  • Identification methods: testing, monitoring, and auditing procedures to detect discrimination
  • Documentation requirements: record-keeping for risk assessments, testing results, and mitigation actions
  • Mitigation measures: remediation strategies when discrimination is detected

This requirement closely mirrors the NIST AI RMF Govern function and the ISO 42001 AI management system approach. Organizations that have implemented these frameworks will find significant overlap with Colorado’s requirements.

3. Impact Assessments (Effective June 30, 2026)

Deployers must complete impact assessments for each high-risk AI system before deployment and annually thereafter. The impact assessment must document the system’s purpose and intended use cases, an analysis of known or reasonably foreseeable discrimination risks and the steps taken to mitigate them, the categories of data the system processes, the metrics used to evaluate performance, and post-deployment monitoring measures.[4]

Impact assessments must be provided to the Colorado Attorney General upon request—they are not proactively submitted but must be available for regulatory inspection.

4. Consumer Disclosures (Effective June 30, 2026)

Deployers must provide clear and conspicuous notice to consumers when a high-risk AI system is used to make or substantially inform a consequential decision about them. The notice must include:[4]

  • Purpose and nature of the AI system
  • Types of data collected and how it is used
  • Data sources feeding the AI system
  • Consumer rights including rights to opt out, correct data, and appeal decisions
  • Contact information for inquiries and appeals

5. Management and Oversight

Deployers must designate personnel responsible for implementing the risk management program and must ensure appropriate oversight of high-risk AI system deployment. This includes executive accountability—leadership must be informed of AI-related risks and mitigation efforts.[4]

Consumer rights

The Colorado AI Act establishes three core rights for consumers affected by high-risk AI systems. These rights take effect June 30, 2026 and create enforceable obligations for deployers.[2][4]

Right to Meaningful Explanation

Consumers have the right to receive a statement disclosing the principal reason or reasons for an adverse consequential decision, including the degree to which the AI system contributed to the decision and the types and sources of data it processed.

This explanation must be provided in plain language—not technical jargon. For example, a job applicant rejected by an AI screening tool has the right to understand which factors (e.g., employment gaps, keyword matching, assessment scores) most influenced the rejection.

Right to Correct Data

Consumers may request correction of personal data used by the AI system if they believe it is inaccurate, and the deployer must provide an opportunity to have that data corrected.

Right to Appeal

Consumers have the right to appeal adverse consequential decisions, and the deployer must provide for human review of the appeal where technically feasible.

Importantly, the "human review" requirement means a deployer cannot automatically defer to the AI system’s original output during appeals. A qualified human must substantively evaluate the appeal and exercise independent judgment.

Opt-Out Rights Under Colorado Privacy Act

The Colorado AI Act integrates with the existing Colorado Privacy Act (CPA). Consumers have the right to opt out of the processing of personal data for profiling in furtherance of decisions that produce legal or similarly significant effects—which encompasses high-risk AI systems.[6]

Enforcement and penalties

The Colorado AI Act grants the Attorney General exclusive enforcement authority. There is no private right of action—only the AG can bring enforcement actions for violations.[1]

Enforcement Mechanisms

Violations of the Colorado AI Act are treated as deceptive trade practices under the Colorado Consumer Protection Act, exposing violators to civil penalties of up to $20,000 per violation along with the CCPA’s injunctive and other remedies.

The "per violation" structure means penalties can accumulate rapidly. If a deployer fails to provide required disclosures to 1,000 Colorado consumers, each instance could constitute a separate violation—creating potential exposure of $20 million.

60-Day Cure Period

The law includes an important affirmative defense for organizations that discover and cure violations before enforcement. If a developer or deployer discovers a violation through internal review, adversarial testing, or feedback, cures it, and notifies the Attorney General, they have an affirmative defense against enforcement actions for that violation.[5] This "self-reporting plus cure" mechanism incentivizes proactive compliance monitoring and rewards good-faith remediation efforts.

Framework Compliance Safe Harbor

Organizations that comply with a nationally or internationally recognized AI risk management framework designated by the Colorado Attorney General benefit from a rebuttable presumption of reasonable care. Frameworks explicitly referenced include the NIST AI Risk Management Framework and ISO/IEC 42001.[3]

This creates a powerful compliance pathway: implement NIST AI RMF or pursue ISO 42001 certification, document your implementation, and establish a rebuttable presumption that you exercised reasonable care to prevent algorithmic discrimination.

Rulemaking Authority

The Attorney General has authority to issue rules implementing the Colorado AI Act, covering matters such as documentation standards and impact-assessment content; as noted in the update brief above, that rulemaking is paused until the legislative session concludes.

Comparison to the EU AI Act

The Colorado AI Act is often described as "US-style EU AI Act regulation," but meaningful differences exist. Here’s a comparative analysis:

Colorado AI Act vs. EU AI Act

  • Scope: Colorado covers high-risk AI in 8 consequential-decision domains; the EU uses four risk tiers (prohibited, high-risk, limited-risk, minimal-risk)
  • Standard of care: Colorado requires "reasonable care" to prevent algorithmic discrimination; the EU imposes prescriptive technical and organizational requirements
  • Enforcement: Colorado's AG only, with no private right of action; EU national authorities, with potential private litigation under GDPR-like mechanisms
  • Penalties: up to $20,000 per violation in Colorado; up to €35M or 7% of global revenue for the most serious EU violations (prohibited practices)
  • Conformity assessment: no third-party certification required in Colorado; third-party notified-body assessment for certain EU high-risk systems
  • Focus: Colorado targets algorithmic discrimination (protected-class bias); the EU addresses broader safety, transparency, and fundamental-rights protection
  • Safe harbor: NIST AI RMF or ISO 42001 compliance creates a presumption of reasonable care in Colorado; voluntary harmonized standards provide a presumption of conformity in the EU


For healthcare vendors

Healthcare AI teams have specific statutory hooks under the Colorado AI Act. An AI system is more likely to be high-risk when it makes, or is a substantial factor in making, consequential decisions about access to care, coverage or prior authorization, cost of care, or the course of clinical treatment.

Whether a given clinical workflow qualifies turns on the facts. The closer the system is to access, cost, coverage, or other consequential decisions, the stronger the case for treating it as high-risk. Vendors building or substantially modifying these systems act as developers; health systems and payers using them to make consequential decisions act as deployers. Many organizations will be both.

The algorithmic-discrimination focus lands heavily on healthcare. In practice, teams address that risk through testing and review designed to surface materially different outcomes across groups, documenting mitigation steps and governance decisions, and monitoring for discriminatory patterns post-deployment. The documentation isn’t the compliance artifact — the evidence that testing occurred and results were reviewed is.
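To make that concrete, here is a minimal sketch of an evidence receipt: hash the fairness-test results, record who reviewed them and when, and attest the record with a key. It uses a symmetric HMAC purely as a stand-in; a production scheme would use asymmetric signatures, trusted timestamps, and managed keys, and every name below is illustrative:

```python
import datetime
import hashlib
import hmac
import json

def make_receipt(test_results: dict, reviewer: str, key: bytes) -> dict:
    """Bind a hash of the test results to a reviewer and timestamp."""
    payload = json.dumps(test_results, sort_keys=True).encode()
    receipt = {
        "results_sha256": hashlib.sha256(payload).hexdigest(),
        "reviewer": reviewer,
        "reviewed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Attest the receipt body; HMAC here stands in for a real signature.
    body = json.dumps(receipt, sort_keys=True).encode()
    receipt["attestation"] = hmac.new(key, body, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt: dict, key: bytes) -> bool:
    """Recompute the attestation over the receipt body and compare."""
    body = {k: v for k, v in receipt.items() if k != "attestation"}
    expected = hmac.new(key, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["attestation"], expected)

key = b"demo-key-not-for-production"  # stand-in for a managed signing key
receipt = make_receipt({"selection_rate_ratio": 0.87},
                       "compliance@example.com", key)
assert verify_receipt(receipt, key)
```

The point of the receipt is that it is checkable later: an auditor who holds the original results and the key can confirm both that the numbers were not altered and that a named reviewer signed off at a recorded time.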

For background on how EU AI Act, Colorado, and California rules overlap for healthcare teams, see our EU AI Act guide and HIPAA-compliant AI guide.

Compliance roadmap

The May 2026 picture leaves two viable scenarios: SB 24-205 enforceable on June 30 (subject to the federal-court stay) or SB 26-189 superseding it on January 1, 2027. The roadmap below is sequenced for the earlier date and is durable against the later one — every artifact survives the pivot to a disclosure-and-recordkeeping regime. The principle is the same either way: prioritize evidence generation over documentation theater.

GLACIS Framework

Colorado AI Act Compliance Sprint

1. Inventory & Risk Classification (Weeks 1-3)

Catalog all AI systems used in your organization. Classify each system against the eight high-risk domains. Prioritize systems making consequential decisions in employment, housing, credit, or healthcare. Document whether your organization acts as developer, deployer, or both for each system.
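The classification step above can be sketched as a simple filter: a system is treated as high-risk when it touches one of the eight statutory domains and is a substantial factor in the decision. Domain labels and record fields here are our own shorthand, not statutory terms:

```python
# High-risk screening sketch. Domain labels and record fields are
# illustrative shorthand, not statutory definitions.
HIGH_RISK_DOMAINS = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

def classify(system: dict) -> dict:
    """Flag a cataloged AI system as high-risk and record the org's role."""
    touched = HIGH_RISK_DOMAINS & set(system.get("decision_domains", []))
    return {
        "name": system["name"],
        "high_risk": bool(touched) and system.get("substantial_factor", False),
        "domains": sorted(touched),
        "roles": system.get("roles", []),  # "developer", "deployer", or both
    }

inventory = [
    {"name": "resume-screener", "decision_domains": ["employment"],
     "substantial_factor": True, "roles": ["deployer"]},
    {"name": "marketing-copy-llm", "decision_domains": [],
     "substantial_factor": False, "roles": ["deployer"]},
]
results = [classify(s) for s in inventory]
```

Even a spreadsheet with these four columns per system is enough to start; the value is in forcing an explicit answer for every system in the inventory.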

2. Framework Adoption (Weeks 4-8)

Adopt NIST AI RMF or pursue ISO 42001 certification to establish the rebuttable presumption of reasonable care. Map your current practices to framework requirements. Identify gaps in governance structure, testing processes, and documentation.

3. Bias Testing & Evidence Generation (Weeks 9-14)

Implement algorithmic fairness testing for high-risk systems. Test for disparate impact across protected characteristics (race, gender, age, disability). Generate verifiable evidence of testing—not just internal reports but cryptographic attestations that testing occurred and results were reviewed. This addresses the core compliance question: can you prove you tested for discrimination?
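One common screening signal for the disparate-impact testing described above is the selection-rate ratio with the EEOC "four-fifths" threshold, a convention from employment-discrimination practice rather than a test SB 24-205 itself prescribes; the counts below are invented:

```python
# Selection-rate (adverse impact) ratio sketch using the "four-fifths"
# heuristic. The 0.8 threshold is a screening convention, not a legal test.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes: dict) -> dict:
    """Each group's selection rate relative to the highest-rate group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

outcomes = {"group_a": (48, 100), "group_b": (30, 100)}  # illustrative counts
ratios = impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]  # below four-fifths
print(ratios, flagged)
```

A flagged ratio is a prompt for investigation and documentation, not proof of unlawful discrimination; the statute's concern is unlawful differential impact, which requires legal analysis beyond a single metric.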

4. Documentation & Impact Assessments (Weeks 15-20)

Complete impact assessments for each high-risk system. Create model cards and dataset cards if you’re a developer. Draft risk management policies specifying principles, processes, personnel, and mitigation measures. Prepare consumer disclosure templates.

5. Operational Readiness (Weeks 21-24)

Train personnel on new AI governance requirements. Implement consumer disclosure mechanisms (e.g., website notices, application disclosures). Establish appeals and data correction processes. Create internal reporting workflows for discrimination risk discovery.

6. Continuous Monitoring (Post-June 2026)

Deploy production monitoring for algorithmic discrimination indicators. Conduct annual impact assessment updates. Review and update risk management policies as AI systems evolve. Monitor Attorney General guidance for implementation clarifications.

Critical insight: The April 2026 enforcement stay and the May 1 introduction of SB 26-189 don’t reset the work — they reset the deadline. Algorithmic fairness testing and the runtime evidence layer take months to implement well, and every artifact in this roadmap is durable across both statutes. Surface-level “bias checks” and policy-PDF compliance won’t withstand AG scrutiny or enterprise procurement reviews under either regime.

Role-Specific Action Items

For Developers

  • Create model cards documenting intended uses, limitations, and bias testing results
  • Publish summaries of high-risk AI systems offered commercially
  • Establish 90-day discrimination risk reporting procedures
  • Provide deployer-facing documentation enabling context-specific testing

For Deployers

  • Draft and implement risk management policy by June 30, 2026
  • Complete impact assessments for each high-risk system by June 30, 2026
  • Implement consumer disclosure mechanisms by June 30, 2026
  • Establish consumer appeals process with human review capability
  • Designate executive-level accountability for AI governance

Frequently asked questions

Does the Colorado AI Act apply to companies headquartered outside Colorado?

Yes. The law applies to any person or entity "doing business in Colorado" that develops or deploys high-risk AI systems. If you serve Colorado residents, make employment decisions affecting Colorado workers, or deploy AI systems impacting Colorado consumers, you must comply—regardless of where your company is headquartered.

What if I’m both a developer and deployer of the same AI system?

You must comply with both sets of requirements. For example, a healthcare system that builds its own diagnostic AI must provide developer-level documentation (model cards, risk disclosures) and comply with deployer requirements (risk management policy, impact assessments, consumer disclosures). Many organizations fall into this dual category.

How does the Colorado AI Act interact with federal laws like Title VII or ECOA?

The Colorado AI Act is in addition to existing federal anti-discrimination laws, not a replacement. An AI system that violates Title VII (employment discrimination) or ECOA (credit discrimination) would also violate Colorado’s algorithmic discrimination prohibition. Organizations must comply with both federal baseline requirements and Colorado’s AI-specific obligations.

What constitutes “substantial modification” of an AI system?

The statute doesn’t define "substantial modification" precisely—expect Attorney General guidance. Generally, basic configuration (setting thresholds, selecting features from a menu) likely doesn’t trigger developer obligations, but fine-tuning models, retraining on proprietary datasets, or materially altering decision logic likely does.

Can I rely on vendor assertions that their AI system is compliant?

No. Deployers have independent obligations to exercise reasonable care and conduct impact assessments. While you can consider vendor documentation (and should demand it), you cannot outsource your compliance responsibility. If a vendor’s AI system produces algorithmic discrimination in your deployment context, you face enforcement risk as the deployer.

How do I demonstrate “reasonable care” to prevent algorithmic discrimination?

The safest approach: implement NIST AI RMF or pursue ISO 42001 certification. These frameworks provide a rebuttable presumption of reasonable care. Document your testing for bias, monitoring procedures, and mitigation actions. Generate verifiable evidence—not just policies claiming you tested, but cryptographic proof that testing occurred.

References

  [1] Colorado General Assembly. "SB24-205 Consumer Protections for Artificial Intelligence." leg.colorado.gov/bills/sb24-205
  [2] Akin Gump. "Colorado Postpones Implementation of Colorado AI Act, SB 24-205." akingump.com
  [3] National Association of Attorneys General. "A Deep Dive into Colorado’s Artificial Intelligence Act." naag.org
  [4] Colorado General Assembly. "SENATE BILL 24-205 (Enrolled)." Enrolled Bill PDF
  [5] TrustArc. "Complying With Colorado’s AI Law: Your SB24-205 Compliance Guide." trustarc.com
  [6] Colorado Privacy Act integration provisions in SB 24-205.
  [7] European Union. "Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act)." eur-lex.europa.eu
  [CO1] Colorado General Assembly. "SB25B-004 Increase Transparency for Algorithmic Systems." leg.colorado.gov (signed Aug 28, 2025).
  [CO2] Baker Botts, "Colorado AI Act Implementation Delayed" (Sept 2025); Greenberg Traurig (Sept 2025); American Bar Association practice note (Nov 2025).
  [CO3] Colorado Attorney General, ADAI rulemaking docket, coag.gov/ai (accessed May 2026; rulemaking paused).
  [CO4] White House, "Eliminating State Law Obstruction of National Artificial Intelligence Policy" (Dec 11, 2025), whitehouse.gov; Gibson Dunn analysis; National Association of Attorneys General coalition statement (Mar 2026).
  [CO5] Troutman Pepper Locke, "Colorado Attorney General Delays Enforcement of Colorado AI Act," troutmanprivacy.com (April 2026); xAI v. Weiser, No. 1:26cv1515 (D. Colo.), order of Mag. J. Cyrus Y. Chung (Apr 28, 2026).
  [CO6] Bloomberg Law, "Colorado AI Bias Law Paused as Musk’s xAI Seeks Injunction," news.bloomberglaw.com; Colorado Politics, "Colorado’s unprecedented AI law can’t be enforced yet, judge rules" (Apr 28, 2026).
  [CO7] Jenner & Block, "DOJ Joins xAI in Lawsuit Challenging Colorado AI Act," jenner.com; U.S. Department of Justice, Civil Rights Division, Complaint in Intervention (April 2026).
  [CO8] Colorado Senate, SB 26-189 (introduced May 1, 2026; cleared Senate Business, Labor & Technology 8–1; pending Senate Appropriations); Axios Denver, "Colorado lawmakers introduce new AI rules" (May 3, 2026); Colorado Sun, "Colorado’s AI compromise would drop requirement that companies explain how their technology works" (May 1, 2026).
  [CO9] Mayer Brown, "The Colorado AI Policy Work Group Proposes an Updated Framework to Replace the Colorado AI Act," mayerbrown.com (Mar 2026); Proskauer, "Colorado Takes a Major Step Towards Rewriting Its AI Law" (Apr 2026).
  [CO10] Troutman Pepper Locke, "Proposed State AI Law Update: May 4, 2026," troutmanprivacy.com.


Make the receipts. Book the Sprint.

The Glacis Agent Runtime Security & Evidence Sprint produces signed evidence receipts of algorithmic fairness testing, mapped to NIST AI RMF, ISO/IEC 42001, and Colorado’s reasonable-care standard — the kind of evidence an AG cure letter would expect. Runtime controls run inside your infrastructure with zero sensitive-data egress.

Book the Agent Runtime Security Sprint · See a sample evidence pack →


Get started

Start with one high‑risk AI workflow.

Book a focused Agent Runtime Security & Evidence Sprint, then deploy runtime assurance where the risk is real.

From assessment to platform deployment. See pricing →