
Colorado AI Act: What Healthcare Vendors Need to Know

First mover: Colorado SB 205 becomes the first comprehensive US state AI law when it takes effect June 30, 2026. If you're selling AI into healthcare—even if you're not headquartered in Colorado—you need to understand what's coming.

June 30, 2026: Colorado AI Act enforcement begins

Who's Covered

The Colorado AI Act applies to "deployers" and "developers" of "high-risk AI systems." For healthcare, this includes:

  • Developers — companies that build or substantially modify AI systems
  • Deployers — companies that use AI systems to make consequential decisions

Healthcare organizations that deploy AI for clinical decisions are deployers. AI vendors selling into healthcare are developers. Both have obligations.

What Makes AI "High-Risk"?

Under the Act, an AI system is high-risk if it makes or substantially informs "consequential decisions." In healthcare, this includes decisions about:

  • Access to healthcare services
  • Cost or terms of healthcare
  • Clinical treatment recommendations
  • Prior authorization or coverage decisions

If your AI touches any of these areas, it's almost certainly high-risk under Colorado law.

Developer Obligations

If you build AI systems (vendors), you must:

1. Provide Documentation

  • General statement describing reasonably foreseeable uses and known limitations
  • Documentation of data used to train the system
  • Known or foreseeable risks of algorithmic discrimination
  • Description of how the system was evaluated for performance and for mitigation of algorithmic discrimination

2. Disclose Known Risks

Disclose any known or reasonably foreseeable risk that the system could produce discriminatory or otherwise harmful outputs.

3. Enable Compliance

Provide information sufficient to allow deployers to complete impact assessments and meet their own obligations.

The documentation challenge: Most AI vendors don't currently have the infrastructure to produce this documentation at the inference level. Aggregate statistics won't satisfy the Act's requirements for documentation of specific decisions.
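
What inference-level documentation looks like will vary by system, but as a rough sketch, a vendor might emit a structured record alongside each model output and append it to an evidence log. The schema below is illustrative, not prescribed by the Act; names like DecisionRecord and guardrails_executed are assumptions for this example.

```python
# Illustrative only: a minimal per-decision record a vendor might emit
# for each inference that makes or informs a consequential decision.
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_name: str
    model_version: str
    decision_type: str        # e.g. "prior_authorization"
    input_summary: dict       # de-identified features, never raw PHI
    output: dict              # the score or recommendation produced
    guardrails_executed: list # names of safeguards that actually ran
    human_review_required: bool
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_record(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one JSON line per decision to an append-only evidence log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

The exact fields and retention period are for your counsel to determine; the point is that the record is produced per decision at inference time, not reconstructed after the fact.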

Deployer Obligations

If you use high-risk AI systems (healthcare organizations), you must:

1. Risk Management Policy

Implement a risk management policy governing your use of high-risk AI systems.

2. Impact Assessments

Complete and document impact assessments before deploying high-risk AI, including the elements below (a minimal structure is sketched after this list):

  • Purpose and intended use of the system
  • Analysis of whether the system poses risks of algorithmic discrimination
  • Categories of data processed and outputs produced
  • Oversight and human review processes
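
The Act specifies what an impact assessment must cover, not the format it takes. Purely as an illustration, the elements above could be captured in a structured document along these lines; every value shown is a hypothetical placeholder.

```python
# Illustrative impact assessment structure covering the elements above.
# Keys and values are hypothetical placeholders; the Act prescribes
# content, not a file format.
impact_assessment = {
    "system": "utilization-review-assistant (hypothetical)",
    "purpose_and_intended_use": "Flag prior-authorization requests for clinician review",
    "algorithmic_discrimination_analysis": {
        "risks_identified": ["lower approval rates for limited-English-proficiency patients"],
        "mitigations": ["feature review", "quarterly disparity testing"],
    },
    "data_categories_processed": ["claims history", "diagnosis codes"],
    "outputs_produced": ["review-priority score", "approval recommendation"],
    "oversight_and_human_review": "All adverse recommendations routed to a licensed reviewer",
}
```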

3. Consumer Disclosure

Notify consumers when AI is making or substantially informing consequential decisions about them.

4. Appeal Rights

Provide consumers a way to appeal adverse AI decisions.

The Algorithmic Discrimination Focus

A central concern of the Act is "algorithmic discrimination"—AI systems that produce outputs that unlawfully discriminate against individuals based on protected characteristics. For healthcare AI, this means:

  • Testing for disparate outcomes across demographic groups (a testing sketch follows this list)
  • Documenting mitigation measures
  • Ongoing monitoring for discriminatory patterns
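
A starting point for disparate-outcome testing is to compare favorable-outcome rates across demographic groups and flag large gaps. The sketch below computes each group's rate relative to the best-performing group; the 0.8 threshold is the informal four-fifths heuristic, an assumption for illustration rather than anything the Act mandates.

```python
# Illustrative disparate-outcome check: compare favorable-outcome rates
# across demographic groups. The 0.8 threshold (the informal "four-fifths
# rule") is a common heuristic, not a requirement of the Colorado AI Act.
from collections import defaultdict

def disparate_impact(decisions: list, group_key: str = "group") -> dict:
    """Return each group's favorable-outcome rate divided by the highest group's rate."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for d in decisions:
        total[d[group_key]] += 1
        favorable[d[group_key]] += int(d["approved"])
    rates = {g: favorable[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
ratios = disparate_impact(decisions)
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # groups below the heuristic threshold
print(ratios, flagged)
```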

Enforcement and Penalties

The Colorado Attorney General has exclusive enforcement authority. Violations can result in:

  • Injunctive relief (an order to stop using the AI system)
  • Civil penalties under the Colorado Consumer Protection Act
  • Investigation and audit authority

There's also an "affirmative defense" for developers and deployers who discover and cure violations within 90 days—but only if you have the monitoring infrastructure to detect problems.

Building for State AI Regulations

Our white paper "The Proof Gap in Healthcare AI" covers the evidence infrastructure you need—applicable to Colorado, California, and EU AI Act requirements.

Read the White Paper

The Documentation Gap

Here's the challenge: Colorado requires documentation that most AI systems can't currently produce:

  • Per-decision documentation — not aggregate statistics, but records of specific decisions
  • Bias testing evidence — proof that you tested for discrimination, not just that you planned to
  • Human oversight records — documentation of when and how humans reviewed AI outputs
  • Risk mitigation evidence — proof that safeguards actually executed

Traditional compliance approaches—annual audits, policy documents, aggregate metrics—won't satisfy these requirements. You need inference-level evidence infrastructure.
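
As one illustration of what "proof that safeguards actually executed" can mean, each inference call can write an attestation for every guardrail it ran, rather than pointing to a policy document that says the guardrail exists. The function and field names below are hypothetical.

```python
# Illustrative guardrail attestation: record that each safeguard actually ran
# and what it found, per inference. Names are hypothetical, not a standard API.
import json
from datetime import datetime, timezone
from typing import Callable

def run_with_attestation(
    inference: Callable,
    guardrails: dict,   # name -> check(request, output) -> bool
    request: dict,
    log_path: str = "evidence_log.jsonl",
) -> dict:
    """Run inference, execute each guardrail, and log an attestation for each check."""
    output = inference(request)
    attestations = []
    for name, check in guardrails.items():
        passed = check(request, output)
        attestations.append({
            "guardrail": name,
            "passed": passed,
            "checked_at": datetime.now(timezone.utc).isoformat(),
        })
    with open(log_path, "a") as f:
        f.write(json.dumps({"request_id": request.get("id"), "attestations": attestations}) + "\n")
    if not all(a["passed"] for a in attestations):
        output["route_to_human_review"] = True  # failed checks escalate to a human
    return output
```

In practice the guardrail checks and the log writer would live in separate services, but the principle holds: the attestation is created when the check runs, not assembled at audit time.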

Why This Matters Beyond Colorado

Colorado is the first mover, but not the last:

  • California ADMT Regulations take effect January 1, 2027
  • EU AI Act high-risk provisions take effect August 2, 2026
  • Other states are watching Colorado's implementation

Building for Colorado compliance now positions you for the regulatory wave coming across multiple jurisdictions. It's not three separate problems—it's one evidence infrastructure challenge.

What to Do Now

With 18 months until enforcement:

  • Classify your AI systems under the Act's high-risk definition
  • Audit your documentation capabilities — can you produce what the Act requires?
  • Evaluate your bias testing — do you have evidence of discrimination testing?
  • Build evidence infrastructure — inference-level logging, guardrail attestation, human oversight documentation
  • Plan for consumer disclosure — how will you notify patients about AI involvement?

For the complete framework on AI evidence infrastructure, read our white paper.