
EU AI Act Healthcare: What You Need to Know Before August 2026

The clock is ticking: On August 2, 2026, the EU AI Act's high-risk provisions take full effect. If you're a US healthcare AI vendor with European customers—or aspirations—you have less than 20 months to prepare for the most comprehensive AI regulation in history.

Why Healthcare AI Is "High-Risk" by Default

The EU AI Act uses a risk-based classification system. Healthcare AI falls into the "high-risk" category almost automatically because it typically meets at least one of the following criteria:

  • Qualifies as a medical device under EU MDR/IVDR regulations
  • Is intended to assist in medical diagnosis or treatment decisions
  • Affects access to essential healthcare services

If your AI does anything clinical—ambient scribes, clinical decision support, prior auth, diagnostic aids—it's almost certainly high-risk under the Act.

The Key Dates

  • February 2, 2025: Prohibited AI systems. Bans on social scoring, real-time biometric surveillance, and emotion recognition in certain contexts take effect.
  • August 2, 2025: Governance and general provisions. National competent authorities must be designated and governance structures put in place.
  • August 2, 2026: High-risk AI systems. Full compliance required for high-risk systems, including healthcare AI. This is the critical deadline.

Article 12: The Logging Requirement That Changes Everything

The provision that should keep healthcare AI vendors up at night is Article 12, which requires:

  • Automatic logging of all high-risk AI system operations
  • Traceability throughout the AI system's lifecycle
  • Event recording sufficient to identify the input data, model version, and specific decisions made
  • Log retention for a period appropriate to the system's intended purpose

The critical distinction: Article 12 doesn't just require that you can log events. It requires automatic logging that enables third-party verification of your AI's behavior. Dashboards and analytics aren't sufficient—you need evidence-grade records.
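
What might evidence-grade, automatic logging look like in practice? Here is a minimal sketch, assuming an append-only JSONL file where each record is hash-chained to the previous one so that any after-the-fact edit is detectable. The field names and file layout are illustrative choices, not a format spelled out in the Act.

```python
import hashlib
import json
import time
from pathlib import Path

LOG_PATH = Path("decision_log.jsonl")  # illustrative location, not a mandated format


def _last_hash() -> str:
    """Hash of the most recent record, or a fixed genesis value for an empty log."""
    if not LOG_PATH.exists():
        return "0" * 64
    lines = LOG_PATH.read_text().strip().splitlines()
    return json.loads(lines[-1])["record_hash"] if lines else "0" * 64


def log_decision(input_ref: str, model_version: str, decision: dict) -> dict:
    """Append one tamper-evident record covering input, model version, and outcome."""
    record = {
        "timestamp": time.time(),
        "input_ref": input_ref,          # pointer to the input data (e.g., an encounter ID)
        "model_version": model_version,  # the exact model build that produced the output
        "decision": decision,            # the specific output or recommendation made
        "prev_hash": _last_hash(),       # chains this record to the previous one
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because each record carries the hash of the one before it, an auditor can verify the whole chain rather than taking your dashboard's word for it.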

What "Conformity Assessment" Actually Means

High-risk AI systems must undergo conformity assessment before entering the EU market. For healthcare AI, this typically means either:

  • Self-assessment (if you can demonstrate adherence to harmonized standards)
  • Third-party assessment by a notified body (if your AI is a medical device or doesn't follow harmonized standards)

Either way, you need technical documentation demonstrating risk management, data governance, accuracy metrics, and, critically, your logging and traceability systems.

The Five Things You Need to Build

1. Comprehensive Risk Management System

Article 9 requires identification and mitigation of foreseeable risks. For healthcare AI, this means documented processes for handling hallucinations, bias, edge cases, and failure modes.
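
One way to make those documented processes concrete is a machine-readable risk register. The structure below is a hypothetical sketch with illustrative fields, not a template drawn from Article 9.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Risk:
    """One entry in an Article 9-style risk register (fields are illustrative)."""
    description: str      # e.g., "Model hallucinates a medication not in the chart"
    severity: str         # "high" / "medium" / "low"
    likelihood: str
    mitigation: str       # the documented control
    residual_risk: str    # what remains after mitigation
    last_reviewed: date = field(default_factory=date.today)


register = [
    Risk(
        description="Ambient scribe attributes a symptom to the wrong speaker",
        severity="high",
        likelihood="medium",
        mitigation="Diarization confidence threshold plus mandatory clinician review",
        residual_risk="low",
    ),
]
```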

2. Data Governance Framework

Article 10 requires training data to be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. You need documentation of data sources, preprocessing steps, and bias mitigation measures.
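
In practice, much of this documentation can live in a dataset manifest that travels with each training set. A minimal sketch, with hypothetical field names and example values:

```python
import json

dataset_manifest = {
    "name": "clinical-notes-train-v3",  # hypothetical dataset identifier
    "sources": [
        {"origin": "partner-hospital-A", "basis": "de-identified extract, data agreement on file"},
    ],
    "preprocessing": [
        "PHI removal via de-identification pipeline v2.1",
        "Deduplication of near-identical notes",
    ],
    "representativeness": {
        "age_bands_covered": ["18-39", "40-64", "65+"],
        "known_gaps": ["pediatric encounters underrepresented"],
    },
    "bias_mitigation": ["Reweighting of underrepresented specialties"],
    "last_audited": "2025-11-01",
}

# Ship the manifest alongside the dataset so reviewers can see provenance at a glance
print(json.dumps(dataset_manifest, indent=2))
```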

3. Automatic Logging Infrastructure

Article 12 requires logs that enable reconstruction of the AI system's behavior. This isn't optional, and post-hoc analytics don't count.
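
Building on the logging sketch above, "reconstruction" means being able to retrieve every record tied to a specific input and prove the chain has not been altered. A rough sketch, assuming the same hypothetical JSONL format:

```python
import hashlib
import json
from pathlib import Path

LOG_PATH = Path("decision_log.jsonl")  # the same illustrative log file as in the earlier sketch


def reconstruct(input_ref: str) -> list[dict]:
    """Verify the whole hash chain, then return every record for a given input."""
    records = [json.loads(line) for line in LOG_PATH.read_text().strip().splitlines()]
    prev = "0" * 64
    for rec in records:
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev_hash"] != prev or hashlib.sha256(payload).hexdigest() != rec["record_hash"]:
            raise ValueError("Log integrity check failed: records were altered or reordered")
        prev = rec["record_hash"]
    return [r for r in records if r["input_ref"] == input_ref]
```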

4. Human Oversight Mechanisms

Article 14 requires that high-risk AI systems be designed so they can be effectively overseen by natural persons. For clinical AI, this means clear human-in-the-loop requirements.
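
A minimal sketch of what human-in-the-loop can mean at the code level: the AI recommendation only takes effect after a named clinician explicitly accepts or overrides it, and that human decision is itself recorded. Function and parameter names here are assumptions for illustration.

```python
from typing import Callable


def apply_with_oversight(
    recommendation: dict,
    clinician_id: str,
    approved: bool,
    log: Callable[[dict], None] = print,  # swap in evidence-grade logging (see the sketch above)
) -> dict:
    """Require an explicit decision by a named clinician before an AI recommendation is acted on."""
    outcome = {
        "recommendation": recommendation,
        "clinician_id": clinician_id,  # the natural person exercising oversight
        "action": "accepted" if approved else "overridden",
    }
    log(outcome)  # the human decision itself becomes part of the audit trail
    return outcome


# Example: a suggested billing code only proceeds once a clinician signs off
apply_with_oversight({"suggested_code": "E11.9"}, clinician_id="dr-0042", approved=True)
```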

5. Technical Documentation

Article 11 requires extensive documentation including system architecture, algorithm descriptions, validation procedures, and accuracy metrics.
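
If that documentation lives in a repository, even a simple completeness check helps keep it from drifting. The required-sections list below paraphrases the categories mentioned in this post rather than quoting Annex IV:

```python
REQUIRED_SECTIONS = {  # paraphrased from the categories above, not the full Annex IV text
    "system_architecture",
    "algorithm_description",
    "validation_procedures",
    "accuracy_metrics",
    "risk_management",
    "data_governance",
    "logging_and_traceability",
}


def documentation_gaps(provided: dict) -> set[str]:
    """Return required sections that are missing or empty in the documentation package."""
    return {section for section in REQUIRED_SECTIONS if not provided.get(section)}


# Example: flag what still needs writing before a conformity assessment
print(sorted(documentation_gaps({"system_architecture": "docs/architecture.md"})))
```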

Preparing for EU AI Act Compliance?

Our white paper "The Proof Gap in Healthcare AI" covers the evidence infrastructure you'll need—including Article 12 logging requirements.

Read the White Paper

Why This Affects US Companies

The EU AI Act has extraterritorial reach. It applies to:

  • Providers placing AI systems on the EU market or putting them into service in the EU, regardless of where they are established
  • Deployers (users) of AI systems located in the EU
  • Providers and deployers located outside the EU where the output produced by the AI system is used in the EU

If you have European healthcare customers, sell through European distributors, or your AI outputs affect EU patients—you're in scope.

The California/Colorado Convergence

Here's the strategic angle: similar requirements are emerging in US state regulations. The Colorado AI Act (effective June 30, 2026) and California's ADMT regulations (effective January 1, 2027) contain overlapping requirements around:

  • Documentation of AI decision-making processes
  • Impact assessments for high-risk uses
  • Consumer disclosure requirements

Building for EU AI Act compliance now positions you for US state compliance later. It's not three separate problems—it's one infrastructure challenge with three regulatory expressions.

The enforcement reality: EU AI Act violations can result in fines up to €35 million or 7% of global annual turnover—whichever is higher. These aren't theoretical penalties. The EU has demonstrated willingness to enforce tech regulations aggressively (see: GDPR fines against Meta, Google, Amazon).

What To Do Now

With less than 20 months until the high-risk deadline:

  • Classify your AI systems under the Act's risk framework. If there's any clinical use, assume high-risk.
  • Audit your logging infrastructure against Article 12 requirements. Can you reconstruct specific decisions?
  • Map your technical documentation gaps against Annex IV requirements.
  • Identify your conformity assessment pathway—self-assessment or notified body.
  • Build the evidence infrastructure that satisfies both EU requirements and emerging US state regulations.

The vendors who treat this as a 2026 problem will find themselves scrambling. The vendors who start now will have compliance as a competitive advantage.

For the complete analysis of what evidence infrastructure looks like, read The Proof Gap in Healthcare AI.