
Is AI-Assisted Diagnosis High-Risk Under the EU AI Act?

YES — HIGH-RISK Classification

AI diagnostic systems qualify as medical devices under MDR and are therefore automatically high-risk under EU AI Act Article 6(1). Dual regulation applies.

Joe Braidwood
CEO, GLACIS
12 min read • 2,200+ words

The Quick Answer

AI-assisted diagnostic systems are HIGH-RISK under the EU AI Act. This classification is automatic—not discretionary—because these systems qualify as medical devices under EU MDR 2017/745.

Article 6(1) of the EU AI Act states that AI systems that are safety components of products covered by Union harmonization legislation (or that are themselves such products, as with medical device software) are high-risk. There is no exception for "decision support" or "physician oversight" scenarios.

Compliance deadline: August 2, 2027 for AI systems that are medical devices (extended timeline under Article 113(c)).

• Risk Classification: HIGH
• Compliance Deadline: August 2027
• Regulation: Dual (MDR + AI Act)
• Conformity Assessment: Notified Body Required

Why AI-Assisted Diagnosis is High-Risk

The EU AI Act establishes a risk-based classification system with four tiers: prohibited, high-risk, limited risk, and minimal risk. AI-assisted diagnostic systems fall squarely into the high-risk category through two independent classification pathways.

Classification Pathway 1: Medical Device Status (Article 6(1))

Article 6(1) of the EU AI Act provides that AI systems are high-risk when they are safety components of products, or are themselves products, covered by EU harmonization legislation listed in Annex I, and those products require third-party conformity assessment.

The Medical Device Regulation (MDR) 2017/745 is listed in Annex I, Section A of the EU AI Act. AI software intended to diagnose, prevent, monitor, predict, or treat disease meets the MDR definition of a medical device. Most diagnostic AI systems fall into MDR Class IIa or higher, requiring notified body assessment.

This classification is automatic. If the AI system qualifies as a medical device under MDR, it is high-risk under the AI Act. There is no additional analysis required.

Classification Pathway 2: Healthcare Use Case (Annex III)

Even if an AI system somehow avoided MDR classification, Annex III of the EU AI Act explicitly lists healthcare-related uses as high-risk. Annex III, point 5(a) covers AI systems used to evaluate eligibility for essential public assistance benefits and services, including healthcare services, and point 5(d) covers emergency healthcare patient triage systems.

Additionally, Recital 53 clarifies that AI systems used in healthcare settings that may pose risks to health and safety warrant high-risk classification due to the potential for significant harm.

No Exception for "Decision Support"

A common misconception is that AI systems providing "decision support" to physicians rather than making autonomous decisions are exempt from high-risk classification. This is incorrect.

The EU AI Act classification is based on the system’s intended purpose and potential impact, not on whether a human makes the final decision. Human oversight is a requirement for high-risk systems (Article 14), not an exemption from classification. The logic is straightforward: a system that influences life-or-death medical decisions poses high risk regardless of whether a physician formally "approves" its output.

Types of AI-Assisted Diagnosis Covered

The high-risk classification applies broadly across medical specialties. The following AI diagnostic systems are definitively high-risk under the EU AI Act:

Radiology AI

AI systems that analyze medical imaging including X-rays, CT scans, MRI scans, mammograms, and ultrasounds. These systems detect abnormalities, identify suspected malignancies, measure anatomical structures, and prioritize worklists. Examples include lung nodule detection, breast cancer screening AI, and fracture detection systems.

Pathology AI

AI systems that analyze digitized tissue samples and histopathology slides. Applications include cancer grading, tumor margin assessment, biomarker quantification, and automated cell counting. Digital pathology AI is increasingly used for prostate cancer Gleason scoring and breast cancer HER2 analysis.

Dermatology AI

AI systems that analyze skin lesions from dermoscopic images or clinical photographs. These systems assist in melanoma detection, skin cancer screening, and differential diagnosis of skin conditions. Consumer-facing skin analysis apps may also qualify if they make diagnostic claims.

Ophthalmology AI

AI systems analyzing retinal images for diabetic retinopathy screening, age-related macular degeneration detection, glaucoma risk assessment, and other ocular conditions. Fundus photography and OCT analysis systems are commonly deployed in screening programs.

Cardiology AI

AI systems analyzing electrocardiograms (ECGs/EKGs) for arrhythmia detection, atrial fibrillation screening, and cardiac event prediction. Also includes echocardiogram analysis AI and cardiac MRI interpretation systems.

Other Diagnostic Modalities

The classification extends to any AI system intended to assist in diagnosis, including: endoscopy AI (polyp detection), pulmonary function analysis, genomic analysis for disease prediction, clinical decision support systems that suggest diagnoses, and symptom checkers that provide differential diagnoses.

Dual Regulation: MDR + EU AI Act

AI diagnostic systems face a unique regulatory challenge: they must comply with both the Medical Device Regulation and the EU AI Act simultaneously. This is not an either/or situation—both sets of requirements apply in full.

How the Regulations Interact

Article 43(3) of the EU AI Act specifies that for high-risk AI systems covered by Annex I harmonization legislation (like MDR), the conformity assessment procedure under that legislation serves as the pathway for demonstrating AI Act compliance. However, additional AI-specific requirements still apply.

Requirement Area | MDR Obligation | AI Act Additional Requirement
Risk Management | ISO 14971 risk management process | AI-specific risk categories per Article 9
Data Governance | Clinical data quality requirements | Training data governance per Article 10
Technical Documentation | MDR Annex II/III documentation | AI-specific documentation per Article 11
Logging | Post-market surveillance data | Automatic operation logging per Article 12
Human Oversight | Instructions for use | Specific oversight measures per Article 14
Accuracy/Robustness | Clinical performance validation | Accuracy, robustness, cybersecurity per Article 15

Conformity Assessment Integration

For AI medical devices, the notified body assessment required under MDR will encompass AI Act requirements. The notified body must verify that both MDR and AI Act obligations are satisfied before issuing the CE mark. This means notified bodies need competence in AI-specific assessment—a capacity gap that many are currently addressing.

High-Risk Requirements (Articles 9–15)

High-risk AI systems must satisfy comprehensive obligations spanning the entire system lifecycle. For AI diagnostic systems, these requirements translate into specific implementation demands:

Article 9: Risk Management System

Establish and maintain a risk management system throughout the AI system’s lifecycle. For diagnostic AI, this includes: identification of known and foreseeable risks (misdiagnosis, algorithmic bias across patient populations, failure modes), estimation and evaluation of risks, adoption of risk mitigation measures, and continuous monitoring of residual risks.
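
In practice, the risk register itself can be kept as structured data so that identification, evaluation, mitigation, and residual risk stay traceable over time. Below is a minimal sketch in Python; the schema, example risk entries, and scoring matrix are illustrative assumptions, not anything prescribed by the Act or by ISO 14971.

```python
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    """One entry in an AI-specific risk register (hypothetical schema)."""
    risk_id: str
    description: str
    severity: Level          # harm if the risk materializes
    likelihood: Level        # estimated probability of occurrence
    mitigations: list[str] = field(default_factory=list)
    residual_severity: Level = Level.LOW

    def score(self) -> int:
        # Simple severity x likelihood matrix for prioritization
        return self.severity.value * self.likelihood.value


register = [
    RiskEntry(
        risk_id="R-001",
        description="Reduced sensitivity for lesions in under-represented skin tones",
        severity=Level.HIGH,
        likelihood=Level.MEDIUM,
        mitigations=["Augment training data", "Report subgroup performance in IFU"],
        residual_severity=Level.MEDIUM,
    ),
    RiskEntry(
        risk_id="R-002",
        description="Automation bias: clinicians accept low-confidence outputs",
        severity=Level.HIGH,
        likelihood=Level.LOW,
        mitigations=["Display calibrated confidence", "Require explicit confirmation"],
    ),
]

# Review highest-priority risks first
for entry in sorted(register, key=RiskEntry.score, reverse=True):
    print(f"{entry.risk_id} score={entry.score()} {entry.description}")
```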

Article 10: Data and Data Governance

Training, validation, and testing datasets must meet quality criteria. For diagnostic AI: data must be representative of the intended patient population, training data provenance must be documented, statistical properties and potential biases must be examined, and appropriate data preparation steps must be applied.
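
One concrete starting point for representativeness analysis is comparing training-set subgroup proportions against the intended patient population and flagging material gaps. A minimal sketch follows; the demographic groups, counts, and 5-percentage-point tolerance are illustrative assumptions, not regulatory thresholds.

```python
# Compare training-data subgroup shares against the intended population
# and flag deviations above a chosen tolerance.

training_counts = {"age_<40": 1200, "age_40_65": 5400, "age_>65": 1400}
population_share = {"age_<40": 0.25, "age_40_65": 0.45, "age_>65": 0.30}

TOLERANCE = 0.05  # illustrative: flag gaps larger than 5 percentage points

total = sum(training_counts.values())
for group, count in training_counts.items():
    observed = count / total
    expected = population_share[group]
    gap = observed - expected
    flag = "FLAG" if abs(gap) > TOLERANCE else "ok"
    print(f"{group}: train={observed:.2%} population={expected:.2%} "
          f"gap={gap:+.2%} [{flag}]")
```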

Article 11: Technical Documentation

Comprehensive documentation enabling assessment of AI Act compliance. For diagnostic AI, this includes: detailed algorithm description, training methodology, validation approach, clinical performance data, integration specifications, and instructions for use by healthcare professionals.

Article 12: Record-Keeping (Logging)

Automatic logging of system operation throughout its lifecycle. This is a core area where GLACIS provides value—see the detailed section below.

Article 13: Transparency and Information

AI systems must be designed to enable deployers to interpret outputs and use the system appropriately. For diagnostic AI: clear indication of AI involvement in diagnosis, explanation of confidence levels, disclosure of known limitations, and guidance on appropriate clinical contexts for use.
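
In practice, this often means the system's output payload carries the interpretive context alongside the finding itself. Below is a minimal sketch of such a payload; all field names and wording are illustrative assumptions, not a mandated format.

```python
import json

# Illustrative output payload that carries interpretive context
# alongside the finding itself, rather than a bare score.
result = {
    "ai_generated": True,
    "finding": "suspected melanoma",
    "confidence": 0.83,
    "confidence_note": "Probability calibrated on held-out validation set",
    "known_limitations": [
        "Not validated for pediatric patients",
        "Reduced performance on images with heavy hair occlusion",
    ],
    "intended_use": "Adjunct to dermoscopic examination by a qualified clinician",
}
print(json.dumps(result, indent=2))
```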

Article 14: Human Oversight

High-risk AI must be designed for effective human oversight. For diagnostic AI: clinicians must be able to understand AI outputs, override AI recommendations, and intervene in system operation. The system must not induce automation bias or over-reliance.
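
One design consequence: when a clinician overrides an AI recommendation, that event should be captured as structured evidence that oversight is working. A minimal sketch, with hypothetical field names:

```python
import json
from datetime import datetime, timezone


def record_override(case_id: str, ai_finding: str, clinician_finding: str,
                    reason: str, clinician_id: str) -> str:
    """Serialize a clinician-override event as an auditable JSON record."""
    event = {
        "event_type": "human_override",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "ai_finding": ai_finding,
        "clinician_finding": clinician_finding,
        "override_reason": reason,
        "clinician_id": clinician_id,
    }
    return json.dumps(event)


print(record_override(
    case_id="CASE-2041",
    ai_finding="suspicious nodule, confidence 0.91",
    clinician_finding="benign calcification",
    reason="Prior imaging shows stable appearance over 5 years",
    clinician_id="dr-7731",
))
```

Aggregating such records is also useful for monitoring: an override rate that trends toward zero can be an early indicator of automation bias and over-reliance.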

Article 15: Accuracy, Robustness, Cybersecurity

AI systems must achieve appropriate levels of accuracy, robustness against errors and attacks, and cybersecurity protection. For diagnostic AI: validated clinical accuracy across patient subgroups, robustness to imaging artifacts and edge cases, and protection against adversarial manipulation.
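
Subgroup performance reporting can be built into routine validation. The sketch below computes per-subgroup sensitivity and specificity from labeled results; the data and subgroup labels are illustrative.

```python
from collections import defaultdict

# (subgroup, ground_truth, prediction) — illustrative validation results
results = [
    ("female", 1, 1), ("female", 1, 0), ("female", 0, 0), ("female", 0, 0),
    ("male",   1, 1), ("male",   1, 1), ("male",   0, 1), ("male",   0, 0),
]

counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
for group, truth, pred in results:
    if truth == 1:
        counts[group]["tp" if pred == 1 else "fn"] += 1
    else:
        counts[group]["tn" if pred == 0 else "fp"] += 1

for group, c in counts.items():
    sensitivity = c["tp"] / (c["tp"] + c["fn"])
    specificity = c["tn"] / (c["tn"] + c["fp"])
    print(f"{group}: sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```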

Article 12 Logging Requirements: GLACIS Core Relevance

Article 12 of the EU AI Act establishes mandatory automatic logging requirements for high-risk AI systems. This is one of the most operationally significant requirements—and an area where many AI diagnostic vendors currently have significant gaps.

What Must Be Logged

Article 12(1) requires that high-risk AI systems technically allow for the automatic recording of events (logs) over the lifetime of the system. The logging capabilities must enable:

  • Identifying situations that may result in the AI system presenting a risk or undergoing a substantial modification
  • Facilitating post-market monitoring of the system
  • Monitoring the operation of the system by deployers

Practical Logging Requirements for Diagnostic AI

For AI diagnostic systems, Article 12 compliance requires logging at the level of individual inferences, including:

  • The input analyzed (or a hash or reference identifying it) and relevant metadata
  • The output produced, including findings and confidence scores
  • The model version and configuration active at the time of inference
  • Timestamps for each processing event
  • The deployment context and, where applicable, the user session involved

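A minimal sketch of what a single inference-level log record might look like; the schema and field names are illustrative assumptions, not a mandated format. Hashing the raw input links the record to the exact study without duplicating patient imaging data in the log stream.

```python
import hashlib
import json
from datetime import datetime, timezone


def inference_log_record(input_bytes: bytes, output: dict,
                         model_version: str, deployment_id: str) -> dict:
    """Build one inference-level log record (hypothetical schema)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "deployment_id": deployment_id,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
    }


record = inference_log_record(
    input_bytes=b"<DICOM study bytes>",
    output={"finding": "lung nodule", "confidence": 0.87, "location": "RUL"},
    model_version="ct-nodule-v2.3.1",
    deployment_id="hospital-a-pacs-01",
)
print(json.dumps(record, indent=2))
```
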
Retention and Integrity Requirements

Logs must be retained for periods appropriate to the intended purpose. For medical devices, the MDR documentation retention periods are the usual reference point: at least 10 years after the last device has been placed on the market, and at least 15 years for implantable devices. Logs must be tamper-proof and available for regulatory audit.
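
Tamper evidence can be approximated with a hash chain: each entry embeds the hash of its predecessor, so any retroactive edit breaks verification. This is a minimal sketch of the chaining idea only; a production system would add cryptographic signatures and external anchoring.

```python
import hashlib
import json


def append_record(chain: list, record: dict) -> None:
    """Append a record linked to the hash of the previous entry."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "entry_hash": entry_hash})


def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edited record invalidates the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != recomputed:
            return False
        prev_hash = entry["entry_hash"]
    return True


log: list = []
append_record(log, {"event": "inference", "model": "v2.3.1"})
append_record(log, {"event": "override", "clinician": "dr-7731"})
print(verify_chain(log))          # True
log[0]["record"]["model"] = "v9"  # simulate tampering
print(verify_chain(log))          # False
```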

The Compliance Gap

Many AI diagnostic platforms provide only basic access logs (who logged in, when) rather than the inference-level logging required by Article 12. This creates a significant compliance gap. GLACIS addresses this gap by providing continuous attestation and evidence collection that captures AI system operations at the required granularity—generating auditable proof that logging requirements are satisfied.

GLACIS Relevance: Article 12 logging creates an ongoing compliance burden that requires continuous evidence collection—not just point-in-time documentation. GLACIS provides the infrastructure to capture, store, and attest to AI system operations, generating the evidence trail required by regulators.

Notified Body Assessment Requirements

Most AI diagnostic systems require third-party conformity assessment by an EU Notified Body before receiving CE marking and market access.

When Notified Body Assessment is Required

Under MDR classification Rule 11, AI diagnostic software typically falls into Class IIa or higher based on its intended purpose:

  • Class IIa: software that provides information used to take decisions with diagnostic or therapeutic purposes (the default for most diagnostic AI)
  • Class IIb: where those decisions may cause serious deterioration of health or require surgical intervention
  • Class III: where those decisions may cause death or irreversible deterioration of health

All classes above Class I require notified body involvement. For Class IIa, this may be limited to quality management system assessment. For Class IIb and III, full technical documentation review and clinical evidence evaluation are required.

Notified Body Competence for AI

The EU AI Act requires that notified bodies have competence to assess AI-specific requirements. This is an evolving area—many notified bodies are building AI assessment capabilities. Manufacturers should verify that their chosen notified body has relevant AI expertise, particularly for complex machine learning systems.

Assessment Costs and Timeline

Notified body assessment for AI medical devices typically costs €15,000–€100,000+ depending on device class, complexity, and clinical evidence requirements. Assessment timelines range from 6–18 months. These factors should be incorporated into compliance planning.

US Regulatory Comparison: FDA SaMD Guidance

For organizations operating in both the EU and US markets, understanding regulatory differences is essential for efficient compliance strategies.

FDA Approach to AI Diagnostics

The FDA regulates AI diagnostic systems as Software as a Medical Device (SaMD). Unlike the EU’s layered MDR + AI Act approach, the FDA has no separate AI-specific regulation—AI requirements are integrated into existing device regulation pathways:

  • 510(k) premarket notification, for devices substantially equivalent to a legally marketed predicate
  • De Novo classification, for novel devices of low-to-moderate risk without a predicate
  • Premarket Approval (PMA), for the highest-risk devices

Emerging FDA AI/ML Guidance

The FDA has issued evolving guidance on AI/ML-based medical devices, including the Predetermined Change Control Plan (PCCP) framework allowing manufacturers to define anticipated algorithm updates that don’t require new submissions. This addresses the challenge of continuously learning AI systems.

Key Differences from EU Approach

Aspect | EU (MDR + AI Act) | US (FDA)
Regulatory Structure | Dual regulation (MDR + AI Act) | Single framework (FDA device regulation)
AI-Specific Obligations | Explicit (Articles 9–15) | Integrated into general requirements
Logging Requirements | Mandatory (Article 12) | Case-by-case (post-market surveillance)
Third-Party Assessment | Required for most diagnostic AI | Required for high-risk; many 510(k) exempt
Algorithm Updates | Substantial modification triggers reassessment | PCCP allows predetermined changes

Evidence Requirements for Regulators

Demonstrating compliance with EU AI Act requirements for diagnostic AI systems requires comprehensive, ongoing evidence—not just point-in-time documentation.

Documentation Categories

For diagnostic AI, the evidence base maps directly to the requirements outlined above:

  • Risk management file and ongoing risk evaluations (Article 9)
  • Training data provenance, quality, and representativeness records (Article 10)
  • Technical documentation covering algorithm design and clinical validation (Article 11)
  • Operation logs demonstrating the system’s logging capability in use (Article 12)
  • Instructions for use and transparency materials (Article 13)
  • Human oversight design records and training materials (Article 14)
  • Accuracy, robustness, and cybersecurity test results (Article 15)

The Evidence Chain Challenge

Regulators increasingly expect an unbroken evidence chain from development through deployment. Point-in-time assessments are insufficient—organizations must demonstrate that controls remain effective over time. This requires infrastructure for continuous evidence collection and attestation.

Implementation Checklist

Use this checklist to assess your AI diagnostic system’s readiness for EU AI Act compliance:

Pre-Compliance Assessment

  • Confirm MDR classification (Class IIa/IIb/III) for your AI diagnostic system
  • Identify notified body with AI assessment competence
  • Conduct gap analysis: current state vs. Articles 9–15 requirements
  • Establish compliance timeline working back from August 2027 deadline

Risk Management (Article 9)

  • Document AI-specific risks beyond traditional medical device hazards
  • Assess algorithmic bias across patient populations (age, sex, ethnicity, comorbidities)
  • Define risk mitigation measures for identified AI-specific risks

Data Governance (Article 10)

  • Document training data sources, provenance, and quality controls
  • Conduct representativeness analysis of training data vs. intended population
  • Establish data governance policies for ongoing data quality

Logging Infrastructure (Article 12)

  • Implement inference-level logging (inputs, outputs, confidence, model version)
  • Establish tamper-proof log storage with appropriate retention periods
  • Configure anomaly detection for system behavior deviations
  • Deploy continuous attestation infrastructure (e.g., GLACIS)

Human Oversight (Article 14)

  • Design user interface to support clinician interpretation and override
  • Develop training materials addressing automation bias risks
  • Document intended clinical workflow and oversight mechanisms

Conformity Assessment

  • Engage notified body early—assessment capacity is limited
  • Prepare integrated MDR + AI Act technical documentation
  • Compile clinical performance evidence for target indications
  • Establish quality management system meeting ISO 13485 + AI requirements

Frequently Asked Questions

Our AI only assists—the physician makes the final diagnosis. Is it still high-risk?

Yes. The EU AI Act classifies systems based on intended purpose and potential impact, not decision-making authority. An AI system intended to assist in diagnosis is high-risk because it influences life-or-death clinical decisions. Human oversight is a requirement for high-risk systems, not an exemption from classification.

We already have MDR certification. Do we need to do anything additional for the AI Act?

Yes. While your MDR conformity assessment pathway applies, you must satisfy additional AI-specific requirements under Articles 9–15, particularly Article 10 (data governance), Article 12 (logging), and Article 14 (human oversight). Your notified body assessment will need to cover both MDR and AI Act requirements by the August 2027 deadline.

What if our diagnostic AI doesn’t qualify as a medical device?

If your AI provides information that could influence diagnosis or treatment, it likely qualifies as a medical device under MDR’s broad definition. If it genuinely doesn’t qualify as a medical device (e.g., general wellness applications), it may still be high-risk under Annex III healthcare use cases. Consult regulatory counsel for definitive classification.

How do we handle algorithm updates under the AI Act?

Substantial modifications to AI diagnostic systems may trigger conformity assessment obligations. You should define what constitutes a substantial modification in your quality management system, document change control procedures, and maintain logging that tracks algorithm versions throughout operation. The EU is developing guidance on substantial modification criteria.

What are the penalties for non-compliance?

Non-compliance with high-risk AI system obligations can result in fines up to €15 million or 3% of global annual turnover, whichever is higher. Additionally, market access may be denied or revoked—your CE marking depends on demonstrated AI Act compliance. For medical devices, patient safety issues may trigger recall obligations.

Do US-developed AI diagnostic systems need EU AI Act compliance?

Yes, if the system is placed on the EU market or its outputs are used for patients in the EU. The AI Act applies to providers placing AI systems on the EU market regardless of where the provider is established. US companies selling diagnostic AI in Europe must achieve full compliance.

Next Steps

AI diagnostic systems face a clear regulatory path: high-risk classification under the EU AI Act, dual regulation with MDR, and a compliance deadline of August 2027. Organizations should begin compliance preparation now—notified body capacity is limited, and building the required infrastructure takes time.

Key actions: Confirm your MDR classification, engage with a notified body, conduct an Articles 9–15 gap analysis, and implement Article 12 logging infrastructure. For organizations seeking to streamline evidence collection and continuous attestation, GLACIS provides the compliance infrastructure purpose-built for high-risk AI systems.

Need Help with AI Diagnostic Compliance?

GLACIS provides continuous attestation for high-risk AI systems—including the Article 12 logging infrastructure that diagnostic AI requires.
