Use Case Classification • January 2026

Is Ambient AI Scribe High-Risk Under EU AI Act?

Definitive classification guide for clinical documentation AI. Annex III analysis, determining factors, edge cases, and compliance requirements.

Joe Braidwood
CEO, GLACIS
12 min read • 2,200+ words

Quick Answer: IT DEPENDS

Pure documentation scribes that only transcribe and summarize clinical conversations are generally NOT high-risk. However, ambient AI systems become HIGH-RISK when they:

  • Suggest diagnoses, treatments, or risk scores that influence clinical decisions
  • Qualify as medical devices under MDR (EU) 2017/745
  • Integrate with or feed into clinical decision support systems

The Litigation Has Begun: Sharp HealthCare (2025)

In November 2025, a proposed class action was filed against Sharp HealthCare. The lawsuit alleges their ambient AI scribe recorded an estimated 100,000+ patients without proper consent and that false consent statements appeared in medical records.[1]

The question isn't whether your AI vendor has policies. It's whether you can prove those policies executed when the plaintiff's attorney asks for evidence during discovery. Classification matters because high-risk AI systems require Article 12 logging—exactly the evidence you'll need.

Annex III Category Analysis

The EU AI Act classifies AI systems as high-risk through two pathways: (1) AI systems that are safety components of products requiring third-party conformity assessment under existing EU harmonization legislation (Article 6(1)), or (2) AI systems listed in Annex III covering specific use cases (Article 6(2)).

For ambient AI scribes, two Annex III categories require careful analysis:

Annex III, Category 5(a): Healthcare Safety Components

"AI systems intended to be used as safety components in the management and operation of... healthcare."

This category captures AI systems that, while not necessarily medical devices themselves, serve as safety-critical components in healthcare operations. The question is whether an ambient scribe’s documentation function constitutes a "safety component" in healthcare management.

Article 6(1): Medical Device Pathway

AI systems that are safety components of products covered by EU harmonization legislation listed in Annex I—including Medical Device Regulation (EU) 2017/745.

If an ambient scribe qualifies as a medical device or accessory under MDR, it automatically falls under high-risk via Article 6(1), regardless of Annex III analysis.

The Medical Device Question

Under MDR Article 2(1), a medical device is any instrument, apparatus, appliance, software, or other article intended for diagnosis, prevention, monitoring, treatment, or alleviation of disease. The critical question for ambient scribes: does documenting a clinical conversation constitute diagnosis or treatment support?

Pure transcription and summarization—without clinical interpretation—generally falls outside MDR scope. However, the boundary becomes unclear when the scribe:

  • Extracts vital signs or other measurements from the conversation
  • Calculates clinical scores
  • Generates structured clinical data for downstream use

Key Determining Factors

Classification hinges on the answer to one central question: Does the AI system’s output materially influence clinical decisions?

Classification Decision Matrix

Feature                   | Classification Impact | Rationale
--------------------------|-----------------------|--------------------------------------------------
Pure transcription        | Not high-risk         | No clinical interpretation; human review required
Note summarization        | Likely not high-risk  | Documentation aid; physician verifies content
Suggested ICD-10 codes    | Borderline            | May influence treatment pathways and billing
Diagnosis suggestions     | High-risk             | Direct clinical decision influence
Treatment recommendations | High-risk             | Patient safety implications
Risk scoring/alerts       | High-risk             | Safety component in care management
CDS integration           | High-risk             | Contributes to decision support system
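To make the "one high-risk feature makes the whole system high-risk" rule concrete, here is a minimal Python sketch of a feature-level check. The feature names, Risk enum, and classify helper are illustrative assumptions, not regulatory terminology or any vendor's API:

```python
from enum import Enum

class Risk(Enum):
    NOT_HIGH_RISK = 1
    BORDERLINE = 2
    HIGH_RISK = 3

# Hypothetical feature flags mapped to the matrix above.
FEATURE_RISK = {
    "transcription": Risk.NOT_HIGH_RISK,
    "note_summarization": Risk.NOT_HIGH_RISK,
    "icd10_suggestions": Risk.BORDERLINE,
    "diagnosis_suggestions": Risk.HIGH_RISK,
    "treatment_recommendations": Risk.HIGH_RISK,
    "risk_scoring": Risk.HIGH_RISK,
    "cds_integration": Risk.HIGH_RISK,
}

def classify(enabled_features: set[str]) -> Risk:
    """Return the worst-case impact across all enabled features.

    A single high-risk feature makes the whole system high-risk;
    borderline features warrant documented legal analysis.
    """
    return max((FEATURE_RISK[f] for f in enabled_features),
               key=lambda r: r.value)

# A scribe that transcribes, summarizes, and suggests ICD-10 codes:
print(classify({"transcription", "note_summarization", "icd10_suggestions"}))
# Risk.BORDERLINE
```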

When Ambient Scribes ARE High-Risk

An ambient AI scribe crosses into high-risk territory in these scenarios:

Scenario 1: Clinical Decision Support Features

The scribe doesn’t just document—it analyzes. If the system suggests differential diagnoses based on symptoms mentioned in conversation, flags potential drug interactions, or recommends follow-up tests, it’s functioning as clinical decision support. This triggers high-risk classification under Annex III Category 5(a).

Scenario 2: Automatic Coding That Influences Care

When automatic ICD-10 or CPT coding isn’t merely administrative but feeds into clinical pathways—triggering care protocols, alerting about chronic disease management, or affecting treatment authorization—the system becomes a safety component.

Scenario 3: MDR Medical Device Classification

If national competent authorities or notified bodies determine the scribe qualifies as a medical device under MDR (Class I with clinical function, Class IIa, or higher), Article 6(1) applies automatically. Some ambient scribes that extract vital signs, calculate clinical scores, or generate structured clinical data may fall into this category.

Scenario 4: Integration with High-Risk Systems

Even a simple transcription scribe becomes high-risk if its output feeds directly into a high-risk clinical decision support system. The AI system’s classification considers its role within the broader system architecture.

When Ambient Scribes Are NOT High-Risk

Pure documentation tools that meet the following criteria generally remain outside high-risk classification:

Characteristics of Non-High-Risk Ambient Scribes

  • Transcription only: Converts speech to text without clinical interpretation
  • Human review mandatory: Physician must review and approve before documentation is finalized
  • No clinical suggestions: System doesn’t propose diagnoses, treatments, or risk assessments
  • Administrative function: Output serves documentation purposes, not clinical workflow triggers
  • Not MDR-classified: Doesn’t meet medical device definition under EU 2017/745

Edge Cases and Ambiguities

Several ambient scribe features occupy a regulatory gray zone:

Problem List Updates

If the scribe automatically updates the patient’s problem list based on conversation content, is this clinical decision support? Regulators may view this differently depending on whether the update requires physician approval or happens automatically.

Medication Reconciliation

Scribes that identify medications mentioned in conversation and cross-reference with the medication list straddle the line. If the system simply flags discrepancies for human review, it’s likely administrative. If it triggers automatic alerts or modifies records, classification becomes uncertain.
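The distinction is easier to see in code. In this hypothetical sketch, the helper only returns findings for a clinician to review; a version that fired alerts or edited the record automatically would drift toward clinical decision support:

```python
def reconcile_medications(mentioned: set[str], med_list: set[str]) -> list[str]:
    """Flag discrepancies between the conversation and the medication list.

    Findings are queued for human review, never auto-applied; that keeps
    the feature on the administrative side of the classification line.
    """
    findings = [f"Mentioned but not on list: {m}"
                for m in sorted(mentioned - med_list)]
    findings += [f"On list but not mentioned: {m}"
                 for m in sorted(med_list - mentioned)]
    return findings
```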

Quality Measure Extraction

Extracting data for quality reporting (HEDIS, MIPS) from clinical conversations could be viewed as purely administrative—or as influencing care by highlighting gaps in quality measure compliance.

Regulatory Guidance Pending

The European Commission and AI Office are expected to issue additional guidance on healthcare AI classification. Until then, vendors should document their classification rationale thoroughly and consider voluntary compliance with high-risk requirements for borderline systems.

Requirements If Classified High-Risk

High-risk AI systems under the EU AI Act must comply with Articles 9-15, establishing comprehensive obligations:

Article 9: Risk Management System

Establish, implement, document, and maintain a risk management system throughout the AI system’s lifecycle. Identify and analyze known and foreseeable risks, estimate and evaluate risks, adopt appropriate risk management measures.

Article 10: Data and Data Governance

Training, validation, and testing datasets must be relevant, representative, and free of errors. Data governance practices must address data collection, preparation, and documentation.

Article 11: Technical Documentation

Comprehensive documentation demonstrating compliance, including system description, design specifications, development process, monitoring, and post-market activities.

Article 13: Transparency

Provide clear instructions for use, including intended purpose, level of accuracy, known limitations, and human oversight requirements.

Article 14: Human Oversight

Design systems to enable effective oversight by natural persons. Include functionality allowing operators to understand capabilities, interpret outputs, and override or reverse the system.
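As a minimal sketch of what such a gate might look like, assuming a hypothetical DraftNote type: nothing enters the record without a named human approver, and any amendment of the AI output is captured as an override.

```python
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    session_id: str
    text: str
    approved_by: str | None = None      # physician who signed off
    overrides: list[str] = field(default_factory=list)

def finalize(note: DraftNote, physician_id: str,
             edits: str | None = None) -> DraftNote:
    """Finalize a note only through an explicit human action."""
    if edits is not None:
        note.overrides.append(f"{physician_id} amended the AI draft")
        note.text = edits                # human override of the output
    note.approved_by = physician_id      # no approver, no final note
    return note
```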

Article 15: Accuracy, Robustness, Cybersecurity

Achieve appropriate levels of accuracy and robustness. Implement cybersecurity measures proportionate to risks.

Article 12: Logging Implications

Article 12 is particularly relevant for ambient AI scribes and represents a core area of GLACIS expertise. High-risk systems must be designed to automatically record events ("logs") throughout their operation.

What Must Be Logged

For ambient scribes classified as high-risk, Article 12 logging requirements would include:

  • Each recording session, with start and end timestamps
  • Identification of the user operating the system
  • Input audio characteristics
  • Model version and configuration in effect
  • Generated outputs and any error states
  • Human review actions (approval, amendment, rejection)
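A hypothetical log schema covering those fields might look like the sketch below; the field names are illustrative, not a prescribed Article 12 format:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ScribeSessionLog:
    """One log entry per recording session (illustrative schema)."""
    session_id: str
    started_at: datetime            # period of each use
    ended_at: datetime
    user_id: str                    # clinician operating the system
    audio_profile: str              # e.g. duration, channels, sample rate
    model_version: str              # model and configuration in effect
    output_digest: str              # hash of the generated note
    error_states: tuple[str, ...]   # faults encountered, if any
    review_action: str              # approved / amended / rejected

log = ScribeSessionLog(
    session_id="s-001",
    started_at=datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc),
    ended_at=datetime(2026, 1, 5, 9, 12, tzinfo=timezone.utc),
    user_id="clinician-42",
    audio_profile="12 min, mono, 16 kHz",
    model_version="scribe-v3.1, temperature=0",
    output_digest="sha256:…",       # placeholder digest
    error_states=(),
    review_action="amended",
)
```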

GLACIS and Article 12 Compliance

Article 12 logging creates the evidence foundation for proving AI system compliance. GLACIS specializes in transforming these operational logs into cryptographically-attested compliance evidence—demonstrating not just that controls exist, but that they actually work in production.
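GLACIS's actual mechanism is not detailed here, but one common technique for making logs tamper-evident is a hash chain, where each entry commits to the hash of its predecessor, so altering any earlier entry breaks every hash after it. A minimal sketch:

```python
import hashlib
import json

def append_entry(chain: list[dict], entry: dict) -> list[dict]:
    """Append a log entry whose hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev_hash, "entry": entry}, sort_keys=True)
    chain.append({
        "entry": entry,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return chain

chain: list[dict] = []
append_entry(chain, {"session": "s-001", "review_action": "approved"})
append_entry(chain, {"session": "s-002", "review_action": "amended"})
# Recomputing the hashes verifies integrity; any edit to s-001 now
# invalidates the hash stored with s-002.
```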


Retention Requirements

Logs must be kept for a period appropriate to the intended purpose of the high-risk AI system and applicable legal obligations. For healthcare AI, this typically means aligning with medical record retention requirements—often 7+ years in EU member states.
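A sketch of a retention-deadline helper follows, with assumed per-jurisdiction periods; the figures are illustrative only and must be verified against each member state's actual medical record rules:

```python
from datetime import date

# Illustrative retention periods in years; verify per jurisdiction.
RETENTION_YEARS = {"DE": 10, "FR": 20, "default": 7}

def delete_after(created: date, jurisdiction: str) -> date:
    """Earliest date a session log may be deleted (illustrative)."""
    years = RETENTION_YEARS.get(jurisdiction, RETENTION_YEARS["default"])
    try:
        return created.replace(year=created.year + years)
    except ValueError:                       # Feb 29 in a non-leap year
        return created.replace(year=created.year + years, day=28)

print(delete_after(date(2026, 3, 1), "DE"))  # 2036-03-01
```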

US Regulatory Comparison

Understanding how EU AI Act classification differs from US regulation helps vendors operating in both markets:

EU vs. US Regulatory Comparison

Aspect                     | EU AI Act                             | US (FDA/HIPAA/State)
---------------------------|---------------------------------------|--------------------------------------
Pure documentation scribes | Generally not high-risk               | Not FDA-regulated as medical device
Clinical decision features | High-risk under Annex III             | May require FDA clearance as CDS
Recording consent          | GDPR consent + AI Act transparency    | HIPAA + state two-party consent laws
Logging requirements       | Article 12 mandates automatic logging | No specific AI logging mandate
Maximum penalties          | €15M or 3% global revenue             | Varies by violation type

Key difference: The EU AI Act’s Annex III can capture AI systems that the FDA wouldn’t regulate. A scribe that extracts clinical data and influences care pathways may be unregulated in the US but high-risk in the EU. For more on US-specific privacy requirements, see our Ambient AI Scribe Privacy Compliance Guide, covering the Sharp lawsuit, CIPA liability, and consent requirements.

Implementation Checklist

Use this checklist to assess your ambient AI scribe’s classification and compliance status:

Classification Assessment

  • Document all system features and intended purposes
  • Assess each feature against Annex III categories
  • Evaluate whether system qualifies as MDR medical device
  • Map output usage in clinical workflows
  • Document classification rationale with legal review

If Classified High-Risk

  • Establish risk management system (Article 9)
  • Implement data governance procedures (Article 10)
  • Create comprehensive technical documentation (Article 11)
  • Build automatic logging infrastructure (Article 12)
  • Ensure transparency and instructions for use (Article 13)
  • Design human oversight mechanisms (Article 14)
  • Validate accuracy and implement cybersecurity (Article 15)
  • Conduct conformity assessment (self or third-party)
  • Register in EU database for high-risk AI systems

Evidence Requirements for Regulators

  • Classification analysis documentation
  • Risk assessment and mitigation records
  • Training data documentation and validation results
  • Logging infrastructure audit trail
  • Human oversight design specifications
  • Accuracy testing and performance monitoring data

Frequently Asked Questions

Is an ambient AI scribe high-risk under the EU AI Act?

It depends on the system’s function. Pure documentation scribes that only transcribe and summarize conversations are generally NOT high-risk. However, ambient AI systems become high-risk if they influence clinical decisions by suggesting diagnoses, treatments, or risk scores; qualify as medical devices under MDR; or integrate with clinical decision support systems. The key test is whether the AI’s output materially influences patient care decisions.

What Annex III category applies to ambient AI scribes?

Ambient AI scribes may fall under Annex III Category 5(a)—AI intended to be used as safety components in the management and operation of healthcare—or under Article 6(1) as medical devices requiring conformity assessment. The relevant category depends on whether the scribe functions as documentation only or influences clinical decisions.

Does Article 12 logging apply to ambient AI scribes?

If classified as high-risk, Article 12 requires automatic logging of events throughout the system’s lifetime. For ambient scribes, this means logging each recording session with timestamps, user identification, input audio characteristics, model version and configuration, generated outputs, any error states, and human review actions. Logs must be retained for the period appropriate to the intended purpose.

Are Abridge, Nuance DAX, and Suki high-risk under EU AI Act?

These systems require individual assessment. Core transcription features are likely not high-risk. However, features like automatic coding suggestions, clinical decision support integrations, or risk scoring would trigger high-risk classification. Vendors must evaluate each feature independently against Annex III criteria.

What happens if I misclassify my ambient AI scribe?

Misclassification creates significant liability. If you classify as non-high-risk but regulators determine the system is high-risk, you face penalties up to €15 million or 3% of global annual turnover. You would also need to immediately halt deployment until conformity requirements are met, including risk management systems, technical documentation, and potentially third-party conformity assessment.

How does EU AI Act classification differ from FDA regulation?

The FDA generally does not regulate pure documentation scribes as medical devices because they don’t provide clinical decision support. The EU AI Act takes a broader approach—even if not an MDR medical device, an AI system can still be high-risk under Annex III if it’s a safety component in healthcare. This means some ambient scribes may be unregulated in the US but high-risk in the EU.

When must ambient AI scribe vendors comply with high-risk requirements?

High-risk AI systems must achieve full conformity by August 2, 2026. Medical devices under MDR have until August 2, 2027. Vendors should begin classification assessment and compliance planning immediately, as building conformity infrastructure—risk management systems, technical documentation, quality management, logging—requires 6-12 months minimum.

Key Takeaways

  • Classification depends on function — pure transcription is not high-risk; clinical decision influence triggers high-risk
  • Assess each feature independently — a scribe with one high-risk feature becomes a high-risk system
  • Article 12 logging is essential — high-risk systems must log all operational events automatically
  • EU scope is broader than FDA — systems unregulated in the US may be high-risk in the EU
  • Document classification rationale — regulators will scrutinize your analysis
  • August 2026 deadline — begin compliance work now; infrastructure build requires 6-12 months


Need EU AI Act Compliance Evidence?

GLACIS helps AI vendors build the Article 12 logging infrastructure and continuous attestation evidence that demonstrates conformity to regulators and healthcare buyers.
