
UK Healthcare AI Regulation: MHRA, CQC, and NHS Requirements

A comprehensive guide to navigating the UK's healthcare AI regulatory landscape. Understand MHRA medical device classification, the AI Airlock sandbox, CQC oversight, and NHS adoption pathways.

January 2026

Executive Summary

The UK's healthcare AI regulatory framework operates through multiple specialized bodies, each with distinct responsibilities. MHRA regulates AI as medical devices under the Medical Devices Regulations 2002 (UK MDR 2002), with classification determining regulatory requirements. CQC oversees AI use within healthcare providers through its fundamental standards. NICE sets evidence standards for digital health technology adoption, while NHS AI Lab facilitates safe AI deployment across the health service.

The AI Airlock regulatory sandbox, launched by MHRA in 2024, represents a significant innovation in healthcare AI regulation. It provides a controlled pathway for novel AI medical devices to demonstrate safety and effectiveness with real patients, while informing regulatory policy development.

For organizations deploying healthcare AI in the UK, the path to compliance typically involves: (1) determining whether the AI qualifies as a medical device, (2) achieving appropriate MHRA classification and registration, (3) meeting NICE evidence standards for NHS adoption, and (4) ensuring deploying organizations satisfy CQC governance requirements.

UK Healthcare AI Regulatory Landscape

Unlike the EU's horizontal AI Act, the UK regulates healthcare AI through existing sectoral frameworks, primarily medical device regulation for AI products and healthcare provider oversight for deployment settings. This approach aligns with the UK's broader pro-innovation AI regulatory strategy.

MHRA

Medicines and Healthcare products Regulatory Agency

Regulates AI as medical devices under UK MDR 2002. Responsible for UKCA marking, classification, market surveillance, and the AI Airlock sandbox. Key authority for pre-market AI medical device approval.


CQC

Care Quality Commission

Inspects and rates healthcare providers in England. Assesses AI use through fundamental standards framework covering safe care, governance, and staffing. Evaluates whether providers properly govern and monitor AI tools.


NICE

National Institute for Health and Care Excellence

Sets evidence standards through the Evidence Standards Framework for Digital Health Technologies. Evaluates clinical and economic evidence for NHS adoption. Guidance informs commissioning decisions.


NHS AI Lab

NHS Transformation Directorate

Accelerates safe, effective AI adoption across the NHS. Develops guidance, runs AI funding programs, produces the Algorithm Assurance Framework, and provides procurement guidance through the Buyer's Guide to AI in Health and Care.


Additional Oversight Bodies

ICO

Data protection and UK GDPR compliance for health data processing. Automated decision-making requirements under Article 22.

HRA

Health Research Authority oversees AI research involving NHS patients. Ethics approval for clinical studies.

AI Security Institute

Evaluates frontier AI safety, though healthcare-specific guidance remains with MHRA and NHS bodies.

MHRA Medical Device Classification for AI

Under UK MDR 2002, software qualifies as a medical device if it has a medical intended purpose. AI used for diagnosis, monitoring, prediction, or treatment recommendation is typically classified as Software as a Medical Device (SaMD). Classification determines regulatory requirements, from self-declaration for Class I to full conformity assessment for Class III.

Does Your AI Qualify as a Medical Device?

YES, it’s likely a medical device if it:

  • Diagnoses, prevents, monitors, predicts, or treats disease
  • Provides clinical decision support that influences treatment
  • Analyzes medical images, pathology slides, or clinical data for diagnosis
  • Monitors physiological parameters with clinical implications

NO, it’s probably not a medical device if it:

  • Provides general health and wellness information only
  • Performs purely administrative functions (scheduling, billing)
  • Acts as a simple data repository without clinical analysis
  • Supports research without direct clinical application
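
To make the screening logic concrete, the sketch below expresses the criteria above as a simple rule: a single medical intended purpose makes device status likely, whatever else the software does. It is an illustration only (all names are invented for the example), not a substitute for MHRA's qualification guidance.

```python
# Hypothetical screening helper for the criteria above -- an illustration,
# not MHRA guidance. All names here are invented for the example.

MEDICAL_PURPOSES = {
    "diagnosis_or_treatment",      # diagnoses, prevents, monitors, predicts, or treats disease
    "clinical_decision_support",   # influences treatment decisions
    "clinical_data_analysis",      # images, pathology slides, clinical data for diagnosis
    "physiological_monitoring",    # parameters with clinical implications
}

def likely_medical_device(declared_functions: set[str]) -> bool:
    """One medical intended purpose is enough to make device status likely,
    even if the software also does wellness, admin, or research work."""
    return bool(declared_functions & MEDICAL_PURPOSES)

# An imaging triage tool that also handles scheduling is still likely a device.
print(likely_medical_device({"clinical_data_analysis", "scheduling"}))   # True
print(likely_medical_device({"general_wellness_info", "scheduling"}))    # False
```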

SaMD Classification Under UK MDR 2002

  • Class I (low risk): wellness apps and symptom checkers providing general information only. Requirements: self-declaration, UKCA marking, registration with MHRA.
  • Class IIa (medium-low risk): clinical decision support, triage tools, non-critical monitoring. Requirements: Approved Body audit, QMS, clinical evidence.
  • Class IIb (medium-high risk): diagnostic imaging AI, cancer detection, treatment planning. Requirements: full Approved Body review; clinical trials may be required.
  • Class III (high risk): AI driving life-sustaining decisions, autonomous treatment. Requirements: stringent Approved Body review, prospective clinical studies.
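
For planning purposes, the classification summary above can be kept as a simple lookup from class to headline obligations. A minimal sketch, with requirement strings that paraphrase the list (UK MDR 2002 is the authoritative source):

```python
# Hypothetical lookup of the SaMD classes summarised above. The requirement
# strings paraphrase the list; UK MDR 2002 is the authoritative source.

SAMD_REQUIREMENTS = {
    "I":   ("self-declaration", "UKCA marking", "MHRA registration"),
    "IIa": ("Approved Body audit", "quality management system", "clinical evidence"),
    "IIb": ("full Approved Body review", "clinical trials may be required"),
    "III": ("stringent Approved Body review", "prospective clinical studies"),
}

def requirements_for(samd_class: str) -> tuple[str, ...]:
    """Return the headline obligations for a given SaMD class."""
    if samd_class not in SAMD_REQUIREMENTS:
        raise ValueError(f"unknown SaMD class: {samd_class!r}")
    return SAMD_REQUIREMENTS[samd_class]

print(requirements_for("IIb"))
```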

Clinical Evidence Requirements

  • Analytical validation: Accuracy, sensitivity, specificity on representative data (see the sketch after this list)
  • Clinical validation: Performance in intended clinical setting with UK population
  • Real-world performance: Post-market surveillance and monitoring
  • Algorithm change protocol: Re-validation requirements for model updates
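
A minimal sketch of the analytical-validation metrics from the first bullet, computing sensitivity and specificity from a labelled evaluation set. The toy data is invented for illustration:

```python
# Sensitivity (true positive rate) and specificity (true negative rate)
# computed from a labelled evaluation set -- the core analytical-validation
# metrics named above. Toy data only.

def sensitivity_specificity(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# 1 = disease present, 0 = absent.
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.67, specificity=1.00
```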

Technical Documentation

  • Intended purpose: Clear statement of clinical use and user population
  • Training data: Description of data sources, quality, and representativeness
  • Risk analysis: FMEA or equivalent covering AI-specific failure modes
  • Cybersecurity: Threat model and security controls for connected devices

MHRA AI Airlock: Regulatory Sandbox

The AI Airlock is MHRA’s regulatory sandbox for AI medical devices, launched in 2024. It provides a controlled environment where developers can test novel AI/ML medical devices with real patients under regulatory supervision, generating real-world evidence while informing regulatory approaches.

How the AI Airlock Works

A phased approach to AI medical device testing

1. Application & Assessment

Developers apply with details of their AI device, intended use, and preliminary safety evidence. MHRA assesses suitability for sandbox participation.

2. Controlled Testing

Approved devices enter controlled testing with real patients in selected NHS sites. MHRA provides ongoing oversight and tailored regulatory advice.

3. Evidence Generation

Real-world evidence collected on safety, effectiveness, and AI behavior. Continuous monitoring identifies issues early.

4. Regulatory Pathway

Successful sandbox participants receive expedited pathway to full market authorization. Evidence informs broader regulatory policy.

Benefits for Developers

  • Direct engagement with MHRA on regulatory requirements
  • Real-world evidence generation in NHS settings
  • Faster pathway to market for successful devices
  • Regulatory certainty and reduced development risk

Benefits for NHS

  • Early access to promising AI innovations
  • Evidence on AI performance in UK clinical settings
  • Protected testing environment with MHRA oversight
  • Influence on regulatory standards development

AI Airlock Eligibility Criteria

Suitable Candidates:

  • Novel AI/ML medical devices
  • Adaptive or continuously learning algorithms
  • AI with limited real-world evidence
  • Innovative intended uses

Requirements:

  • Preliminary safety and performance data
  • Clear intended purpose and user population
  • Commitment to transparency with MHRA
  • NHS partner site for testing

NHS Adoption Pathway

Successfully navigating MHRA requirements is necessary but not sufficient for NHS adoption. AI vendors must also meet NICE evidence standards, NHS data and security requirements, and demonstrate value to commissioners. NHS AI Lab provides guidance throughout this journey.

NICE Evidence Standards Framework for Digital Health Technologies

1. Functional Evidence

The technology works as intended. Technical performance, usability, accessibility, and integration capabilities.

2. Clinical Evidence

Clinical outcomes improve. Comparative effectiveness, safety profile, and benefits across patient populations.

3. Economic Evidence

Cost-effectiveness demonstrated. Resource impact, value for money, and budget impact analysis.

AI-Specific NICE Considerations

  • Algorithmic transparency and explainability
  • Training data quality and representativeness
  • Ongoing performance monitoring plans
  • Generalizability across settings and populations

NHS Data Security Requirements

  • DSPT Compliance: Data Security and Protection Toolkit assessment required
  • DCB0129/DCB0160: Clinical risk management standards for health IT
  • Cyber Essentials Plus: Cybersecurity certification typically required
  • UK GDPR: Lawful basis for health data processing

NHS AI Lab Resources

  • Buyer’s Guide: Procurement guidance for AI in health and care
  • Algorithm Assurance: Framework for AI governance in NHS
  • AI Ethics Initiative: Ethical guidance for health AI development
  • AI Award: Funding for promising AI health innovations

CQC Oversight of Healthcare AI

CQC assesses AI use through its fundamental standards, evaluating whether providers have appropriate governance, staff training, and monitoring in place. Key inspection focus areas, with an illustrative record-keeping sketch after the list:

  • AI governance structure and accountability
  • Staff training and competency assessment
  • Clinical validation and ongoing monitoring
  • Human oversight of AI-informed decisions
  • Incident reporting and response procedures
  • Patient information and consent processes
  • Integration with clinical workflows
  • Performance monitoring and audit trails
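
One way to support the audit-trail and human-oversight expectations above is to log every AI-informed decision alongside the clinician's action. The record structure below is an illustrative assumption, not a CQC-mandated schema:

```python
# Illustrative audit record for AI-informed decisions. Field names are
# assumptions for the example, not a CQC schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    model_id: str                # which model produced the output
    model_version: str           # version matters for re-validation audits
    ai_recommendation: str       # what the AI suggested
    clinician_decision: str      # what the clinician actually did
    override: bool               # did the clinician depart from the AI?
    rationale: str = ""          # free-text reason, useful at inspection
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = AIDecisionRecord(
    model_id="sepsis-risk",
    model_version="2.3.1",
    ai_recommendation="escalate to ICU review",
    clinician_decision="continue ward monitoring",
    override=True,
    rationale="Recent bloods inconsistent with sepsis; senior review requested.",
)
print(record.override, record.timestamp.isoformat())
```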

UK vs EU Healthcare AI Requirements

  • Regulatory framework. UK: sectoral (medical devices, data protection). EU: horizontal AI Act + MDR.
  • High-risk classification. UK: based on medical device class. EU: AI Act Annex III + MDR class.
  • Conformity marking. UK: UKCA. EU: CE + AI Act compliance.
  • Regulatory sandbox. UK: AI Airlock (MHRA). EU: AI Act regulatory sandboxes (member states).
  • Fundamental rights. UK: UK GDPR, Human Rights Act. EU: AI Act FRIA, EU Charter, GDPR.
  • Market access. UK: separate from EU (mutual recognition evolving). EU: single market access.

Dual Compliance Strategy: Organizations seeking access to both UK and EU markets should plan for compliance with both frameworks. While requirements overlap significantly, key differences in conformity marking, AI-specific requirements, and administrative processes require separate consideration. See our detailed UK vs EU AI Act comparison.

Key Takeaways

For AI Developers

  1. Determine early whether your AI qualifies as a medical device and its likely classification
  2. Consider the AI Airlock for novel or adaptive AI devices requiring real-world evidence
  3. Build NICE evidence requirements into development from the start
  4. Ensure robust documentation of training data, validation, and performance monitoring

For Healthcare Providers

  1. Verify UKCA marking and MHRA registration before procuring AI medical devices
  2. Establish AI governance structures that meet CQC expectations
  3. Ensure clinical validation in your specific patient population
  4. Implement ongoing performance monitoring and incident reporting

How GLACIS Supports UK Healthcare AI Compliance

UK healthcare AI faces overlapping regulatory expectations from MHRA, CQC, NHS, and NICE. Each wants evidence that your AI is safe, effective, and well-governed—but they want it in different forms. GLACIS provides the underlying evidence infrastructure that feeds all of these requirements.

Post-Market Surveillance

MHRA post-market surveillance (effective June 2025) requires continuous monitoring of AI medical device performance. GLACIS continuously attests AI behaviour with timestamped, tamper-evident records—feeding your vigilance reporting and trend analysis.
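
As an illustration of how timestamped, tamper-evident records can work in general, the sketch below hash-chains each record to its predecessor so that any retrospective edit is detectable. This shows the generic technique, not GLACIS's actual implementation:

```python
# Tamper-evident, timestamped records via hash chaining -- generic technique,
# not GLACIS's implementation.

import hashlib
import json
from datetime import datetime, timezone

def append_record(chain: list[dict], payload: dict) -> dict:
    """Append a record whose hash covers the payload, a UTC timestamp,
    and the previous record's hash, so any later edit breaks the chain."""
    body = {
        "payload": payload,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; tampering anywhere invalidates the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("payload", "timestamp", "prev_hash")}
        if rec["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain: list[dict] = []
append_record(chain, {"device": "triage-ai", "event": "prediction", "risk": "high"})
append_record(chain, {"device": "triage-ai", "event": "clinician_override"})
print(verify(chain))                  # True
chain[0]["payload"]["risk"] = "low"   # tamper with history...
print(verify(chain))                  # False
```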

CQC Inspection Evidence

When CQC inspectors ask how you ensure AI is integrated safely into clinical workflows, evidence packs show what controls were active, when they triggered, and how outcomes were monitored—supporting fundamental standards compliance.

NICE Evidence Requirements

NICE's Evidence Standards Framework for Digital Health Technologies requires ongoing performance monitoring evidence. GLACIS captures real-world performance data with cryptographic integrity—supporting your functional, clinical, and economic evidence dossier.

Algorithm Assurance

NHS AI Lab’s Algorithm Assurance Framework asks how you monitor for drift and bias. GLACIS samples AI outputs across patient cohorts, creating attestation records that demonstrate ongoing algorithmic fairness and stability.
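
The sketch below illustrates the general idea of cohort-stratified monitoring: compute per-cohort accuracy and flag cohorts that fall below a baseline. Cohort labels, the tolerance threshold, and the drift rule are assumptions for demonstration, not the GLACIS method:

```python
# Cohort-stratified performance monitoring: per-cohort accuracy compared
# against a baseline, with simple drift flags. Illustrative assumptions only.

from collections import defaultdict

def cohort_accuracy(records: list[dict]) -> dict[str, float]:
    """records: each has 'cohort', 'prediction', 'outcome' keys."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["cohort"]] += 1
        hits[r["cohort"]] += int(r["prediction"] == r["outcome"])
    return {c: hits[c] / totals[c] for c in totals}

def drift_alerts(current: dict[str, float], baseline: dict[str, float],
                 tolerance: float = 0.05) -> list[str]:
    """Flag cohorts whose accuracy fell more than `tolerance` below baseline."""
    return [c for c, acc in current.items()
            if c in baseline and baseline[c] - acc > tolerance]

records = [
    {"cohort": "over_65", "prediction": 1, "outcome": 1},
    {"cohort": "over_65", "prediction": 1, "outcome": 0},
    {"cohort": "under_65", "prediction": 0, "outcome": 0},
    {"cohort": "under_65", "prediction": 1, "outcome": 1},
]
print(drift_alerts(cohort_accuracy(records), {"over_65": 0.90, "under_65": 0.92}))
# ['over_65'] -- accuracy 0.50 vs baseline 0.90 exceeds the tolerance
```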

Mapping GLACIS to UK Healthcare Regulatory Requirements

  • MHRA Post-Market Surveillance: continuous monitoring with incident-correlated evidence; trend analysis data for periodic safety reports.
  • CQC Safe Care & Treatment: audit trail of AI recommendations, clinician overrides, and outcome tracking; evidence of human oversight.
  • NICE Performance Monitoring: real-world accuracy metrics with cryptographic integrity, exportable for HTAs and procurement evaluations.
  • NHS Algorithm Assurance: cohort-stratified sampling for bias detection; model version tracking and drift alerts.
  • ICO ADM / DUAA Rights: individual decision retrieval for patient access requests; meaningful human involvement records.

Navigating UK Healthcare AI Compliance?

GLACIS helps healthcare AI developers and deployers build auditable evidence of responsible AI deployment. Our continuous attestation platform creates verifiable records that satisfy regulatory expectations.