🇬🇧 vs 🇪🇺 Framework Comparison • January 2026

UK vs EU AI Act: Which Rules Apply to You?

A practical guide for organisations navigating two fundamentally different approaches to AI regulation—and how to achieve compliance in both markets.

The Bottom Line

The UK and EU have taken fundamentally different approaches to AI regulation. The EU AI Act is a comprehensive, horizontal regulation with prescriptive requirements across risk tiers. The UK’s pro-innovation framework relies on existing sectoral regulators to apply five non-statutory principles.

Critical point: UK companies placing AI systems on the EU market, or whose AI outputs affect EU citizens, must comply with the EU AI Act regardless of UK rules. The UK’s lighter-touch approach offers no exemption from EU requirements.

🇬🇧 UK Approach

  • Principles-based, sectoral regulation
  • 5 non-statutory principles
  • No central AI authority
  • Outcome-focused, flexible
  • Prioritises innovation and growth

🇪🇺 EU AI Act

  • Horizontal, prescriptive regulation
  • 4 risk tiers with specific requirements
  • European AI Office + national authorities
  • Process-focused, compliance-driven
  • Prioritises safety and rights

Detailed Comparison

Regulatory Structure
  • 🇬🇧 UK: Principles-based, sectoral. Existing regulators (FCA, MHRA, ICO, Ofcom) apply principles within their domains.
  • 🇪🇺 EU: Horizontal regulation. A single legal framework applies across all sectors with uniform requirements.

Central Authority
  • 🇬🇧 UK: None. The AI Security Institute evaluates frontier AI but doesn't regulate. The DRCF coordinates regulators.
  • 🇪🇺 EU: European AI Office at EU level. Each member state designates national competent authorities.

Risk Classification
  • 🇬🇧 UK: No formal tiers. Risk assessment is left to individual regulators and organisations.
  • 🇪🇺 EU: Four tiers: unacceptable (banned), high-risk (strict requirements), limited risk (transparency duties), minimal (no requirements).

Prohibited Practices
  • 🇬🇧 UK: No AI-specific prohibitions in law. Existing laws (Equality Act, GDPR) apply.
  • 🇪🇺 EU: Explicit bans: social scoring, real-time remote biometric identification (with narrow exceptions), manipulation, and emotion recognition in workplaces and schools.

High-Risk Requirements
  • 🇬🇧 UK: Depends on sector. FCA: Consumer Duty, SM&CR. MHRA: medical device rules. No unified AI-specific requirements.
  • 🇪🇺 EU: Conformity assessment, risk management, data governance, logging, human oversight, transparency, accuracy/robustness testing, and registration.

Documentation
  • 🇬🇧 UK: Existing sectoral requirements apply. No AI-specific documentation mandates.
  • 🇪🇺 EU: Extensive: technical documentation, quality management system, instructions for use, conformity declaration, and EU registration.

Penalties
  • 🇬🇧 UK: Vary by regulator. The FCA can impose unlimited fines; the ICO up to £17.5M or 4% of global turnover.
  • 🇪🇺 EU: Up to €35M or 7% of global turnover (prohibited practices), €15M or 3% (high-risk violations), €7.5M or 1% (supplying incorrect information).

Timeline
  • 🇬🇧 UK: Ongoing. No comprehensive AI law; potential legislation in 2026.
  • 🇪🇺 EU: Prohibitions: Feb 2025. GPAI obligations: Aug 2025. Full application: Aug 2026.

Legal Basis
  • 🇬🇧 UK: Non-statutory principles. Relies on existing legislation (UK GDPR, sector laws).
  • 🇪🇺 EU: Directly applicable EU regulation with legal force in all member states.
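The EU penalty caps above follow a single pattern: the maximum fine is the higher of a fixed amount and a percentage of global annual turnover. A minimal sketch, using only the tiers and figures quoted in this guide (the tier labels are illustrative shorthand, not legal terms):

```python
# Illustrative only: EU AI Act maximum fines are the *higher* of a fixed cap
# and a share of worldwide annual turnover. Figures taken from the comparison
# above; tier names here are shorthand for this sketch, not statutory labels.
TIERS = {
    "prohibited": (35_000_000, 0.07),    # banned practices
    "high_risk": (15_000_000, 0.03),     # most other violations
    "incorrect_info": (7_500_000, 0.01), # supplying incorrect information
}

def max_eu_fine(tier: str, global_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for a given violation tier."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# A firm with €2bn global turnover facing a prohibited-practice fine:
print(max_eu_fine("prohibited", 2_000_000_000))  # → 140000000.0 (7% exceeds €35M)
```

For smaller firms the fixed cap dominates; for large groups the turnover percentage does, which is why global turnover, not EU revenue, drives exposure.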

Extraterritorial Impact: When EU Rules Apply to UK Companies

Critical for UK Organisations

The EU AI Act applies to UK companies when they place AI systems on the EU market or when their AI outputs are used in the EU. This includes SaaS products accessible to EU customers and AI embedded in products sold in the EU.

The EU AI Act has broad extraterritorial reach. Article 2 specifies it applies to:

  • Providers placing AI systems on the EU market—regardless of where they’re established
  • Deployers of AI systems located within the EU
  • Providers and deployers in third countries where AI output is used in the EU
  • Importers and distributors of AI systems in the EU

Practical Implications

For UK companies, this means:

  • Selling AI software to EU customers triggers EU AI Act compliance
  • EU subsidiaries using UK-developed AI must ensure compliance
  • AI outputs affecting EU citizens (e.g., credit decisions, content moderation) may trigger obligations
  • Products containing AI sold in the EU must meet EU AI Act requirements

Dual Compliance Strategy

Organisations operating in both markets should consider a "highest common denominator" approach—building systems to EU AI Act requirements, which in most respects also satisfy the UK's outcome-focused expectations.

Recommended Approach

1. Classify Your AI Systems Under EU AI Act

Determine risk tier (unacceptable, high, limited, minimal) for each AI system. This provides a structured framework even for UK-only operations.

2. Build to EU Standards

Implement EU AI Act requirements (documentation, risk management, human oversight), which exceed UK expectations and prepare you for potential future UK legislation.

3. Layer UK Sectoral Requirements

Add UK-specific obligations from relevant regulators (FCA Consumer Duty, MHRA medical device rules, ICO ADM requirements) on top of EU compliance.

4. Maintain Dual Documentation

EU requires specific documentation formats. UK regulators may accept different formats. Maintain both where necessary.
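The classification step above can be sketched as a first-pass triage. This is an assumption-laden illustration: the use-case labels and tier mapping below are invented for the example (drawing on the cases this guide names, such as credit scoring and chatbots), and real classification requires legal review of Annex III and Article 5.

```python
# Illustrative first-pass triage of AI systems into EU AI Act risk tiers.
# The labels and mappings are assumptions for this sketch only; an actual
# classification needs legal analysis of Annex III and Article 5.
PROHIBITED = {"social_scoring", "workplace_emotion_recognition"}
HIGH_RISK = {"credit_scoring", "insurance_underwriting", "medical_device"}
LIMITED_RISK = {"chatbot", "synthetic_content"}  # Article 50 transparency duties

def classify(use_case: str) -> str:
    """Return the EU AI Act risk tier for a (hypothetical) use-case label."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

print(classify("credit_scoring"))  # → high
```

Even for UK-only operations, running every system through a triage like this gives the structured inventory that step 1 calls for.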

Key Differences in Practice

Risk Assessment

🇬🇧 UK

No prescribed methodology. Organisations determine approach. Regulators expect "proportionate" risk consideration aligned with the 5 principles.

🇪🇺 EU

Article 9 mandates risk management systems for high-risk AI with specific requirements: identification, analysis, evaluation, and mitigation throughout lifecycle.

Human Oversight

🇬🇧 UK

DUAA requires "meaningful human intervention" for ADM. ICO provides guidance. Specific requirements vary by sector (e.g., FCA SM&CR accountability).

🇪🇺 EU

Article 14 mandates human oversight for high-risk AI with specific capabilities: understanding, monitoring, interpreting, deciding to override, and stopping the system.

Transparency

🇬🇧 UK

"Appropriate transparency" principle. ICO guidance on explaining AI decisions. No mandatory disclosures for AI interaction (unlike EU chatbot rules).

🇪🇺 EU

Article 50: Users must be informed when interacting with AI (chatbots), viewing synthetic content, or subject to emotion recognition/biometric categorisation.

Timeline Comparison

Feb 2025
  • 🇬🇧 UK: AI Safety Institute renamed the AI Security Institute
  • 🇪🇺 EU: Prohibited AI practices banned

June 2025
  • 🇬🇧 UK: DUAA receives Royal Assent

Aug 2025
  • 🇬🇧 UK: DUAA Stage 1 effective
  • 🇪🇺 EU: GPAI model obligations apply

Aug 2026
  • 🇬🇧 UK: Potential UK AI Bill
  • 🇪🇺 EU: Full EU AI Act application

Aug 2027
  • 🇪🇺 EU: Extended timeline for existing medical AI

Sector-Specific Considerations

Financial Services

🇬🇧 UK (FCA/PRA)
  • Consumer Duty applies to AI outcomes
  • SM&CR accountability for AI decisions
  • SS1/23 Model Risk Management
  • No AI-specific rules (confirmed Dec 2025)

🇪🇺 EU AI Act
  • Credit scoring AI is high-risk (Annex III)
  • Insurance underwriting AI is high-risk
  • Full conformity assessment required
  • Mandatory registration in EU database

Healthcare

🇬🇧 UK (MHRA)
  • AI Airlock regulatory sandbox
  • Medical device regulations apply
  • CE marking valid until June 2030
  • Post-market surveillance from June 2025

🇪🇺 EU AI Act + MDR
  • AI medical devices are high-risk
  • Dual compliance: AI Act + MDR/IVDR
  • Conformity assessment via notified body
  • Extended timeline to Aug 2027

How GLACIS Supports Dual UK/EU Compliance

Operating in both markets means meeting two different standards—the EU's prescriptive requirements and the UK's principle-based expectations. GLACIS provides a single evidence infrastructure that satisfies both, avoiding parallel compliance programmes.

Build Once, Prove to Both

GLACIS attestation records are structured to meet EU AI Act documentation requirements (Article 11) while also satisfying UK sectoral regulator expectations. One evidence infrastructure, two compliance outcomes.

EU AI Act Technical Documentation

High-risk AI systems need extensive technical files under the EU AI Act. GLACIS generates continuous evidence of risk management, data governance, human oversight, and accuracy—core Annex IV requirements.

UK Principles Evidence

UK regulators want proof of outcomes, not process checklists. GLACIS captures what actually happened—the five principles in action—giving FCA, MHRA, or ICO the evidence they need without prescriptive formats.

Mapping GLACIS to Dual Compliance

Risk Management
  • 🇪🇺 EU AI Act: Article 9 risk management system
  • 🇬🇧 UK: Sectoral guidance
  • GLACIS evidence: Continuous risk attestation with timestamped controls

Human Oversight
  • 🇪🇺 EU AI Act: Article 14
  • 🇬🇧 UK: DUAA meaningful human intervention
  • GLACIS evidence: Override and escalation records with operator context

Transparency
  • 🇪🇺 EU AI Act: Article 13 / Article 50
  • 🇬🇧 UK: Principle 2
  • GLACIS evidence: Full audit trail exportable in multiple formats

Accuracy/Robustness
  • 🇪🇺 EU AI Act: Article 15
  • 🇬🇧 UK: Principle 1
  • GLACIS evidence: Performance metrics and guardrail trigger records

Post-Market Monitoring
  • 🇪🇺 EU AI Act: Article 72
  • 🇬🇧 UK: Sectoral PMS requirements
  • GLACIS evidence: Continuous production attestation for incident correlation

Operating in Both UK and EU Markets?

Get a dual-compliance assessment identifying gaps in both frameworks—and a roadmap to close them efficiently.

Get Free Assessment
