Detailed Comparison
| Aspect | 🇬🇧 UK | 🇪🇺 EU AI Act |
|---|---|---|
| Regulatory Structure | Principles-based, sectoral. Existing regulators (FCA, MHRA, ICO, Ofcom) apply principles within their domains. | Horizontal regulation. Single legal framework applies across all sectors with uniform requirements. |
| Central Authority | None. AI Security Institute evaluates frontier AI but doesn’t regulate. DRCF coordinates regulators. | European AI Office at EU level. Each member state designates national competent authorities. |
| Risk Classification | No formal tiers. Risk assessment left to individual regulators and organisations. | Four tiers: Unacceptable (banned), High-risk (strict requirements), Limited risk (transparency), Minimal (no requirements). |
| Prohibited Practices | No AI-specific prohibitions in law. Existing laws (Equality Act, GDPR) apply. | Explicit bans: social scoring, real-time remote biometric ID (exceptions), manipulation, emotion recognition in workplaces/schools. |
| High-Risk Requirements | Depends on sector. FCA: Consumer Duty, SM&CR. MHRA: medical device rules. No unified AI-specific requirements. | Conformity assessment, risk management, data governance, logging, human oversight, transparency, accuracy/robustness testing, registration. |
| Documentation | Existing sectoral requirements apply. No AI-specific documentation mandates. | Extensive: technical documentation, quality management system, instructions for use, conformity declaration, EU registration. |
| Penalties | Vary by regulator. FCA can impose unlimited fines. ICO up to £17.5M/4% turnover. | Up to €35M or 7% global turnover (prohibited), €15M or 3% (high-risk), €7.5M or 1% (transparency). |
| Timeline | Ongoing. No comprehensive AI law. Potential legislation 2026. | Prohibitions: Feb 2025. GPAI: Aug 2025. Full application: Aug 2026. |
| Legal Basis | Non-statutory principles. Relies on existing legislation (UK GDPR, sector laws). | Directly applicable EU regulation with legal force in all member states. |
Extraterritorial Impact: When EU Rules Apply to UK Companies
Critical for UK Organisations
The EU AI Act applies to UK companies when they place AI systems on the EU market or when their AI outputs are used in the EU. This includes SaaS products offered to EU customers and AI embedded in products sold in the EU.
The EU AI Act has broad extraterritorial reach. Article 2 specifies it applies to:
- Providers placing AI systems on the EU market—regardless of where they’re established
- Deployers of AI systems located within the EU
- Providers and deployers in third countries where AI output is used in the EU
- Importers and distributors of AI systems in the EU
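The scope test above can be read as a simple decision rule: any one trigger brings a system into scope, regardless of where the provider is established. A minimal sketch of that logic, assuming a deliberately simplified model of Article 2 (the field names are illustrative, and real scope determinations need legal analysis):

```python
from dataclasses import dataclass

@dataclass
class AISystemContext:
    """Illustrative facts about an AI system's deployment footprint."""
    placed_on_eu_market: bool            # offered/sold to EU customers
    deployer_in_eu: bool                 # operated by an entity in the EU
    output_used_in_eu: bool              # e.g. credit decisions affecting EU residents
    imported_or_distributed_in_eu: bool  # embedded in products imported into the EU

def eu_ai_act_applies(ctx: AISystemContext) -> bool:
    """Simplified Article 2 scope check: any single trigger is sufficient."""
    return any([
        ctx.placed_on_eu_market,
        ctx.deployer_in_eu,
        ctx.output_used_in_eu,
        ctx.imported_or_distributed_in_eu,
    ])

# Example: a UK SaaS product sold to EU customers is in scope.
saas = AISystemContext(True, False, False, False)
assert eu_ai_act_applies(saas)
```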
Practical Implications
For UK companies, this means:
- Selling AI software to EU customers triggers EU AI Act compliance
- EU subsidiaries using UK-developed AI must ensure compliance
- AI outputs affecting EU citizens (e.g., credit decisions, content moderation) may trigger obligations
- Products containing AI sold in the EU must meet EU AI Act requirements
Dual Compliance Strategy
Organisations operating in both markets should consider a "highest common denominator" approach: building systems that meet EU AI Act requirements, which in most cases will also satisfy the UK's principles-based expectations.
Recommended Approach
Classify Your AI Systems Under EU AI Act
Determine the risk tier (unacceptable, high, limited, minimal) for each AI system. This provides a structured framework even for UK-only operations.
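In practice this classification can start life as a simple lookup that every new system passes through. A minimal sketch, assuming an illustrative and non-exhaustive mapping of use cases to tiers (real classification requires reading the prohibition list and Annex III in full):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited (e.g. social scoring)
    HIGH = "high"                  # Annex III use cases
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no mandatory requirements

# Illustrative, non-exhaustive mapping; hypothetical use-case keys.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "insurance_underwriting": RiskTier.HIGH,
    "medical_device": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Default to HIGH when the use case is unknown, forcing manual review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown cases to high-risk is a deliberately conservative choice: it routes anything unclassified to human review rather than silently treating it as minimal risk.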
Build to EU Standards
Implement EU AI Act requirements (documentation, risk management, human oversight), which generally exceed UK expectations and position you for potential future UK legislation.
Layer UK Sectoral Requirements
Add UK-specific obligations from relevant regulators (FCA Consumer Duty, MHRA medical device rules, ICO ADM requirements) on top of EU compliance.
Maintain Dual Documentation
The EU requires specific documentation formats; UK regulators may accept different ones. Maintain both where necessary.
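One way to avoid maintaining two parallel document sets is to keep a single structured record and render each regulator's format from it. A minimal sketch under that assumption; the field names and section outlines below are illustrative, not prescribed formats:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    """Single source of truth from which both formats are rendered."""
    system_name: str
    risk_tier: str
    risk_controls: list[str]
    oversight_measures: list[str]
    accuracy_metrics: dict[str, float]

def render_eu_technical_file(rec: EvidenceRecord) -> str:
    """Annex IV-style technical documentation outline (illustrative)."""
    return "\n".join([
        f"1. General description: {rec.system_name} (tier: {rec.risk_tier})",
        f"2. Risk management: {'; '.join(rec.risk_controls)}",
        f"3. Human oversight: {'; '.join(rec.oversight_measures)}",
        f"4. Accuracy and robustness: {rec.accuracy_metrics}",
    ])

def render_uk_principles_summary(rec: EvidenceRecord) -> str:
    """Outcome-focused summary of the kind a UK sectoral regulator might expect."""
    return (
        f"{rec.system_name}: safety controls {rec.risk_controls}; "
        f"human intervention via {rec.oversight_measures}; "
        f"measured performance {rec.accuracy_metrics}."
    )
```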
Key Differences in Practice
Risk Assessment
🇬🇧 UK: No prescribed methodology; organisations determine their own approach. Regulators expect "proportionate" risk consideration aligned with the five principles.
🇪🇺 EU: Article 9 mandates a risk management system for high-risk AI with specific requirements: identification, analysis, evaluation, and mitigation throughout the lifecycle.
Human Oversight
🇬🇧 UK: DUAA requires "meaningful human intervention" for ADM. ICO provides guidance. Specific requirements vary by sector (e.g., FCA SM&CR accountability).
🇪🇺 EU: Article 14 mandates human oversight for high-risk AI with specific capabilities: understanding, monitoring, interpreting, deciding to override, and stopping the system.
Transparency
"Appropriate transparency" principle. ICO guidance on explaining AI decisions. No mandatory disclosures for AI interaction (unlike EU chatbot rules).
Article 50: Users must be informed when interacting with AI (chatbots), viewing synthetic content, or subject to emotion recognition/biometric categorisation.
Timeline Comparison
| Date | 🇬🇧 UK Development | 🇪🇺 EU AI Act Deadline |
|---|---|---|
| Feb 2025 | AI Safety Institute renamed AI Security Institute | Prohibited AI practices banned |
| June 2025 | DUAA Royal Assent | — |
| Aug 2025 | DUAA Stage 1 effective | GPAI model obligations apply |
| Aug 2026 | Potential UK AI Bill | Full EU AI Act application |
| Aug 2027 | — | Extended timeline for existing medical AI |
Sector-Specific Considerations
Financial Services
🇬🇧 UK
- Consumer Duty applies to AI outcomes
- SM&CR accountability for AI decisions
- SS1/23 Model Risk Management
- No AI-specific rules (confirmed Dec 2025)

🇪🇺 EU AI Act
- Credit scoring AI is high-risk (Annex III)
- Insurance underwriting AI is high-risk
- Full conformity assessment required
- Mandatory registration in EU database
Healthcare
🇬🇧 UK
- AI Airlock regulatory sandbox
- Medical device regulations apply
- CE marking valid until June 2030
- Post-market surveillance from June 2025

🇪🇺 EU AI Act
- AI medical devices are high-risk
- Dual compliance: AI Act + MDR/IVDR
- Conformity assessment via notified body
- Extended timeline to Aug 2027
How GLACIS Supports Dual UK/EU Compliance
Operating in both markets means meeting two different standards: the EU's prescriptive requirements and the UK's principles-based expectations. GLACIS provides a single evidence infrastructure that satisfies both, avoiding parallel compliance programmes.
Build Once, Prove to Both
GLACIS attestation records are structured to meet EU AI Act documentation requirements (Article 11) while also satisfying UK sectoral regulator expectations. One evidence infrastructure, two compliance outcomes.
EU AI Act Technical Documentation
High-risk AI systems need extensive technical files under the EU AI Act. GLACIS generates continuous evidence of risk management, data governance, human oversight, and accuracy—core Annex IV requirements.
UK Principles Evidence
UK regulators want proof of outcomes, not process checklists. GLACIS captures what actually happened—the five principles in action—giving FCA, MHRA, or ICO the evidence they need without prescriptive formats.
Mapping GLACIS to Dual Compliance
| Requirement | 🇪🇺 EU AI Act | 🇬🇧 UK Approach | GLACIS Evidence |
|---|---|---|---|
| Risk Management | Article 9 RMS | Sectoral guidance | Continuous risk attestation with timestamped controls |
| Human Oversight | Article 14 | DUAA meaningful intervention | Override and escalation records with operator context |
| Transparency | Article 13 / Article 50 | Principle 2 | Full audit trail exportable in multiple formats |
| Accuracy/Robustness | Article 15 | Principle 1 | Performance metrics and guardrail trigger records |
| Post-Market Monitoring | Article 72 | Sectoral PMS | Continuous production attestation for incident correlation |
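In code, the mapping above amounts to tagging each evidence item with the obligations it supports, so one record store can be queried by either regime. A sketch under the assumption of a generic attestation log; GLACIS's actual schema and API are not documented here, so every name below is hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Attestation:
    """Hypothetical attestation record tagged for both regimes."""
    event: str                  # e.g. "human_override"
    timestamp: datetime
    eu_articles: list[str] = field(default_factory=list)    # e.g. ["Art. 14"]
    uk_principles: list[str] = field(default_factory=list)  # e.g. ["DUAA meaningful intervention"]

# One evidence store; each entry carries both regimes' tags.
LOG: list[Attestation] = [
    Attestation("risk_control_check", datetime.now(timezone.utc),
                ["Art. 9"], ["Sectoral guidance"]),
    Attestation("human_override", datetime.now(timezone.utc),
                ["Art. 14"], ["DUAA meaningful intervention"]),
]

def evidence_for(obligation: str) -> list[Attestation]:
    """Filter the single store by whichever obligation a regulator cites."""
    return [a for a in LOG
            if obligation in a.eu_articles or obligation in a.uk_principles]

print([a.event for a in evidence_for("Art. 14")])  # -> ['human_override']
```

The same `human_override` record answers an EU Article 14 query and a UK DUAA query, which is the "build once, prove to both" idea in miniature.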