🇬🇧 UK AI Regulation • January 2026

UK AI Regulation: The Pro-Innovation Approach

A comprehensive guide to the UK's principles-based, sectoral approach to AI governance—distinct from the EU AI Act and designed to balance innovation with responsible deployment.

Joe Braidwood • CEO, GLACIS • Former SwiftKey
22 min read • 4,500+ words

Executive Summary

The United Kingdom has charted a distinctly different path from the EU on AI regulation. While the EU AI Act creates horizontal, prescriptive requirements across risk tiers, the UK’s "pro-innovation approach" relies on existing sectoral regulators to interpret and apply five core principles within their domains.

In February 2025, the AI Safety Institute was renamed the AI Security Institute, signalling a narrower focus on national security threats rather than broader issues like bias. The Data Use and Access Act 2025 (Royal Assent June 2025) eases constraints on automated decision-making under UK GDPR, allowing organisations to rely on legitimate interests rather than just consent.

Key finding: UK organisations must navigate a patchwork of sectoral guidance from the FCA, PRA, MHRA, ICO, and Ofcom. While less prescriptive than the EU AI Act, this approach creates complexity—particularly for organisations operating across multiple regulated sectors or in both UK and EU markets.

At a glance:

  • 5 core principles
  • 6+ sectoral regulators
  • 75% of firms already using AI (FCA survey)
  • UK AI Bill expected in 2026


The Pro-Innovation Framework

The UK's approach to AI regulation was formally established in the March 2023 White Paper "A pro-innovation approach to AI regulation" (CP 815). This document set out a framework that explicitly prioritises innovation and flexibility over prescriptive compliance requirements.

Core Philosophy

Unlike the EU AI Act's horizontal regulation with risk-based classifications, the UK framework:

  • Empowers existing regulators to interpret and apply AI principles within their domains
  • Avoids statutory requirements—the five principles are currently non-binding guidance
  • Prioritises outcomes over processes—regulators focus on results rather than mandating specific technical measures
  • Maintains flexibility to adapt as AI technology evolves rapidly

Government Response (February 2024)

Following consultation, the government published its response on 6 February 2024, which:

  • Reaffirmed the "agile and principles-based" approach
  • Committed £10 million to boost regulators’ AI expertise
  • Required regulators (FCA, ICO, MHRA, Ofcom, CMA) to publish their AI strategic approaches by 30 April 2024
  • Indicated potential for future binding requirements on developers of the "most powerful" AI systems

AI Opportunities Action Plan (January 2025)

Prime Minister Keir Starmer announced the AI Opportunities Action Plan on 13 January 2025, endorsing all 50 recommendations from the Matt Clifford review. Key elements include:

  • £14 billion in private investment commitments
  • Creation of a National Data Library
  • New AI Energy Council to address compute infrastructure
  • Proposed UK Sovereign AI unit
  • Continued emphasis on growth and opportunity over restrictive regulation

The Five Core Principles

The UK's AI governance framework centres on five cross-sectoral principles that regulators are expected to interpret and apply within their domains:

1. Safety, Security and Robustness

AI systems should function securely, safely, and robustly throughout their lifecycle. This includes protection against cyber-attacks, adversarial inputs, and unexpected failures.

2. Appropriate Transparency and Explainability

Organisations should provide appropriate information about AI systems. The level of transparency should be proportionate to the context and potential impact of decisions.

3. Fairness

AI systems should not produce discriminatory or unfair outcomes. This aligns with existing equality legislation including the Equality Act 2010.

4. Accountability and Governance

Clear accountability structures should exist for AI systems. Organisations should have governance frameworks ensuring responsible development and deployment.

5. Contestability and Redress

Individuals should be able to challenge AI decisions and seek appropriate remedies when harmed. This includes access to human review of automated decisions.

Important: These principles are currently non-statutory. While regulators are expected to incorporate them into their guidance, there is no legal requirement for organisations to demonstrate compliance with the principles themselves—only with existing sectoral regulations as interpreted through the lens of these principles.

AI Security Institute (AISI)

The UK's AI Safety Institute was established in November 2023 as the world’s first state-backed AI evaluation body, initially funded with £100 million from the Frontier AI Taskforce.

Rename to AI Security Institute (February 2025)

On 14 February 2025, Technology Secretary Peter Kyle announced the renaming to the AI Security Institute. Speaking at the Munich Security Conference, Kyle explained the change reflects a "renewed focus" on national security and protecting citizens from crime.

Key Change in Focus

The Institute will not focus on bias or freedom of speech, but instead concentrate on "serious AI risks with security implications"—including cyber-attacks, chemical and biological weapons development, and criminal misuse such as fraud and child sexual abuse material generation.

Key Activities (2024–2025)

  • Model evaluations: Pre-deployment testing of OpenAI’s o1 model (with US AI Safety Institute), Anthropic’s latest models, and 30+ frontier AI systems
  • Research: Published the inaugural Frontier AI Trends Report (December 2025) showing AI models can now complete expert-level cyber tasks
  • Open-source tools: Released Inspect, InspectSandbox, InspectCyber, and ControlArena evaluation frameworks
  • Funding: £15 million Alignment Project, £8 million Systemic Safety Grants, £5 million Challenge Fund
  • Partnerships: New criminal misuse team with Home Office; research partnership with Google DeepMind; San Francisco office

Sectoral Regulators

Unlike the EU's centralised AI Office, the UK relies on existing regulators to govern AI within their domains. Each regulator published its strategic approach to AI in 2024.

Regulator | Sector | Key AI Guidance
FCA | Financial Services | AI Update (April 2024), AI Lab, Consumer Duty applies to AI
PRA | Banks, Insurers | SS1/23 Model Risk Management (effective May 2024)
MHRA | Medical Devices | AI Airlock sandbox, AIaMD guidance, post-market surveillance
ICO | Data Protection | AI and Data Protection guidance, ADM requirements
Ofcom | Communications | Online Safety Act AI implications, synthetic media guidance
CMA | Competition | Foundation Models review, AI partnership monitoring

Digital Regulation Cooperation Forum (DRCF)

The FCA, CMA, ICO, and Ofcom collaborate through the DRCF to coordinate their approaches to digital and AI regulation. In 2024–25, the DRCF launched an AI and Digital Hub to provide joint guidance to organisations navigating multiple regulatory frameworks.

UK GDPR and Automated Decision-Making

Until it was amended by the Data Use and Access Act 2025, Article 22 of the UK GDPR provided the primary legal framework for automated decision-making (ADM) in the UK.

Article 22 Core Rights

Individuals have the right not to be subject to decisions based solely on automated processing (including profiling) that produce either of the following (a two-part test, sketched in code after this list):

  • Legal effects concerning them (e.g., affecting legal status, entitlement to benefits)
  • Similarly significant effects (e.g., job offers, mortgage applications, insurance terms)
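
The test is conjunctive: Article 22 engages only when a decision is both solely automated and produces one of these effects. A minimal Python sketch, purely for illustration (the effect labels and function name are hypothetical, not terms from the legislation):

```python
# Hypothetical illustration; Article 22 scope turns on legal analysis,
# not a lookup table.
SIGNIFICANT_EFFECTS = {
    "legal_status", "benefit_entitlement",       # legal effects
    "job_offer", "mortgage", "insurance_terms",  # similarly significant effects
}

def article_22_engaged(solely_automated: bool, effect: str) -> bool:
    """Both limbs must hold: the decision is based solely on automated
    processing AND it produces a legal or similarly significant effect."""
    return solely_automated and effect in SIGNIFICANT_EFFECTS
```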

ICO Guidance

The ICO provides detailed guidance on AI and data protection, including:

  • Explaining decisions made with AI (joint guidance with Alan Turing Institute)
  • AI risk assessment toolkit for assessing individual rights impacts
  • ADM in recruitment—requiring meaningful human involvement in hiring decisions

The ICO's AI and Biometrics Strategy (June 2025) focuses on three priority areas: transparency and explainability, bias and discrimination, and rights and redress.

Data Use and Access Act 2025

The Data Use and Access Act 2025 received Royal Assent on 19 June 2025, introducing significant changes to UK data protection law and its interaction with AI.

Key Changes for Automated Decision-Making

  • Expanded lawful bases: Organisations can now rely on legitimate interests for ADM, not just consent or contractual necessity
  • Meaningful human intervention: Clarifies that a "competent person" must review automated decisions
  • Required safeguards: Individuals must be informed, able to contest decisions, and access human review (see the sketch after this list)
  • Special category data: Stricter regime continues to apply where sensitive data is involved
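
To make the safeguards concrete, here is a minimal sketch of what a DUAA-aligned ADM record might track. The field and function names are assumptions for illustration; the Act does not prescribe any data structure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ADMDecision:
    # All names here are illustrative, not statutory terms.
    subject_id: str
    outcome: str
    lawful_basis: str                  # e.g. "legitimate_interests", now permitted
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    subject_informed: bool = False     # safeguard 1: tell the individual
    contested: bool = False            # safeguard 2: allow the decision to be challenged
    human_reviewer: str | None = None  # safeguard 3: review by a "competent person"

def safeguards_met(d: ADMDecision) -> bool:
    """Every decision requires the individual to have been informed;
    a contested decision additionally needs a named human reviewer."""
    if not d.subject_informed:
        return False
    if d.contested and d.human_reviewer is None:
        return False
    return True
```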

Implementation Timeline

  • Stage 1 (20 August 2025): Initial provisions came into effect
  • Stage 2 (30 September 2025): Additional changes effective
  • Stage 3 (Early 2026): ADM, direct marketing, and cookie consent changes expected
  • Full implementation: By June 2026

Regulatory Timeline

Already in Effect

Date | Development
April 2024 | FCA AI Update published
May 2024 | MHRA AI Airlock pilot launched
17 May 2024 | PRA SS1/23 Model Risk Management effective
14 Feb 2025 | AI Safety Institute renamed to AI Security Institute
19 June 2025 | Data Use and Access Act 2025 Royal Assent

Expected (2025 and Beyond)

Date | Development
Autumn 2025 | ICO ADM/profiling guidance consultation
Early 2026 | DUAA ADM provisions fully effective
Summer 2026 | Potential UK AI Bill introduction (after King's Speech)
30 June 2030 | MHRA: UKCA mark required for medical devices (CE marking ends)

UK vs EU AI Act: Key Differences

For organisations operating in both UK and EU markets, understanding the differences between these frameworks is critical.

Aspect | UK Approach | EU AI Act
Regulatory Structure | Principles-based, sectoral | Comprehensive horizontal legislation
Central Authority | None (AISI evaluates only) | European AI Office + national authorities
Risk Classification | No formal tiers | Four tiers: Unacceptable, High, Limited, Minimal
Prohibited Practices | None specified in law | Explicit bans (social scoring, certain biometrics)
Compliance Obligations | Flexible, outcome-focused | Prescriptive requirements per risk tier
Statutory Basis | Non-statutory principles | Legally binding regulation
Current Focus (2025) | Security and growth | Safety and fundamental rights

Extraterritorial Impact

UK companies placing AI systems on the EU market or providing AI outputs to EU users must still comply with the EU AI Act. The UK's lighter-touch approach does not exempt organisations from EU requirements when operating in EU markets.

UK AI Compliance Checklist

While the UK lacks prescriptive AI-specific requirements, organisations should address these areas:

Identify applicable sectoral regulators

Determine which regulators (FCA, MHRA, ICO, etc.) have jurisdiction over your AI use cases

Review regulator-specific AI guidance

Each regulator has published its strategic approach—ensure your practices align

Assess ADM under UK GDPR/DUAA

Ensure automated decisions have appropriate safeguards and human review mechanisms

Document accountability structures

Designate accountable individuals for AI governance (84% of FCA-regulated firms have done this)

Consider EU AI Act obligations

If operating in EU markets, ensure compliance with EU requirements regardless of UK rules

How GLACIS Supports UK AI Compliance

The UK's principles-based approach gives organisations flexibility—but also requires them to demonstrate they’ve applied the principles appropriately. When regulators ask "how do you ensure accountability?" or "show us your transparency measures," you need evidence.

Continuous Attestation → Accountability & Governance

Real-time evidence collection with cryptographic proofs. Every AI interaction is logged with tamper-evident records—demonstrating the accountability principle to FCA, MHRA, or ICO without manual audit trails.
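
GLACIS's implementation is proprietary, but the tamper-evident idea itself is simple to sketch: each log record's hash commits to the previous record, so any later edit to history breaks the chain. A minimal Python illustration (the record layout and names are assumptions, not the GLACIS format):

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # placeholder hash for the first record

def append_record(chain: list[dict], event: dict) -> dict:
    """Append an AI interaction record whose hash covers the previous
    record's hash, making later edits to history detectable."""
    body = {
        "event": event,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": chain[-1]["hash"] if chain else GENESIS,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return body

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and link; False means tampering somewhere."""
    prev = GENESIS
    for rec in chain:
        unhashed = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(unhashed, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Because verify re-derives every digest from the record contents, altering a single field anywhere in the log flips the result to False.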

Evidence Pack → Regulator Inquiries

When the FCA asks how AI decisions are made, or the ICO requests ADM documentation, you have audit-ready evidence packages. Structured records showing what the AI did, why, and with what safeguards.

AI Readiness Score → Gap Assessment

Measure your alignment against all five UK principles and relevant sectoral requirements. Identify gaps before regulators find them, with prioritised remediation steps.
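
As a toy illustration of how a per-principle gap assessment could work (the 0-to-1 scoring scale, threshold, and names below are hypothetical, not the GLACIS methodology):

```python
UK_PRINCIPLES = [
    "safety_security_robustness",
    "transparency_explainability",
    "fairness",
    "accountability_governance",
    "contestability_redress",
]

def readiness_gaps(scores: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return the principles scoring below the threshold, worst first,
    as a simple prioritised remediation list. Scores run from 0.0 to 1.0."""
    gaps = [(p, scores.get(p, 0.0)) for p in UK_PRINCIPLES
            if scores.get(p, 0.0) < threshold]
    return [p for p, _ in sorted(gaps, key=lambda t: t[1])]
```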

Mapping GLACIS to UK Principles

UK Principle | GLACIS Capability
Safety, Security, Robustness | Continuous monitoring detects anomalies and drift; evidence of guardrails in action
Transparency & Explainability | Full audit trail of AI inputs, outputs, and decision factors; exportable for individual requests (DUAA/ADM)
Fairness | Sampling and attestation across user cohorts enables bias-detection evidence
Accountability & Governance | Cryptographic receipts prove controls were active at time of decision; links to SM&CR responsibilities
Contestability & Redress | Retrieval of specific decision records for individual complaints or subject access requests

Need Help Navigating UK AI Compliance?

Get a personalised assessment of your AI governance gaps across UK sectoral requirements—and a roadmap to close them.

