🏦 UK Financial Services • January 2026

UK Financial Services AI: FCA & PRA Compliance

How Consumer Duty, SM&CR, and SS1/23 Model Risk Management apply to AI in banking, insurance, and investment management—with no AI-specific rules on the horizon.

15 min read

Executive Summary

The FCA and PRA regulate AI in UK financial services through existing frameworks rather than AI-specific rules. In December 2025, FCA CEO Nikhil Rathi confirmed no AI-specific regulations are planned, citing the technology’s rapid evolution "every three to six months."

Key frameworks include Consumer Duty (good customer outcomes), SM&CR (senior management accountability), and PRA SS1/23 (model risk management for banks using internal models). The FCA’s 2024 survey found 75% of firms already use AI, with 84% having an accountable individual for their AI approach.

Key finding: While no prescriptive AI rules exist, firms must demonstrate that AI-driven outcomes meet existing regulatory expectations—particularly around fairness, transparency, and consumer protection. The FCA will "intervene in cases of egregious failures."

At a glance:

  • 75% of firms already use AI
  • 84% have an accountable person for their AI approach
  • 17% use foundation models
  • SS1/23 has been effective since May 2024


FCA’s Approach to AI

The FCA published its AI Update in April 2024, setting out how it expects firms to manage AI within existing regulatory frameworks. The core message: outcomes-focused regulation applies equally to AI.

No AI-Specific Rules

In December 2025, FCA CEO Nikhil Rathi confirmed the FCA will not introduce AI-specific rules:

"We do not plan to introduce extra regulations for AI. Instead, we'll rely on existing frameworks... The technology evolves every three to six months, making prescriptive rules impractical."

Key Regulatory Frameworks

The FCA relies on these existing frameworks for AI oversight:

  • Threshold Conditions: Firms must remain fit, proper, and capable of being effectively supervised
  • Consumer Duty: Firms must deliver good outcomes for retail customers
  • SM&CR: Senior managers are accountable for AI governance within their responsibilities
  • Principles for Businesses: Including Principle 6 (customers’ interests) and Principle 7 (communications)

Enforcement Approach

The FCA will "intervene in cases of egregious failures that are not dealt with." While there's no prescriptive compliance checklist, firms must be able to demonstrate their AI systems produce fair, transparent outcomes.

Consumer Duty & AI

The Consumer Duty (in force since July 2023) is the FCA’s primary lens for assessing AI in retail financial services. It requires firms to act to deliver good outcomes across four areas:

1. Products and Services

AI used in product design, recommendation engines, or personalisation must produce products that meet customer needs. Algorithmic bias that leads to unsuitable recommendations violates this outcome.

2. Price and Value

AI pricing algorithms must deliver fair value. Dynamic pricing or personalised offers must not exploit behavioural biases or create unfair outcomes for vulnerable customers.

3. Consumer Understanding

AI-generated communications must be clear and understandable. LLM-drafted content must meet the same standards as human-written materials.

4. Consumer Support

AI chatbots and automated support must provide equivalent quality to human support. Customers must be able to access human assistance when needed.

Practical Implications

  • Test AI systems for discriminatory outcomes before deployment (a testing sketch follows this list)
  • Monitor AI-driven customer outcomes on an ongoing basis
  • Document how AI contributes to (or risks undermining) good outcomes
  • Ensure human oversight of AI decisions affecting customers
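
The first two bullets are testable in code. Below is a minimal sketch of a pre-deployment outcome check, assuming a tabular decision log with a group column and a binary approval flag. The 80% cut-off is the informal "four-fifths" heuristic, not an FCA threshold, and a flagged gap is a prompt for investigation rather than proof of bias.

```python
import pandas as pd

def approval_rate_disparity(df: pd.DataFrame, group_col: str,
                            approved_col: str) -> pd.Series:
    """Approval rate per group, relative to the best-served group."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Illustrative pre-deployment gate: flag any group approved at under
# 80% of the best-served group's rate for human investigation.
decisions = pd.DataFrame({
    "age_band": ["18-24", "18-24", "25-54", "25-54", "55+", "55+"],
    "approved": [0, 1, 1, 1, 1, 0],
})
disparity = approval_rate_disparity(decisions, "age_band", "approved")
flagged = disparity[disparity < 0.8]
if not flagged.empty:
    print(f"Investigate outcomes for groups: {list(flagged.index)}")
```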

SM&CR Accountability for AI

The Senior Managers and Certification Regime (SM&CR) ensures individual accountability for AI governance. The FCA’s 2024 survey found 72% of firms report executive leadership as accountable for AI use cases.

Accountability Expectations

Firms should consider which Senior Management Functions (SMFs) are accountable for:

  • AI strategy and governance: Often the CEO (SMF1) or a designated SMF
  • AI risk management: Chief Risk Officer (SMF4)
  • AI in customer outcomes: Relevant business line SMFs
  • AI model risk: SMF responsible for internal models (for PRA-regulated firms)
  • AI data governance: Often linked to operations or technology SMFs

FCA Finding

84% of surveyed firms have an accountable individual for their AI approach. However, accountability is often split—most firms report three or more accountable persons or bodies, which can create gaps.

PRA SS1/23: Model Risk Management

Supervisory Statement 1/23, effective from 17 May 2024, sets out the PRA’s expectations for model risk management at banks using internal models for regulatory capital. It explicitly covers AI and machine learning models.

Scope

SS1/23 applies to UK-incorporated banks, building societies, and PRA-designated investment firms with internal model approval for:

  • Credit risk (IRB approach)
  • Market risk (IMA approach)
  • Counterparty credit risk (IMM approach)

The Five Principles

  1. Model Identification & Classification: all AI/ML models must appear in the model inventory; foundation models (LLMs) require documented use cases (an inventory-record sketch follows this list).
  2. Governance: clear ownership and accountability for AI models, with board oversight of material model risks.
  3. Development, Implementation & Use: AI model development must follow documented standards, including explainability requirements for complex models.
  4. Independent Validation: AI models require validation proportionate to their risk; generative AI may need tailored validation approaches.
  5. Risk Mitigants: fallback mechanisms for AI model failures and real-time monitoring of production AI systems.
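
Principle 1 is, in practice, a data-modelling exercise. The sketch below shows one plausible shape for an inventory record; the field names and risk tiers are illustrative assumptions, since SS1/23 sets expectations rather than a schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ModelTier(Enum):
    """Illustrative tiers; SS1/23 requires risk classification
    but leaves the tiering scheme to the firm."""
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class ModelRecord:
    model_id: str
    name: str
    owner_smf: str               # accountable senior manager (Principle 2)
    tier: ModelTier              # risk classification (Principle 1)
    use_cases: list[str]         # documented, incl. for foundation models
    is_foundation_model: bool
    last_validated: date | None  # independent validation (Principle 4)
    fallback: str                # documented risk mitigant (Principle 5)

# The inventory itself: every in-scope AI/ML model gets an entry.
inventory: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    inventory[record.model_id] = record
```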

Foundation Models / LLMs

SS1/23 specifically addresses foundation models:

  • Must be included in model inventories with documented use cases
  • Risk classification should reflect downstream applications
  • Validation may require novel approaches due to model complexity
  • Third-party LLMs (e.g., GPT-4, Claude) still require oversight (see the sketch after this list)
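
One way to give third-party LLMs the oversight the last bullet calls for is to route every call through a wrapper that rejects unregistered use cases and writes an audit record. The sketch below assumes a placeholder `call_llm` client and illustrative use-case names; it is not a prescribed pattern.

```python
import json
import time

REGISTERED_USE_CASES = {"kyc-doc-summary", "complaint-triage"}  # illustrative

def call_llm(prompt: str) -> str:
    """Placeholder for the firm's real third-party client
    (e.g. a vendor SDK); stubbed so the sketch is runnable."""
    return "stub response"

def governed_llm_call(prompt: str, use_case: str,
                      log_path: str = "llm_audit.jsonl") -> str:
    # Refuse any call not tied to a documented, inventoried use case.
    if use_case not in REGISTERED_USE_CASES:
        raise PermissionError(f"unregistered LLM use case: {use_case}")
    response = call_llm(prompt)
    # Append-only usage record for later validation and review;
    # sizes rather than content are logged by default.
    with open(log_path, "a") as f:
        f.write(json.dumps({
            "ts": time.time(),
            "use_case": use_case,
            "prompt_chars": len(prompt),
            "response_chars": len(response),
        }) + "\n")
    return response
```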

FCA AI Lab

The FCA launched its AI Lab in October 2024 to help firms develop AI safely and responsibly. It comprises five initiatives:

Supercharged Sandbox

Test AI innovations with real consumers in a controlled regulatory environment.

AI Live Testing

Work directly with the FCA to develop, assess, and deploy AI systems in UK markets. Confirmed in September 2025, with participating firms starting from October 2025.

AI Spotlight

Analysis of emerging AI trends and their regulatory implications.

AI Sprint

Time-limited initiatives addressing sector-wide AI challenges. Feedback published April 2025.

AI Input Zone

Channel for industry feedback on AI challenges. Open November 2024–January 2025.

AI Use Cases & Regulatory Risks

The FCA's 2024 survey identified where financial services firms are deploying AI:

  • Credit Decisioning: Consumer Duty fair value, explainability, bias testing, and automated decision-making (ADM) rights under the Data (Use and Access) Act (DUAA)
  • Fraud Detection: false-positive rates, customer impact, operational resilience
  • Customer Service Chatbots: consumer understanding, access to human support, complaint handling
  • Robo-Advice: suitability, disclosure, human oversight, Consumer Duty
  • Claims Processing: fair treatment, explanation of decisions, escalation to humans
  • Risk Modelling: SS1/23 model risk management, validation, documentation
  • AML/KYC: effectiveness, false-positive management, human review

Top Perceived Constraints

According to the FCA survey, firms identify these as the largest constraints on AI adoption:

  1. Data protection and privacy (regulatory)
  2. Resilience, cybersecurity, and third-party rules (regulatory)
  3. Consumer Duty (regulatory)
  4. Safety, security, and robustness of AI models (non-regulatory)
  5. Insufficient talent and skills (non-regulatory)

How GLACIS Supports FCA & PRA Compliance

Without AI-specific rules, financial services firms must prove their AI delivers good outcomes through existing regulatory frameworks. The FCA expects evidence of Consumer Duty compliance; the PRA requires SS1/23 model documentation. GLACIS provides the infrastructure to generate this evidence continuously.

Consumer Duty Evidence

Continuous attestation captures AI outputs across all four Consumer Duty outcomes. When the FCA asks how you ensure good customer outcomes from AI-driven decisions, you have timestamped evidence of what the AI recommended and what safeguards triggered.
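
As an illustration only (not the GLACIS schema or API), an evidence entry of the kind described above might look like the following: a timestamped record of what was recommended and which safeguards fired, hashed so later tampering with the entry is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def attestation_record(decision_id: str, recommendation: str,
                       safeguards_triggered: list[str]) -> dict:
    """Illustrative Consumer Duty evidence entry: what the AI
    recommended, which safeguards fired, and when."""
    body = {
        "decision_id": decision_id,
        "recommendation": recommendation,
        "safeguards_triggered": safeguards_triggered,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash lets a reviewer detect later edits to the entry.
    body["sha256"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body
```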

SM&CR Accountability Records

Link AI governance to Senior Management Functions. Our evidence packs show which controls were in place when decisions were made—supporting the 84% of firms with designated AI accountability.

SS1/23 Model Monitoring

PRA SS1/23 Principle 5 requires real-time monitoring and risk mitigants. GLACIS provides continuous observation of AI model behaviour with cryptographic evidence of when guardrails engaged—meeting the "fallback mechanisms" expectation.
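
"Cryptographic evidence" in this context usually means tamper-evident records. A common construction, sketched here as a generic illustration rather than a description of GLACIS internals, is a hash chain in which each entry commits to its predecessor, so any retroactive edit breaks every later link.

```python
import hashlib
import json

def append_event(chain: list[dict], event: dict) -> None:
    """Append a guardrail event; each entry commits to the previous
    entry's hash, making past entries tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"event": event, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

def verify(chain: list[dict]) -> bool:
    """Recompute every link; True only if no entry was modified."""
    prev = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True
```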

Mapping GLACIS to FCA/PRA Requirements

  • Consumer Duty, good outcomes: audit trail of AI recommendations versus actual customer outcomes, with evidence for management information (MI) reporting
  • SS1/23, model inventory: automatic cataloguing of in-scope AI models with metadata, version history, and use cases
  • SS1/23, independent validation: evidence packages structured for internal model validation teams or external reviewers
  • SM&CR, reasonable steps: control attestation records demonstrating that senior managers took reasonable steps
  • DUAA, ADM rights: individual decision retrieval for data subject access requests (DSARs) and contestation, plus evidence of meaningful human review (a retrieval sketch follows this list)
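
For the DUAA row, the operational requirement is fast retrieval of one individual's automated decisions. A minimal sketch, assuming decision records are keyed by a data-subject identifier (field names are illustrative):

```python
from collections import defaultdict

# Index automated decisions by data subject so a DSAR or
# contestation request needs a lookup, not a full-history scan.
decisions_by_subject: dict[str, list[dict]] = defaultdict(list)

def record_decision(subject_id: str, decision: dict) -> None:
    decisions_by_subject[subject_id].append(decision)

def dsar_export(subject_id: str) -> list[dict]:
    """All ADM records held about one person, ready for the
    subject, a human reviewer, or a contestation response."""
    return list(decisions_by_subject[subject_id])
```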

Need Help with Financial Services AI Compliance?

Get a tailored assessment of your AI governance against FCA and PRA expectations—with actionable recommendations.

Get Free Assessment
