Annex III Category Analysis: Essential Private Services
The EU AI Act classifies AI systems by risk level, with high-risk systems subject to the most stringent requirements. Insurance AI falls under Annex III, point 5: Access to and enjoyment of essential private services and essential public services and benefits.
Specifically, Annex III, points 5(b) and 5(c) cover:
"AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud" and "AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance."
The rationale is clear: insurance decisions can materially affect individuals’ access to essential services. Denial of life or health insurance, or pricing that makes it unaffordable, can leave individuals without crucial financial protection in the event of illness, disability, or death.
Why Life and Health Insurance Specifically?
The European Commission’s impact assessment identified life and health insurance as "essential private services" because:
- Fundamental rights impact: Denial of health insurance affects access to healthcare, a fundamental right under the EU Charter
- Asymmetric information: Insurers have sophisticated analytical capabilities; individuals cannot meaningfully contest AI-driven decisions
- Discrimination risk: AI systems trained on historical data may perpetuate or amplify discriminatory patterns
- Limited alternatives: Unlike property insurance, individuals cannot easily forgo life or health coverage
Scope: What Counts as High-Risk Insurance AI
Understanding the precise scope of "risk assessment and pricing" is critical for classification. The regulation targets AI systems that make or materially influence decisions about individual insurance applicants.
Covered Activities
| Activity | High-Risk? | Reasoning |
|---|---|---|
| Individual underwriting | Yes | Directly affects access to life/health insurance |
| Premium pricing for individuals | Yes | Unaffordable premiums effectively deny access |
| Risk scoring/classification | Yes | Foundational to underwriting and pricing decisions |
| Claims assessment (denial/approval) | Likely Yes | Affects enjoyment of purchased coverage |
| Policy renewal decisions | Yes | Non-renewal affects continued access |
| Fraud detection | Excluded | Explicitly carved out in Annex III |
Key Determining Factors
Four factors determine whether insurance AI is high-risk:
1. Insurance Type
- Life insurance: High-risk
- Health insurance: High-risk
- Property/casualty: Not explicitly listed
- Commercial lines: Not explicitly listed
2. Subject of Decision
- Natural persons (individuals): Covered
- Legal persons (companies): Not covered
- Group policies: Depends on individual impact
3. Decision Impact
- Coverage denial: High-risk
- Material pricing decisions: High-risk
- Minor administrative actions: Likely not high-risk
4. AI System Role
- Autonomous decision-making: High-risk
- Material decision support: High-risk
- Pure analytics/reporting: Gray area
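The four factors above can be sketched as a simple decision procedure. This is an illustrative simplification only: the type names, field names, and return labels below are assumptions for the sketch, and actual classification requires legal analysis of the specific system.

```python
from dataclasses import dataclass

# Lines of business explicitly named in Annex III point 5(c)
COVERED_LINES = {"life", "health"}

@dataclass
class InsuranceAISystem:
    line_of_business: str           # e.g. "life", "health", "property"
    subject_is_natural_person: bool  # individuals vs. legal persons
    decision_impact: str             # "coverage_denial", "material_pricing", "minor_admin"
    system_role: str                 # "autonomous", "material_support", "analytics_only"

def annex_iii_classification(system: InsuranceAISystem) -> str:
    """Rough mapping of the four factors to 'high-risk', 'not high-risk', or 'gray area'."""
    if system.line_of_business not in COVERED_LINES:
        return "not high-risk"   # not explicitly listed in Annex III point 5
    if not system.subject_is_natural_person:
        return "not high-risk"   # legal persons are out of scope
    if system.decision_impact == "minor_admin":
        return "not high-risk"   # likely outside "risk assessment and pricing"
    if system.system_role == "analytics_only":
        return "gray area"       # pure analytics/reporting is unsettled
    return "high-risk"           # covered line + natural person + material role
```

Note how the factors combine conjunctively: a system must hit a covered line of business, a natural person, and a material decision role before the high-risk label applies.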
When Insurance AI IS High-Risk
Insurance AI is definitively high-risk when it meets the following criteria:
High-Risk Classification Applies When:
- Life or health insurance underwriting, pricing, or claims decisions for individual natural persons
- AI system makes or materially influences the decision (not purely informational)
- System is placed on EU market or used in EU (regardless of provider location)
- Output affects EU residents, even if system is operated from outside EU
Examples of high-risk insurance AI:
- ML model predicting mortality risk for life insurance applications
- Health insurance premium optimization algorithm using individual health data
- Automated claims triage system that denies or delays health insurance claims
- Risk scoring system used to determine life insurance policy renewals
When Insurance AI May NOT Be High-Risk
Certain insurance AI applications may fall outside the high-risk classification:
Potential Exclusions from High-Risk:
- Property and casualty insurance (auto, home, commercial) - not explicitly listed
- Commercial/corporate insurance (legal persons, not natural persons)
- Fraud detection systems - explicitly excluded in Annex III
- Internal analytics not affecting individual decisions (portfolio analysis, reserving)
- Customer service chatbots providing general information (limited risk, transparency only)
Important caveat: Property and casualty insurance AI may still be caught under Annex III’s broader "essential services" language if it materially affects individuals’ access to housing (homeowners insurance) or transportation (auto insurance). Regulators may interpret this expansively.
Requirements If Classified as High-Risk (Articles 9-15)
High-risk insurance AI systems must comply with seven core requirements under Articles 9-15 before placement on the EU market:
Risk Management System
Continuous, iterative process throughout the AI system lifecycle. Identify and analyze known and foreseeable risks. Estimate and evaluate risks. Adopt risk mitigation measures. Test to ensure appropriate performance.
Data and Data Governance
Training, validation, and testing data must be relevant, representative, and free of errors. Examine data for biases. Ensure appropriate statistical properties for the intended purpose.
Technical Documentation
Comprehensive documentation per Annex IV covering system design, development, capabilities, limitations, and monitoring procedures. Must demonstrate conformity assessment compliance.
Record-Keeping (Logging)
Automatic logging of events during system operation. Enable traceability of AI functioning. Logs must be retained for appropriate periods and accessible for audits. Critical for insurance: document every underwriting and pricing decision.
Transparency and Information
Instructions for use enabling deployers to understand system capabilities, limitations, and appropriate use. Clear information about AI involvement in decisions affecting individuals.
Human Oversight
Design systems for effective human oversight. Enable human intervention, including ability to override or reverse AI decisions. Prevent automation bias. Insurance: human review of adverse underwriting decisions.
Accuracy, Robustness, Cybersecurity
Achieve appropriate levels of accuracy, robustness, and cybersecurity. Resilient against errors, faults, and attempts at manipulation. Performance consistent across relevant conditions.
Article 12 Logging Requirements: GLACIS Core Relevance
Article 12 logging requirements are particularly critical for insurance AI and represent a core area where GLACIS provides value. The regulation requires:
Article 12 Logging Requirements
- Automatic recording of events relevant to identifying situations that may result in risks
- Traceability of AI system functioning throughout its lifecycle
- Input data or references to input data used for decisions
- Identification of natural persons involved in result verification
- Retention for periods appropriate to intended purpose and applicable law
Insurance implications: Every underwriting decision, premium calculation, claims assessment, and policy action driven by AI must be logged with inputs, outputs, model version, and human reviewer identification. Logs must support regulatory audits, customer disputes, and discrimination investigations.
This is where many insurers struggle. Traditional policy administration systems weren’t designed for AI decision logging. Manual compliance documentation creates audit gaps. GLACIS provides automated, cryptographic evidence generation that satisfies Article 12 requirements with tamper-evident logging.
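One common way to make decision logs tamper-evident is a hash chain, where each record commits to the hash of the previous one, so any retroactive edit breaks every later hash. The sketch below illustrates that general technique only; the field names are assumptions for the example and are not GLACIS’s actual schema or API.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(chain: list, *, inputs_ref: str, output: str,
                 model_version: str, reviewer: str) -> dict:
    """Append a hash-chained record covering the Article 12-style fields:
    inputs (by reference), output, model version, and reviewing person."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs_ref": inputs_ref,      # reference to input data, not the data itself
        "output": output,
        "model_version": model_version,
        "reviewer": reviewer,          # natural person verifying the result
        "prev_hash": prev_hash,        # commits to the previous entry
    }
    record["entry_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def chain_is_intact(chain: list) -> bool:
    """Recompute every hash; any edited or reordered entry fails the check."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

An auditor can then verify the whole log by recomputing the chain, without trusting the system that produced it.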
Fairness and Discrimination Requirements
Insurance AI faces heightened scrutiny for discriminatory outcomes. The EU AI Act addresses this through multiple provisions:
Article 10: Data Governance
Training data must be "examined in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination." For insurance, this means:
- Historical underwriting data may embed discriminatory patterns
- Proxy variables (ZIP code, occupation) may correlate with protected characteristics
- Insurers must document bias testing and mitigation measures
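A minimal sketch of one bias test an insurer might run on underwriting outcomes is a disparate impact ratio across demographic groups. The AI Act does not prescribe a specific statistical test; the 0.8 threshold below is the "four-fifths" heuristic borrowed from US employment practice, and the group labels and figures are hypothetical.

```python
def disparate_impact_ratio(approvals_by_group: dict) -> float:
    """Ratio of the lowest group approval rate to the highest (1.0 = parity)."""
    rates = {
        group: approved / total
        for group, (approved, total) in approvals_by_group.items()
    }
    return min(rates.values()) / max(rates.values())

# Hypothetical underwriting outcomes: (approved applications, total applications)
outcomes = {
    "group_a": (820, 1000),
    "group_b": (610, 1000),
}

ratio = disparate_impact_ratio(outcomes)
flagged = ratio < 0.8   # four-fifths heuristic: flag for investigation, not a legal verdict
```

A flagged ratio is a trigger for deeper investigation (including proxy-variable analysis), not a conclusion that the system discriminates; documenting the test and its outcome is what Article 10 requires.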
Intersection with Existing Law
The AI Act supplements, doesn’t replace, existing anti-discrimination frameworks:
- Gender Directive (2004/113/EC): Prohibits gender-based pricing in insurance (post-2012)
- Racial Equality Directive: Prohibits discrimination based on race or ethnic origin
- GDPR Article 22: Rights related to automated decision-making
US Regulatory Comparison
Unlike the EU’s comprehensive approach, US insurance AI regulation is fragmented across state insurance commissioners and lacks federal AI-specific legislation.
NAIC Model Bulletin (2023)
The National Association of Insurance Commissioners issued a Model Bulletin on the Use of Artificial Intelligence Systems by Insurers in December 2023. It provides guidance but isn’t binding law:
- Insurers must establish AI governance frameworks
- AI outcomes must comply with existing unfair discrimination laws
- Transparency requirements for AI-driven decisions
- Human oversight of AI systems
Colorado SB21-169
Colorado’s law is the most comprehensive US state regulation, requiring insurers to:
- Test AI systems for unfair discrimination before deployment
- Document testing methodologies and results
- Submit governance reports to the Division of Insurance
Key Differences: EU vs. US
| Aspect | EU AI Act | US (State-Level) |
|---|---|---|
| Approach | Process-based requirements | Outcome-based (no unfair discrimination) |
| Scope | Life and health explicitly high-risk | All lines, varying by state |
| Enforcement | Harmonized EU framework, national authorities (penalties up to EUR 15M) | State commissioners, varying penalties |
| Documentation | Prescriptive (Annex IV) | General governance requirements |
| Conformity | Pre-market assessment required | Post-deployment oversight |
Implementation Checklist
For insurers with high-risk AI systems, use this checklist to track compliance progress toward the August 2, 2026 deadline:
High-Risk Insurance AI Compliance Checklist
Phase 1: Assessment (Months 1-2)
- Inventory all AI systems used in underwriting, pricing, and claims
- Classify each system against Annex III criteria
- Document intended purpose and deployment context
- Identify affected natural persons (EU residents)
Phase 2: Gap Analysis (Months 2-3)
- Assess current risk management processes against Article 9
- Evaluate data governance and bias testing (Article 10)
- Audit existing logging capabilities against Article 12
- Review human oversight mechanisms (Article 14)
Phase 3: Implementation (Months 3-9)
- Implement continuous risk management system
- Deploy automated logging with tamper-evident records
- Prepare technical documentation per Annex IV
- Establish human oversight workflows for adverse decisions
- Conduct bias testing and document results
Phase 4: Conformity (Months 9-12)
- Complete internal conformity assessment
- Prepare EU declaration of conformity
- Establish post-market monitoring procedures
- Train staff on compliance requirements
Frequently Asked Questions
Is insurance underwriting AI high-risk under the EU AI Act?
Yes, for life and health insurance. The EU AI Act Annex III explicitly classifies AI systems used for "risk assessment and pricing in relation to natural persons in the case of life and health insurance" as high-risk. Property, casualty, and commercial insurance AI may not be high-risk unless they materially affect access to essential services.
What makes insurance AI high-risk under Annex III?
Insurance AI is high-risk when it affects "access to and enjoyment of essential private services" per Annex III, point 5. Specifically, AI used for risk assessment and pricing of life and health insurance for natural persons is explicitly listed in point 5(c). The key factors are: individual (not commercial) insurance, life or health coverage, and AI involvement in pricing or underwriting decisions.
Is property and casualty insurance AI high-risk?
Not explicitly. The EU AI Act specifically names life and health insurance in Annex III. Property, casualty, auto, and commercial lines are not explicitly listed. However, if AI in these lines materially affects individuals’ access to essential services (e.g., denying homeowners insurance in ways that prevent home purchases), regulators may argue it falls within the spirit of Annex III.
What logging requirements apply to high-risk insurance AI?
Article 12 requires high-risk AI systems to have automatic logging capabilities that record: events during operation, input data or references to it, identification of natural persons involved in verification, and timestamps. Logs must enable traceability of AI decisions throughout the system’s lifecycle and be retained appropriately for audits and investigations.
How does the EU AI Act compare to US insurance AI regulation?
The US lacks federal AI regulation for insurance. Instead, state insurance commissioners regulate AI through existing unfair discrimination laws and the NAIC Model Bulletin on AI (2023). Colorado’s SB21-169 is the most comprehensive state law, requiring insurers to test AI for unfair discrimination. Unlike the EU AI Act’s prescriptive requirements, US regulation focuses on outcomes (no unfair discrimination) rather than process.
When must insurance companies comply with EU AI Act high-risk requirements?
High-risk AI systems must achieve full compliance by August 2, 2026. This includes implementing risk management systems (Article 9), data governance (Article 10), technical documentation (Article 11), logging (Article 12), transparency (Article 13), human oversight (Article 14), and accuracy/robustness measures (Article 15). Organizations should begin compliance work immediately given the 6-12 month implementation timeline.
What are the penalties for non-compliant insurance AI?
Penalties for non-compliance with high-risk AI requirements reach up to EUR 15 million or 3% of global annual turnover, whichever is higher. For insurers, this could be substantial. Additionally, non-compliant AI systems cannot be placed on the EU market, potentially disrupting business operations across EU member states.