Compliance Guide • Updated December 2025

EU AI Act Compliance Guide

Complete guide to Regulation 2024/1689. Risk categories, compliance timelines, conformity requirements, and implementation roadmap.

Joe Braidwood · CEO, GLACIS · 26 min read · 6,500+ words

Executive Summary

The EU AI Act (Regulation 2024/1689) entered into force August 1, 2024, establishing the world's first comprehensive regulatory framework for artificial intelligence. The regulation classifies AI systems into four risk categories with escalating requirements—from outright bans to transparency obligations—and imposes penalties reaching €35 million or 7% of global annual turnover, whichever is higher.[1]

The regulation's phased implementation began with prohibited AI practices, banned since February 2, 2025. General Purpose AI (GPAI) models have been subject to compliance obligations since August 2, 2025. High-risk AI systems must achieve full conformity by August 2, 2026—eight months from now. Medical AI devices have extended timelines through August 2027, though ~75% still require third-party notified body assessments costing €10,000-€100,000.[2][3]

Key finding: Organizations deploying AI in the EU or serving EU customers must act immediately. Unlike GDPR's grace period, the AI Act's 24-month high-risk timeline is already roughly two-thirds elapsed. Building conformity infrastructure—risk management systems, technical documentation, quality management, logging capabilities—requires 6-12 months minimum.

At a glance: €35M maximum fine · August 2025 GPAI deadline · 4 risk categories · 27 EU member states.[1]

What is the EU AI Act?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework regulating artificial intelligence systems. Adopted by the European Parliament on March 13, 2024, and entering into force August 1, 2024, it establishes harmonized rules for AI development, deployment, and use across all 27 EU member states.[1]

History and Legislative Process

The European Commission proposed the AI Act on April 21, 2021, as part of its digital strategy. After nearly three years of negotiations between the Commission, Parliament, and Council, culminating in the trilogue process, political agreement was reached December 9, 2023. The final text passed with 523 votes in favor, 46 against, and 49 abstentions.[4]

The regulation was published in the Official Journal of the European Union (EUR-Lex) on July 12, 2024, as Regulation (EU) 2024/1689, comprising 113 articles, 13 annexes, and 180 recitals spanning 144 pages.[1]

Scope and Applicability

The AI Act applies to providers placing AI systems on the EU market or putting them into service in the EU, deployers of AI systems established in the EU, and importers and distributors. Like GDPR, it has extraterritorial reach: providers and deployers located outside the EU are covered where the output produced by the AI system is used in the EU.

The regulation defines an "AI system" per Article 3(1) as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."[1]

Key Objectives

The regulation balances promoting AI innovation with protecting fundamental rights, health, safety, and democratic values. Article 1 frames its objectives as improving the functioning of the internal market, promoting the uptake of human-centric and trustworthy AI, and ensuring a high level of protection of health, safety, and fundamental rights against the harmful effects of AI systems, while supporting innovation.

Risk Categories

The AI Act employs a risk-based approach, classifying AI systems into four tiers with escalating regulatory requirements based on potential harm to health, safety, and fundamental rights.

Prohibited AI Systems

Article 5 — Banned outright due to unacceptable risks to fundamental rights and human dignity. Effective February 2, 2025.

Examples:

  • Social scoring systems evaluating or classifying people based on behavior, socio-economic status, or personal characteristics
  • Untargeted scraping of facial images from internet or CCTV for facial recognition databases
  • Emotion recognition in workplace and educational settings (with limited exceptions)
  • Manipulative AI exploiting vulnerabilities of age, disability, or socio-economic circumstances
  • Real-time remote biometric identification in public spaces by law enforcement (narrow exceptions)

High-Risk AI Systems

Articles 6-7, Annex III — Pose significant risks to health, safety, or fundamental rights. Subject to strict requirements before and during market placement. Full compliance by August 2, 2026.

Eight Categories (Annex III):

  • Biometric identification and categorization: Real-time/post remote biometric ID, emotion recognition (limited contexts)
  • Critical infrastructure: Safety components in road traffic, water, gas, heating, electricity management
  • Education and training: Determining educational institution access, evaluation of learning outcomes, exam supervision
  • Employment: Recruitment, task allocation, promotion decisions, performance monitoring, termination decisions
  • Essential services: Creditworthiness assessment, insurance pricing/underwriting, emergency dispatch prioritization
  • Law enforcement: Individual risk assessment, polygraphs, emotion detection, deep fake detection, evidence evaluation
  • Migration and asylum: Application examination, risk assessment, verification of authenticity of travel documents
  • Justice and democratic processes: Assisting judicial authorities in researching and interpreting facts and law

Limited Risk AI Systems

Article 50 — Specific transparency obligations to ensure users are aware they are interacting with AI. Minimal regulatory burden.

Examples:

  • Chatbots and conversational agents (must disclose they are AI)
  • Emotion recognition systems (limited contexts, must inform users)
  • Biometric categorization systems (must inform data subjects)
  • Deep fakes and AI-generated content (must be labeled as synthetic)

Minimal Risk AI Systems

No regulatory obligations beyond existing product safety and liability rules. Voluntary codes of conduct encouraged (Article 95).

Examples:

  • AI-enabled video games and spam filters
  • Inventory management and process optimization systems
  • AI-powered recommendation engines (non-manipulative)
  • Most enterprise productivity and automation tools

Timeline & Deadlines

The AI Act implements a staggered enforcement timeline, prioritizing the most harmful applications while allowing more time for complex systems. Critical deadlines are approaching rapidly.

EU AI Act Implementation Timeline

| Date | Milestone | Requirements | Status (as of December 2025) |
| Aug 1, 2024 | Entry into force | Regulation published and legally effective | Complete |
| Feb 2, 2025 | Prohibited AI ban | Article 5 prohibitions enforceable (6 months after entry) | In force |
| Aug 2, 2025 | GPAI compliance | Obligations for general-purpose AI models (12 months) | In force |
| Aug 2, 2026 | High-risk systems | Full compliance for high-risk AI (24 months) | ~8 months away |
| Aug 2, 2027 | Medical AI extension | High-risk AI as safety components of medical devices (36 months) | ~20 months away |

Critical Timeline Warning

The August 2, 2026 high-risk deadline is roughly eight months away. Organizations with high-risk AI systems should already be implementing risk management frameworks, quality management systems, and technical documentation processes. Conformity assessments via notified bodies take 3-12 months and cost €10,000-€100,000. Starting in 2026 will be too late.

Requirements by Category

Prohibited AI Systems (Article 5)

Prohibited practices became illegal February 2, 2025. Organizations must immediately cease any practice listed in Article 5, including social scoring, untargeted scraping of facial images, emotion recognition in workplaces and schools, manipulative systems exploiting vulnerabilities, and real-time remote biometric identification in public spaces (see Risk Categories above).

Penalty: Up to €35 million or 7% of total worldwide annual turnover, whichever is higher (Article 99).[1]

High-Risk AI Systems (Articles 8-15)

High-risk AI systems face comprehensive requirements across the entire lifecycle. Providers must implement:

Article 9: Risk Management System

Continuous iterative process throughout the AI system lifecycle comprising:

  • Identification and analysis of known and foreseeable risks
  • Estimation and evaluation of risks that may emerge during use
  • Evaluation of other risks that may arise, based on analysis of post-market monitoring data
  • Adoption of suitable risk management measures

Article 10: Data and Data Governance

Training, validation, and testing datasets must be subject to appropriate data governance and management practices:

  • Relevant, sufficiently representative, and free of errors
  • Consideration of geographic, contextual, behavioral, or functional settings
  • Examination for possible biases and mitigation where appropriate
  • Completeness considering intended purpose and reasonably foreseeable misuse
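
As one concrete way to operationalize the representativeness and bias examination above, a minimal per-subgroup profiling sketch. The column names and the pandas-based approach are assumptions for illustration; Article 10 does not prescribe any particular tooling or schema.

```python
import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Per-subgroup sample share and positive-label rate, as one input to the
    Article 10 examination for representativeness and possible bias."""
    total = len(df)
    out = df.groupby(group_col)[label_col].agg(n="count", positive_rate="mean").reset_index()
    out["share"] = out["n"] / total
    return out

# Toy training set; real column names depend on the dataset in question
train = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 0],
})
print(subgroup_report(train, "age_band", "approved"))
```

Large gaps in subgroup share or positive-label rate do not prove bias on their own, but they flag where deeper examination and mitigation may be needed.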

Article 11: Technical Documentation

Documentation prepared before placing on market and kept up to date, including:

  • General description of the AI system (intended purpose, developer, version)
  • Detailed description of system elements and development process
  • Detailed information about monitoring, functioning, and control
  • Risk management system description per Article 9
  • Validation and testing procedures, results, and reports

Article 12: Record-Keeping (Logging)

Automatic recording of events (logs) throughout the AI system operation:

  • Logging capabilities ensuring traceability throughout the system lifecycle
  • Logging level appropriate to intended purpose of high-risk system
  • Records including the period of each use, the reference database consulted, the input data, and the persons involved in verifying results
  • Logs protected by appropriate security measures and retained for period appropriate to purpose

Article 13: Transparency and Information to Deployers

High-risk systems must be designed with sufficient transparency to enable deployers to:

  • Interpret system output and use it appropriately
  • Understand system capabilities and limitations
  • Receive instructions for use in an appropriate digital or non-digital format
  • Receive information on human oversight measures per Article 14

Article 14: Human Oversight

High-risk systems shall be designed to enable effective oversight by natural persons:

  • Fully understand capacities and limitations and monitor operation
  • Remain aware of possible tendency to automatically rely on output (automation bias)
  • Correctly interpret system output considering system characteristics
  • Decide to not use the system or override output in any particular situation
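
One common way to implement these oversight properties is to route low-confidence or adverse outputs to a human reviewer who can confirm, override, or decline to use the system's output. A minimal sketch, assuming an illustrative routing rule that is not a legal test:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    model_output: str      # e.g. "reject"
    confidence: float

def apply_with_oversight(decision: Decision,
                         human_review: Callable[[Decision], str],
                         confidence_floor: float = 0.90) -> str:
    """Adverse or low-confidence outputs go to a human, who may confirm,
    override, or decide not to use the system's output at all."""
    if decision.model_output == "reject" or decision.confidence < confidence_floor:
        return human_review(decision)   # human can override the model
    return decision.model_output        # auto-apply only clear, benign cases

result = apply_with_oversight(
    Decision(subject_id="loan-123", model_output="reject", confidence=0.97),
    human_review=lambda d: "approve_after_manual_review",
)
print(result)
```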

Article 15: Accuracy, Robustness, and Cybersecurity

High-risk systems must achieve appropriate levels of:

  • Accuracy: Ability to provide correct output or actions
  • Robustness: Reliability against errors, faults, inconsistencies, and unexpected situations
  • Cybersecurity: Resilience against attempts to alter use, behavior, or performance through exploitation
  • Technical solutions to address AI-specific vulnerabilities including data poisoning and model evasion

Article 17: Quality Management System

Providers of high-risk systems must establish and maintain a documented quality management system ensuring:

  • Compliance strategy for regulatory requirements
  • Design, control, and quality assurance techniques and procedures
  • Post-market monitoring, reporting, and corrective action procedures
  • Examination, test, and validation procedures at design and throughout development

Penalty for high-risk non-compliance: Up to €15 million or 3% of total worldwide annual turnover (Article 99).[1]

Limited Risk AI Systems (Article 50)

Limited-risk systems face only the transparency obligations described above: disclosing that users are interacting with AI, informing individuals when emotion recognition or biometric categorization is used, and labeling deep fakes and other AI-generated content as synthetic.

Penalty: Non-compliance with Article 50 transparency obligations is subject to fines of up to €15 million or 3% of total worldwide annual turnover (Article 99(4)).[1]

High-Risk AI Systems: Detailed Analysis

Classification Criteria (Article 6)

An AI system is considered high-risk if:

  1. The AI system is intended to be used as a safety component of a product covered by EU harmonization legislation (Annex I), OR
  2. The AI system itself is a product covered by EU harmonization legislation (Annex I) and requires third-party conformity assessment, OR
  3. The AI system falls under one of the eight high-risk use cases listed in Annex III

Annex III High-Risk Use Cases (Expanded)

High-Risk AI Categories (Annex III)

| Category | Specific use cases | Examples |
| 1. Biometrics | Remote biometric identification (real-time/post), biometric categorization | Airport facial recognition, emotion detection at borders |
| 2. Critical infrastructure | Safety components managing road traffic, water, gas, heating, electricity | Traffic signal AI, power grid management systems |
| 3. Education | Determining access, assessing students, detecting cheating | Automated admissions, AI exam proctoring, grading systems |
| 4. Employment | Recruitment, promotion, task allocation, monitoring, termination | Resume screening AI, performance monitoring, layoff decisions |
| 5. Essential services | Creditworthiness, insurance pricing, emergency dispatch | Loan approval AI, health insurance underwriting |
| 6. Law enforcement | Risk assessment, polygraphs, emotion detection, evidence evaluation | Recidivism prediction, crime forecasting, lie detection |
| 7. Migration/asylum | Examination of applications, risk assessment, travel document verification | Automated visa screening, asylum claim evaluation |
| 8. Justice | Assisting judicial authorities in researching/interpreting facts and law | Legal research AI, case outcome prediction |
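
Teams building an internal triage workflow often encode the Article 6 tests and Annex III categories above as a simple decision helper. A minimal sketch; the field names and category labels are shorthand assumptions, not official terminology, and the output is a starting point for legal review rather than a determination:

```python
from dataclasses import dataclass
from enum import Enum

# Shorthand labels for the eight Annex III categories listed above
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum", "justice",
}

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    annex_iii_category: str | None = None         # e.g. "employment"
    safety_component_annex_i: bool = False        # safety component of an Annex I product
    annex_i_third_party_assessment: bool = False  # Annex I product needing third-party assessment
    article_5_practice: bool = False              # falls under an Article 5 prohibition
    user_facing_or_generative: bool = False       # chatbot / synthetic content -> Article 50

def classify(system: AISystem) -> RiskTier:
    """Rough Article 6 / Annex III decision logic; legal review is still required."""
    if system.article_5_practice:
        return RiskTier.PROHIBITED
    if (system.safety_component_annex_i
            or system.annex_i_third_party_assessment
            or system.annex_iii_category in ANNEX_III_CATEGORIES):
        return RiskTier.HIGH
    if system.user_facing_or_generative:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(AISystem(name="resume screener", annex_iii_category="employment")))
# RiskTier.HIGH
```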

Conformity Assessment Procedures (Articles 43-44)

Before placing high-risk AI systems on the market, providers must undergo conformity assessment to demonstrate compliance. Two pathways exist:

Internal Control (Article 43)

Provider conducts self-assessment based on:

  • Technical documentation (Annex IV)
  • Quality management system implementation
  • Post-market monitoring plan
  • Drawing up EU declaration of conformity

Available for most high-risk systems

Notified Body Assessment (Article 43)

Third-party assessment required for:

  • Biometric identification systems
  • AI systems covered by other EU regulations requiring notified body involvement
  • Medical AI devices (approximately 75% of AI medical devices)

Cost: €10,000-€100,000 | Timeline: 3-12 months[2][3]
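
A rough sketch of the pathway decision described above. The trigger conditions are simplified assumptions; actual routing also depends on whether harmonised standards were applied and on the sectoral legislation involved.

```python
def conformity_pathway(is_biometric_system: bool,
                       other_eu_regulation_requires_notified_body: bool,
                       is_medical_device: bool) -> str:
    """Illustrative routing between the two Article 43 pathways."""
    if (is_biometric_system
            or other_eu_regulation_requires_notified_body
            or is_medical_device):
        return "notified_body_assessment"   # budget EUR 10k-100k and 3-12 months
    return "internal_control"               # self-assessment + EU declaration of conformity

print(conformity_pathway(is_biometric_system=False,
                         other_eu_regulation_requires_notified_body=False,
                         is_medical_device=True))
# notified_body_assessment
```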

General Purpose AI (GPAI) Requirements

The AI Act introduces specific obligations for general-purpose AI models—systems trained on large datasets capable of serving a wide range of tasks. These provisions target foundation model providers like OpenAI, Anthropic, Google, and Meta.

Definition and Classification (Article 3)

A general-purpose AI model is defined as an AI model "trained on large amounts of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market."[1]

GPAI models are further classified into two tiers based on compute thresholds:

Standard GPAI Models

Article 53: Models that do not meet the systemic risk threshold (compute < 10^25 FLOPs). Subject to baseline transparency requirements.

Requirements:

  • Technical documentation per Annex XI (architecture, training data, compute resources)
  • Information and documentation to downstream providers to enable compliance
  • Policy to comply with EU copyright law, including identifying and respecting rights reservations (opt-outs)
  • Publicly available, sufficiently detailed summary of the content used for training

GPAI Models with Systemic Risk

Article 55: Models with high impact capabilities (compute ≥ 10^25 FLOPs or equivalent) posing systemic risks due to reach and capabilities. Enhanced obligations.

Additional Requirements:

  • Model evaluation per standardized protocols including adversarial testing
  • Assessment and mitigation of systemic risks (including cybersecurity threats)
  • Tracking, documenting, and reporting serious incidents to AI Office and national authorities
  • Ensuring adequate cybersecurity protection for model and physical infrastructure

Examples: GPT-5.2, Claude Opus 4.5, Gemini 3 Pro, Llama 4 likely meet threshold
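
The compute presumption is straightforward to check mechanically. A minimal sketch; note that the AI Office can also designate a model as systemic-risk on criteria other than training compute.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # training-compute presumption threshold cited above

def gpai_tier(cumulative_training_flops: float) -> str:
    """Classify a general-purpose AI model by the compute presumption only."""
    if cumulative_training_flops >= SYSTEMIC_RISK_FLOPS:
        return "systemic_risk"   # Article 55 obligations apply in addition to Article 53
    return "standard"            # Article 53 baseline transparency obligations

print(gpai_tier(3.8e25))  # systemic_risk
print(gpai_tier(5.0e23))  # standard
```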

Compliance Deadline

GPAI model obligations became enforceable August 2, 2025 and are now in force. Providers must maintain technical documentation, implement copyright policies, and, for systemic-risk models, operate evaluation and incident reporting procedures.[1]

Penalties & Enforcement

The AI Act establishes one of the most stringent penalty regimes in technology regulation, mirroring GDPR's structure with fines tied to global annual turnover.

Penalty Tiers (Article 99)

EU AI Act Penalty Structure

| Violation type | Maximum fine | Relevant provisions |
| Prohibited AI practices | €35M or 7% of global turnover | Article 5 |
| Non-compliance with high-risk requirements | €15M or 3% of global turnover | Articles 8-15, 17, 26 |
| Non-compliance with GPAI obligations | €15M or 3% of global turnover | Articles 53, 55 |
| Non-compliance with transparency obligations | €15M or 3% of global turnover | Article 50 (limited-risk AI) |
| Supplying incorrect, incomplete, or misleading information to authorities | €7.5M or 1% of global turnover | Article 99(5) |

Important: "Global annual turnover" means worldwide revenue for the preceding financial year. For multinational corporations, 7% could reach billions of euros. Whichever amount is higher applies—meaning even startups face €35M maximum fines for prohibited AI practices.[1]

Enforcement Structure

The AI Act establishes a multi-layered enforcement architecture:

EU AI Office (Article 64)

Central coordination body within the European Commission responsible for GPAI model oversight, implementing acts, and cross-border enforcement coordination. Exclusive competence over systemic-risk GPAI models.

National Competent Authorities (Article 70)

Each member state must designate at least one authority to enforce the AI Act within its territory. National authorities have investigatory powers including access to training data, source code, and algorithms. May impose penalties per Article 99.

Notified Bodies (Articles 31-39)

Independent third-party conformity assessment bodies designated by member states to conduct assessments of high-risk AI systems requiring external certification (e.g., biometric systems, medical devices). Must be accredited per ISO 17065.

European AI Board (Article 65)

Expert group consisting of national authorities promoting consistent application across member states, advising the Commission, and contributing to international AI governance cooperation.

Market Surveillance Powers (Article 74)

National authorities have extensive investigatory powers, including access to training data, documentation, and source code where necessary to assess conformity, the authority to require corrective actions, and the power to restrict, withdraw, or recall non-compliant AI systems from the market.

Compliance Roadmap

Organizations should implement a phased approach aligned with regulatory deadlines and system risk classification. The August 2026 high-risk deadline leaves minimal margin for delay.

GLACIS Framework

EU AI Act Compliance Roadmap

1. AI System Inventory & Risk Classification (Month 1)

Catalog all AI systems across the organization. Classify each system per AI Act risk categories (prohibited, high-risk, limited-risk, minimal-risk) using Annex III criteria. Identify systems requiring immediate action (prohibited) vs. August 2026 deadline (high-risk). Document intended purpose, deployment context, and affected populations.
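
In practice the inventory is simply a structured record per system. The fields below are illustrative assumptions about what such a record might capture; the Act does not prescribe a schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row of the AI inventory; field names are illustrative only."""
    system_id: str
    intended_purpose: str
    deployment_context: str          # e.g. "EU-wide HR recruitment"
    affected_populations: list[str]
    risk_tier: str                   # prohibited / high / limited / minimal
    annex_iii_category: str | None
    role_under_act: str              # provider or deployer
    compliance_deadline: date
    owner: str                       # accountable business owner

inventory = [
    AISystemRecord(
        system_id="HR-001",
        intended_purpose="CV screening and shortlist ranking",
        deployment_context="EU-wide HR recruitment",
        affected_populations=["job applicants"],
        risk_tier="high",
        annex_iii_category="employment",
        role_under_act="deployer",
        compliance_deadline=date(2026, 8, 2),
        owner="Head of Talent Acquisition",
    ),
]
```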

2. High-Risk System Prioritization (Months 1-2)

For high-risk systems, assess current state against Articles 9-15 requirements. Identify gaps in risk management, data governance, logging, transparency, human oversight, and cybersecurity. Prioritize systems by business criticality and compliance gap severity. Determine which systems require notified body assessment vs. internal control.

3. Risk Management System Implementation (Months 2-4)

Establish continuous risk management per Article 9. Implement processes for identifying foreseeable risks, estimating harm likelihood and severity, evaluating post-market monitoring findings, and adopting mitigation measures. Document risk management activities per Annex IV technical documentation requirements. Integrate with existing ISO 42001 or NIST AI RMF frameworks where implemented.
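
A minimal risk-register sketch for the likelihood-and-severity estimation mentioned above. The 1-5 scales and the action threshold are assumptions for illustration, not values taken from the Act.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """Illustrative Article 9 risk-register entry."""
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    severity: int     # 1 (negligible) .. 5 (critical harm to health/safety/rights)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

    def requires_action(self, threshold: int = 10) -> bool:
        return self.score >= threshold

r = Risk(
    description="Model under-predicts creditworthiness for a protected group",
    likelihood=3,
    severity=4,
    mitigation="Subgroup performance testing each release; human review of declines",
)
print(r.score, r.requires_action())  # 12 True
```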

4. Technical Documentation & Logging (Months 3-6)

Prepare technical documentation per Annex IV covering system description, development process, data governance, monitoring procedures, validation results, and risk management. Implement automated logging capabilities per Article 12 ensuring traceability of inputs, outputs, and decisions. Ensure logs are tamper-evident and retained appropriately. Generate evidence that controls execute—not just policies documenting intent.
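
One simple way to make Article 12-style records tamper-evident is to hash-chain each entry to its predecessor. A minimal sketch; the event fields are placeholder assumptions rather than a prescribed schema, and a production system would add signing, access controls, and durable storage.

```python
import hashlib
import json
import time

def append_log(path: str, event: dict) -> None:
    """Append a decision event whose hash chains to the previous entry,
    so any later modification of earlier records breaks the chain."""
    prev_hash = "0" * 64
    try:
        with open(path, "rb") as f:
            last_line = f.read().splitlines()[-1]
            prev_hash = json.loads(last_line)["entry_hash"]
    except (FileNotFoundError, IndexError):
        pass  # first entry in a new log file
    entry = {"ts": time.time(), "prev_hash": prev_hash, **event}
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

append_log("decisions.log", {
    "system_id": "HR-001",
    "input_ref": "application-4821",
    "output": "shortlisted",
    "model_version": "2.3.1",
    "reviewer": "jane.doe",
})
```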

5. Quality Management System & Conformity Assessment (Months 4-9)

Establish quality management system per Article 17 covering compliance strategy, design controls, post-market monitoring, and corrective actions. For systems requiring notified body assessment, initiate engagement 6-9 months before August 2026 deadline (assessments take 3-12 months). For internal control pathway, prepare EU declaration of conformity and affix CE marking.

6. Post-Market Monitoring & Continuous Compliance (Ongoing)

Implement post-market monitoring systems tracking performance, incidents, and user feedback. Establish serious incident reporting procedures per Article 73. Maintain technical documentation and update as systems evolve. Conduct periodic reviews ensuring ongoing compliance with Articles 9-15. Prepare for market surveillance authority inspections and information requests per Article 74.
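
A trivial helper for tracking the Article 73 reporting window. The 15-day figure is the general window cited in the FAQ below; shorter windows apply to certain incident types, so the default parameter is an assumption to be adjusted per incident.

```python
from datetime import date, timedelta

GENERAL_WINDOW_DAYS = 15  # general Article 73 window; some incident types are shorter

def report_due(awareness_date: date, window_days: int = GENERAL_WINDOW_DAYS) -> date:
    """Date by which the serious-incident report must reach the national authority."""
    return awareness_date + timedelta(days=window_days)

print(report_due(date(2026, 9, 1)))  # 2026-09-16
```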

Critical insight: Organizations waiting until 2026 will face notified body capacity constraints, rushed implementations prone to defects, and potential enforcement actions for non-compliance. Start now—the deadline is closer than it appears.

GPAI Model Provider Roadmap

The August 2, 2025 deadline for foundation model providers has passed, and GPAI obligations are now enforceable. Providers that are not yet compliant should treat the following as immediate priorities:

All GPAI Models (Article 53)

  • Prepare technical documentation (Annex XI)
  • Document training data sources and compute
  • Publish copyright policy and training data summary
  • Provide downstream compliance documentation

Systemic-Risk GPAI (Article 55)

  • Conduct model evaluation with adversarial testing
  • Assess and document systemic risks
  • Implement incident tracking and reporting
  • Establish cybersecurity protections

Frequently Asked Questions

Does the EU AI Act apply to US companies?

Yes. The AI Act has extraterritorial reach similar to GDPR. It applies to providers placing AI systems on the EU market or putting them into service in the EU, regardless of the provider's location. It also applies where AI output is used in the EU, even if the provider and deployer are both located outside the EU. US companies serving EU customers or processing EU data must comply.

How do I know if my AI system is high-risk?

Check if your system falls under Annex III categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/asylum, or justice. Also check if it's a safety component of a product covered by Annex I harmonization legislation (medical devices, machinery, etc.). If uncertain, document your risk assessment rationale—regulators may disagree with your classification.

What is a notified body and when do I need one?

Notified bodies are independent third-party organizations designated by member states to conduct conformity assessments. You need one if your high-risk AI system is: (1) a biometric identification or categorization system, or (2) a product covered by EU harmonization legislation requiring third-party assessment (e.g., most medical devices). Notified body assessments cost €10,000-€100,000 and take 3-12 months.

Can I use ChatGPT or Claude in my high-risk AI system?

Yes, but you bear compliance responsibility as the deployer. General-purpose AI models (like GPT-5.2, Claude Opus 4.5) are subject to GPAI obligations (Articles 53-55), but if you integrate them into a high-risk use case (e.g., employment decisions, creditworthiness), you become the "provider" of the high-risk system and must ensure full compliance with Articles 8-15 including risk management, logging, human oversight, and conformity assessment.

How does the EU AI Act interact with GDPR?

The AI Act and GDPR are complementary. GDPR governs personal data processing; the AI Act governs AI systems. AI systems processing personal data must comply with both. Key overlaps: data governance (Article 10 AI Act, Articles 5-6 GDPR), transparency (Article 13 AI Act, Articles 13-14 GDPR), and automated decision-making (Article 14 AI Act, Article 22 GDPR). Non-compliance can trigger penalties under both regulations.

What should I do if my AI system causes harm after August 2026?

Report serious incidents to national competent authorities within 15 days per Article 73. A serious incident is any incident leading to death, serious health damage, serious fundamental rights disruption, or serious property/environmental damage. Implement corrective actions, update risk management documentation, and notify affected deployers. Failure to report can result in penalties. Incident response planning should be part of your quality management system (Article 17).

References

[1] European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, July 12, 2024. EUR-Lex 32024R1689.
[2] European Commission. "Questions and Answers: Artificial Intelligence Act." March 13, 2024. europa.eu.
[3] European Parliament. "EU AI Act: First Regulation on Artificial Intelligence." News release, March 13, 2024. europarl.europa.eu.
[4] European Parliament. "Artificial Intelligence Act: MEPs Adopt Landmark Law." Press release, March 13, 2024. europarl.europa.eu.
[5] European AI Office. "AI Office Governance Structure." European Commission, 2024. ec.europa.eu.
[6] NIST. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." January 2023. nist.gov.
[7] ISO/IEC. "ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management System." December 2023. iso.org.
[8] European Commission. "Annexes to Regulation (EU) 2024/1689." EUR-Lex, July 12, 2024.
[9] Future of Life Institute. "EU Artificial Intelligence Act: Analysis and Recommendations." Policy report, 2024. futureoflife.org.
[10] Stanford HAI. "AI Index Report 2025." Stanford Human-Centered AI, March 2025. hai.stanford.edu.
[11] European Commission. "EU AI Act: Implementation Timeline and Milestones." Digital Strategy Portal, 2024. ec.europa.eu.
[12] Holistic AI. "EU AI Act Compliance Checklist." Regulatory guidance, 2024. holisticai.com.

EU AI Act Compliance in Days, Not Months

GLACIS generates cryptographic evidence that your AI controls execute correctly—mapped to EU AI Act Articles 9-15, ISO 42001, and NIST AI RMF. Get audit-ready documentation before the August 2026 deadline.
