🇫🇷 France Implementation Guide • December 2025

EU AI Act France Implementation Guide

National competent authorities, CNIL oversight, Article 12 logging requirements, and sector-specific compliance for Europe’s second-largest economy.

18 min read • 3,200+ words
Joe Braidwood
CEO, GLACIS

Executive Summary

France, Europe’s second-largest economy with a rapidly growing AI sector projected to reach €77.68 billion by 2032, is implementing the EU AI Act through a multi-authority governance model. The DGCCRF serves as the coordinating authority, while the CNIL oversees fundamental rights and personal data processing, and sectoral regulators including ACPR (financial services) and ARCOM (media) handle domain-specific conformity assessments.

In February 2025, France established INESIA (Institut National d’Évaluation et de Sécurité de l’Intelligence Artificielle) as its AI Safety Institute, coordinating ANSSI, Inria, LNE, and PEReN on security risk analysis and performance testing. This positions France among the first EU member states to operationalize Article 70’s national competent authority requirements.

Key finding: Organizations operating in France face overlapping compliance obligations under the EU AI Act, GDPR, and the Loi Informatique et Libertés. The August 2026 high-risk deadline requires immediate action—particularly for healthcare AI (regulated by ANSM and HAS), financial services AI (ACPR), and public administration systems subject to heightened fundamental rights scrutiny.

  • €77.68B: AI market by 2032
  • 1,000+: AI startups
  • 5: competent authorities
  • Aug 2026: high-risk deadline


France’s Implementation Status

France has moved quickly to implement the EU AI Act, positioning itself as a leader in AI governance while maintaining its ambition to be Europe’s AI innovation hub. The government’s approach balances regulatory compliance with support for its thriving AI ecosystem—over 1,000 startups and major players including Mistral AI, Dataiku, and Owkin.

Government Strategy: France 2030 and AI

The EU AI Act implementation aligns with France's broader "France 2030" investment plan and its national AI strategy. In February 2025, the government launched the third stage of this strategy following the interministerial AI committee meeting.

INESIA: France’s AI Safety Institute

In February 2025, France established the Institut National d’Évaluation et de Sécurité de l’Intelligence Artificielle (INESIA) to coordinate national actors on AI security, risk analysis, and regulatory implementation. INESIA brings together:

ANSSI (Agence nationale de la sécurité des systèmes d’information)

France’s national cybersecurity agency, responsible for AI system security assessment and incident response coordination.

Inria (Institut national de recherche en informatique)

Leading digital science research institute providing technical expertise on AI system evaluation methodologies.

LNE (Laboratoire national de métrologie et d’essais)

National testing laboratory conducting AI performance testing and conformity assessment support.

PEReN (Pôle d’expertise de la régulation numérique)

Digital regulation expertise center supporting technical analysis for enforcement authorities.

National Competent Authorities

France has adopted a multi-authority governance model for EU AI Act enforcement, distributing responsibilities across existing regulatory bodies based on their domain expertise. This approach mirrors France’s tradition of sectoral regulation while ensuring coordinated oversight.

French AI Act Competent Authorities

| Authority | Role | AI Act Responsibilities |
|---|---|---|
| DGCCRF | Coordinating Authority | Single point of contact, coordination of sectoral authorities, commercial manipulation oversight |
| CNIL | Data Protection & Rights | Personal data processing, biometric systems, emotion recognition, prohibited practices involving fundamental rights |
| ACPR | Financial Services | Credit scoring, insurance underwriting, financial services AI conformity |
| ARCOM | Media & Communications | Information integrity, deepfakes, AI-generated media content |
| Défenseur des droits | Rights Protection | Discrimination monitoring, fundamental rights impact assessment |

CNIL’s Expanded Role

The Commission nationale de l’informatique et des libertés (CNIL) plays a central role in French AI governance. Established in 1978 under the Loi Informatique et Libertés, CNIL has published its 2025-2028 strategic plan prioritizing artificial intelligence alongside children’s rights, cybersecurity, and everyday digital uses.

Under the EU AI Act, CNIL will oversee:

  • Personal data processing by AI systems
  • Biometric identification and categorization systems
  • Emotion recognition systems
  • Prohibited practices affecting fundamental rights

Implementation Timeline

France’s implementation follows the EU-wide timeline while adding national-specific milestones for authority designation and sectoral guidance.

France EU AI Act Timeline

| Date | Milestone | France-Specific Actions | Status |
|---|---|---|---|
| Aug 1, 2024 | Entry into Force | AI Act published in OJ; France begins authority designation | COMPLETE |
| Feb 2, 2025 | Prohibited AI Ban | CNIL and DGCCRF begin enforcement; INESIA established | ACTIVE |
| Aug 2, 2025 | GPAI & Authority Deadline | National competent authorities fully designated; GPAI obligations enforceable | ACTIVE |
| Aug 2, 2026 | High-Risk Compliance | Full enforcement of Articles 8-15; conformity assessments required | 8 MONTHS |
| Aug 2, 2027 | Medical AI Extended | ANSM/HAS coordination for medical device AI systems | 20 MONTHS |

Critical Timeline Note

French organizations deploying high-risk AI systems have roughly eight months until the August 2, 2026 deadline. Given that conformity assessments via notified bodies take 3-12 months and cost €10,000-€100,000, organizations must begin compliance work immediately. CNIL has signaled a "granular, GDPR-centric enforcement" approach, so expect rigorous scrutiny of data processing practices.

High-Risk AI Categories in the French Market

France’s diverse economy creates significant exposure to high-risk AI classifications across multiple sectors. Understanding which AI applications fall under Annex III requirements is critical for French organizations.

Healthcare AI

Dual regulatory oversight from ANSM (medical devices) and HAS (health technology assessment). The AI Act adds further conformity requirements.

High-Risk Applications:

  • Diagnostic AI systems (radiology, pathology, dermatology)
  • Clinical decision support systems
  • Patient triage and risk stratification
  • Medical device safety components

Defense & National Security

While largely exempt from the AI Act, France has voluntarily adopted ethical principles through the Ministry of Armed Forces’ AI ethics committee (established 2020).

Key Initiatives:

  • Amiad initiative (May 2024) for sovereign defense AI
  • Voluntary ethics framework for military AI systems
  • Human oversight requirements for autonomous systems

Financial Services

ACPR oversight under the AI Act complements existing prudential requirements. Credit and insurance AI face the strictest scrutiny.

High-Risk Applications:

  • Credit scoring and loan approval systems
  • Insurance underwriting and pricing algorithms
  • Anti-fraud detection affecting customer access
  • Automated investment advisory services

Public Administration

Government AI systems face heightened scrutiny under fundamental rights protections. Défenseur des droits monitors discrimination risks.

High-Risk Applications:

  • Benefits eligibility determination
  • Tax risk assessment systems
  • Immigration and asylum processing
  • Public safety and surveillance (post-Olympics AI camera use)

Luxury & Retail

France’s luxury sector and major retailers (e.g., Carrefour) increasingly deploy AI for personalization and pricing. Most fall under limited/minimal risk unless affecting essential services.

Considerations:

  • Personalized pricing algorithms (transparency obligations)
  • Customer authentication and fraud detection
  • Employment AI for workforce management (high-risk if used for decisions)

Article 12: Logging Requirements

Article 12 of the EU AI Act mandates that high-risk AI systems must technically allow for automatic recording of events (logs) over the lifetime of the system. This is one of the most technically demanding requirements, with significant implications for French organizations deploying high-risk AI.

Core Logging Obligations

Logging capabilities must enable recording of events relevant for:

  • Identifying situations that may result in the system presenting a risk or undergoing a substantial modification
  • Facilitating post-market monitoring under Article 72
  • Monitoring the operation of the system by deployers

Specific Requirements for Biometric Systems

For high-risk AI systems involving biometric identification (Annex III, point 1(a)), Article 12 specifies minimum logging requirements:

Biometric System Logging Minimums

  • (a) Recording of usage periods—start date/time and end date/time of each use
  • (b) The reference database against which input data has been checked
  • (c) The input data for which the search has led to a match
  • (d) Identification of the natural persons involved in verification of results
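As a sketch, these four minimums map naturally onto a structured log record. The field names below are illustrative choices, not terms prescribed by the Regulation:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class BiometricUseLog:
    """One illustrative log entry for a biometric identification system,
    covering the four Article 12 minimums (a)-(d)."""
    use_start: datetime                  # (a) start of the period of use
    use_end: datetime                    # (a) end of the period of use
    reference_database: str              # (b) database checked against
    matched_input_ref: str               # (c) input data that led to a match
    verifying_persons: tuple[str, ...]   # (d) humans who verified the results

entry = BiometricUseLog(
    use_start=datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc),
    use_end=datetime(2026, 3, 1, 9, 5, tzinfo=timezone.utc),
    reference_database="watchlist-v12",
    matched_input_ref="frame-000482",
    verifying_persons=("operator-17",),
)
# asdict() gives a serializable record ready for an append-only store
record = asdict(entry)
```

A frozen dataclass is a deliberate choice here: entries are immutable once created, which complements the tamper-evidence measures discussed below.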

Log Security and Retention

The AI Act requires logs to be:

  • Secured against tampering and unauthorized access
  • Retained for a period appropriate to the system's intended purpose, and for at least six months (Articles 19 and 26(6)), unless Union or national law requires otherwise
  • Available to competent authorities on request

French Implementation Note

While the EU AI Act doesn’t explicitly mandate cryptographic log protection, French regulatory expectations—shaped by ANSSI cybersecurity standards and CNIL’s data integrity requirements—effectively require tamper-evident logging. Organizations should implement cryptographic signatures or blockchain-based immutability for Article 12 compliance in France.
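One common way to achieve tamper evidence without a full blockchain is an HMAC hash chain, where each entry's authentication code also covers its predecessor, so modifying or deleting any earlier entry invalidates everything after it. A minimal Python sketch (key management and durable storage omitted):

```python
import hashlib
import hmac
import json

def append_event(chain: list, event: dict, key: bytes) -> None:
    """Append an event whose MAC covers the previous entry's MAC."""
    prev_mac = chain[-1]["mac"] if chain else "genesis"
    payload = json.dumps({"event": event, "prev": prev_mac}, sort_keys=True)
    mac = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"event": event, "prev": prev_mac, "mac": mac})

def chain_intact(chain: list, key: bytes) -> bool:
    """Recompute every MAC; any tampering yields False."""
    prev_mac = "genesis"
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev_mac},
                             sort_keys=True)
        expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        if entry["prev"] != prev_mac or not hmac.compare_digest(entry["mac"], expected):
            return False
        prev_mac = entry["mac"]
    return True

key = b"rotate-me-in-production"
log: list = []
append_event(log, {"ts": "2026-02-01T09:00:00Z", "type": "inference",
                   "model": "triage-v3", "operator": "op-12"}, key)
append_event(log, {"ts": "2026-02-01T09:01:00Z", "type": "human_override",
                   "operator": "op-12"}, key)
ok_before = chain_intact(log, key)       # True for an untouched chain
log[0]["event"]["operator"] = "op-99"    # simulate after-the-fact tampering
ok_after = chain_intact(log, key)        # False: the chain is broken
```

In production the signing key would live in an HSM or KMS and entries would go to append-only storage; the chaining logic itself stays this simple.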

Sector-Specific Compliance Considerations

Healthcare

French healthcare AI faces a complex regulatory landscape combining the EU AI Act with existing frameworks:

ANSM (Agence nationale de sécurité du médicament)

Regulates AI as software as a medical device (SaMD). Issues guidance on treatment risks, including hidden bias in training data. Coordinates with EU Notified Bodies for conformity assessment.

HAS (Haute Autorité de Santé)

Evaluates AI health technologies for reimbursement decisions. Assesses clinical benefit, safety, and ethical implications. Published AI implementation report at February 2025 AI Action Summit.

Additional Requirements

Healthcare AI processing patient data must also comply with GDPR/Loi Informatique et Libertés health data provisions, the Référentiel de Sécurité applicable to health information systems, and HDS (Hébergement de Données de Santé) certification requirements.

Financial Services

The ACPR (Autorité de contrôle prudentiel et de résolution) will enforce AI Act requirements alongside existing prudential regulations.

Public Administration

French public sector AI deployments face heightened scrutiny given the fundamental rights implications.

Interaction with Loi Informatique et Libertés

Organizations must navigate the intersection of EU AI Act requirements with France’s data protection framework—the Loi Informatique et Libertés (Law No. 78-17 of January 6, 1978, as amended) and GDPR.

Framework Hierarchy

The legal framework now operates at both levels: the EU AI Act and GDPR at European level, and the Loi Informatique et Libertés nationally.

Key Overlapping Requirements

| Requirement Area | EU AI Act | GDPR/Loi IL | Practical Implication |
|---|---|---|---|
| Data Governance | Article 10 | Articles 5-6 GDPR | Training data must be lawfully collected AND meet AI Act quality standards |
| Transparency | Article 13 | Articles 13-14 GDPR | AI-specific disclosures supplement GDPR privacy notices |
| Automated Decisions | Article 14 | Article 22 GDPR | Human oversight requirements reinforce GDPR automated decision-making rights |
| Bias Prevention | Article 10(2)(f) | Non-discrimination law | Training data must be examined for biases; mitigation required |

Conformity Assessment Pathway

French organizations deploying high-risk AI must complete conformity assessment before placing systems on the market. Two pathways exist:

Internal Control (Article 43)

Self-assessment by the provider:

  • Technical documentation per Annex IV
  • Quality management system
  • Post-market monitoring plan
  • EU declaration of conformity

Available for most high-risk systems

Notified Body Assessment

Third-party assessment required for:

  • Biometric identification systems
  • Medical AI devices (most Class II+)
  • AI covered by other EU regulations requiring third-party certification

Cost: €10,000-€100,000 | Timeline: 3-12 months

French Notified Bodies

France is designating notified bodies capable of conducting AI Act conformity assessments. LNE (Laboratoire national de métrologie et d’essais), as part of INESIA, is positioned to become a key assessment body. Organizations should monitor the NANDO database for French notified body designations.
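The choice between the two pathways can be expressed as a simple triage rule. This sketch is illustrative only; the Regulation's actual routing depends on the exact Annex III category and on whether harmonised standards cover the system:

```python
def assessment_pathway(system: dict) -> str:
    """Rough triage between Article 43 internal control and
    notified-body assessment (simplified, not legal advice)."""
    if system.get("remote_biometric_identification"):
        return "notified body"
    # e.g. most Class II+ medical devices certified under the MDR
    if system.get("third_party_cert_under_other_eu_law"):
        return "notified body"
    return "internal control (Article 43)"

pathway_biometric = assessment_pathway({"remote_biometric_identification": True})
pathway_credit = assessment_pathway({"use_case": "credit scoring"})
```

Even for the internal-control route, the provider still owes Annex IV documentation, a quality management system, and an EU declaration of conformity, as listed above.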

Enforcement and Penalties

France will enforce EU AI Act penalties through its multi-authority structure, with fines mirroring EU-wide maximums.

Penalty Structure in France

| Violation | Maximum Penalty | Enforcing Authority |
|---|---|---|
| Prohibited AI practices | €35M or 7% global revenue | CNIL, DGCCRF |
| High-risk non-compliance | €15M or 3% global revenue | Sectoral authority (ACPR, ARCOM, etc.) |
| Incorrect information | €7.5M or 1% global revenue | Coordinated by DGCCRF |
| Transparency violations | €7.5M or 1% global revenue | CNIL (biometrics), ARCOM (media) |
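Each tier works as "the higher of" a fixed cap and a share of worldwide annual turnover, which a small helper makes concrete (illustrative arithmetic only, not legal advice):

```python
def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    """Maximum administrative fine for an undertaking: the higher of
    the fixed cap and the percentage of worldwide annual turnover."""
    caps = {
        "prohibited": (35_000_000, 0.07),
        "high_risk": (15_000_000, 0.03),
        "information": (7_500_000, 0.01),
    }
    fixed_cap, pct = caps[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# For a €1B-turnover group, the 7% prong dominates the €35M cap
fine_large = max_fine_eur("prohibited", 1_000_000_000)
# For a €100M-turnover firm, the fixed €15M cap dominates the 3% prong
fine_small = max_fine_eur("high_risk", 100_000_000)
```

The asymmetry matters in practice: for large groups the turnover prong is almost always the binding figure, so exposure scales with group revenue, not with the fixed caps.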

Market Surveillance Powers

French authorities can exercise extensive investigatory powers under Article 74, including full access to documentation and training datasets and, upon reasoned request, access to the source code of high-risk systems.

Compliance Roadmap for French Organizations

Organizations operating in France should follow this phased approach to achieve EU AI Act compliance before the August 2026 deadline.

France Compliance Roadmap

EU AI Act Implementation Steps

1. AI System Inventory & Classification (Q1 2025)

Map all AI systems against Annex III high-risk categories. Identify systems under CNIL jurisdiction (personal data, biometrics) vs. sectoral authorities (ACPR, ARCOM). Document intended purpose, affected populations, and current compliance state. Flag any prohibited practices for immediate remediation.
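The jurisdiction split in this first step can be sketched as a lookup table. The mapping below is a hypothetical simplification of the authority table earlier in this guide, with unmapped systems defaulting to the DGCCRF single point of contact:

```python
# Hypothetical mapping from Annex III-style use cases to the French
# authority most likely to lead oversight.
LEAD_AUTHORITY = {
    "biometric_identification": "CNIL",
    "emotion_recognition": "CNIL",
    "credit_scoring": "ACPR",
    "insurance_pricing": "ACPR",
    "ai_generated_media": "ARCOM",
}

def triage(inventory: list) -> dict:
    """Group an AI-system inventory by probable lead authority."""
    grouped: dict = {}
    for system in inventory:
        authority = LEAD_AUTHORITY.get(system["use_case"], "DGCCRF")
        grouped.setdefault(authority, []).append(system["name"])
    return grouped

grouped = triage([
    {"name": "face-gate", "use_case": "biometric_identification"},
    {"name": "loan-score", "use_case": "credit_scoring"},
    {"name": "price-bot", "use_case": "dynamic_pricing"},
])
```

A real inventory would also record intended purpose, affected populations, and current compliance state per system, as the step above describes.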

2. Gap Assessment & Risk Management (Q1-Q2 2025)

Evaluate high-risk systems against Articles 9-15. Assess overlapping requirements under GDPR and Loi Informatique et Libertés. Identify whether internal control or notified body assessment applies. Prioritize by business criticality and compliance gap severity.

3. Article 12 Logging Implementation (Q2-Q3 2025)

Implement automated logging per Article 12 requirements. Ensure logs capture inputs, outputs, decisions, and human oversight actions. Apply ANSSI-aligned security measures including tamper-evidence. Establish retention policies appropriate to system purpose and regulatory expectations.

4. Technical Documentation & QMS (Q3-Q4 2025)

Prepare Annex IV technical documentation covering system description, development process, data governance, and risk management. Establish quality management system per Article 17. Integrate with existing ISO 42001 or NIST AI RMF frameworks. Document evidence that controls execute—not just policies.

5. Conformity Assessment (Q4 2025 - Q2 2026)

For systems requiring notified body assessment, initiate engagement 6-9 months before deadline. For internal control pathway, complete self-assessment and prepare EU declaration of conformity. Affix CE marking upon successful assessment. Register in EU database per Article 71.

6. Post-Market Monitoring (Ongoing from Aug 2026)

Implement continuous monitoring per Article 72. Establish serious incident reporting to CNIL/sectoral authorities within 15 days (Article 73). Maintain technical documentation updates as systems evolve. Prepare for market surveillance inspections by French authorities.
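The 15-day reporting window is simple date arithmetic, sketched below; note that Article 73 imposes shorter windows for certain incident types, so treat 15 days as the default ceiling rather than a universal rule:

```python
from datetime import date, timedelta

def report_deadline(discovered: date, window_days: int = 15) -> date:
    """Latest date to notify the competent authority of a serious
    incident under Article 73's default 15-day window."""
    return discovered + timedelta(days=window_days)

deadline = report_deadline(date(2026, 9, 1))  # incident found Sept 1, 2026
```

Wiring a check like this into incident-management tooling ensures the notification clock starts from the day the deployer or provider becomes aware of the incident.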

How GLACIS Helps with Article 12 Compliance

Article 12’s logging requirements present significant technical challenges. Organizations must implement automated event recording that enables traceability throughout the AI system lifecycle—while ensuring logs are secured against tampering and retained appropriately.

GLACIS addresses these challenges through continuous attestation technology that generates cryptographic evidence of AI control execution:

Tamper-Evident Logging

Cryptographically signed logs prove that recorded events haven’t been modified. This exceeds Article 12’s security requirements and aligns with ANSSI expectations for tamper-evident audit trails.

Automated Event Recording

GLACIS captures inputs, outputs, decisions, and human oversight actions automatically—satisfying Article 12(1)’s requirement for systems that "technically allow for the automatic recording of events."

Risk Identification Support

Continuous monitoring identifies situations where AI systems may present risks or require substantial modification—directly supporting Article 12(2)(a) compliance.

Regulatory-Ready Evidence

Generate audit-ready evidence packs mapped to EU AI Act Articles 9-15, CNIL requirements, and ISO 42001. Prepare for French regulatory inspections with documentation that proves controls execute—not just policies that exist.

Frequently Asked Questions

Who enforces the EU AI Act in France?

France uses a multi-authority model. DGCCRF serves as the coordinating authority and single point of contact. CNIL oversees fundamental rights and personal data aspects, including biometric systems and prohibited practices. ACPR regulates financial services AI, ARCOM handles media-related AI, and the Défenseur des droits monitors discrimination. INESIA coordinates technical safety assessment.

What is INESIA and what role does it play?

INESIA (Institut National d’Évaluation et de Sécurité de l’Intelligence Artificielle) is France’s AI Safety Institute, established in February 2025. It coordinates ANSSI (cybersecurity), Inria (research), LNE (testing), and PEReN (digital regulation) on AI security risk analysis, regulatory implementation, and performance testing. INESIA supports national authorities with technical expertise for AI Act enforcement.

How does the EU AI Act interact with French data protection law?

The AI Act complements GDPR and the Loi Informatique et Libertés. AI systems processing personal data must comply with all three frameworks. Key overlaps include data governance (Article 10 AI Act, GDPR Articles 5-6), transparency (Article 13 AI Act, GDPR Articles 13-14), and automated decision-making (Article 14 AI Act, GDPR Article 22). CNIL enforces data protection requirements under all frameworks.

What are Article 12 logging requirements?

Article 12 requires high-risk AI systems to automatically record events throughout their lifetime. Logs must enable traceability, identify risk situations, support post-market monitoring, and monitor operations. For biometric systems, specific minimums include recording usage periods, reference databases, matching input data, and persons involved in verification. Logs must be secured and retained appropriately.

What are the key deadlines for France?

Prohibited AI practices became banned February 2, 2025. National competent authorities must be designated by August 2, 2025 (France has largely completed this). GPAI model obligations apply August 2, 2025. High-risk AI systems must comply by August 2, 2026. Medical AI devices have extended timelines through August 2027.

Which French sectors face the highest compliance burden?

Healthcare AI faces dual oversight from ANSM and HAS plus AI Act requirements. Financial services AI is regulated by ACPR under both AI Act and prudential rules. Public administration AI faces heightened fundamental rights scrutiny. Defense AI, while partially exempt, follows voluntary ethical principles from the Ministry of Armed Forces.

What penalties apply for violations in France?

Penalties mirror EU standards: prohibited AI practices face up to €35 million or 7% of global annual turnover. High-risk non-compliance incurs up to €15 million or 3% of global turnover. Transparency violations carry up to €7.5 million or 1% of global turnover. French authorities can also require system withdrawal from the market.

References

  1. European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, July 12, 2024. EUR-Lex 32024R1689
  2. CNIL. "IA, mineurs, cybersécurité, quotidien numérique : la CNIL publie son plan stratégique 2025-2028." January 16, 2025. cnil.fr
  3. Direction générale des Entreprises. "Les autorités compétentes pour la mise en œuvre du règlement européen sur l’intelligence artificielle." 2025. entreprises.gouv.fr
  4. CNIL. "Entry into force of the European AI Regulation: the first questions and answers from the CNIL." 2024. cnil.fr
  5. EU Artificial Intelligence Act. "Article 12: Record-keeping." artificialintelligenceact.eu
  6. Chambers and Partners. "Artificial Intelligence 2025 - France: Trends and Developments." 2025. chambers.com
  7. Fortune Business Insights. "France Artificial Intelligence Market Size, Share." 2024. fortunebusinessinsights.com
  8. Elysée. "Make France an AI Powerhouse." AI Action Summit, February 2025. elysee.fr
  9. Herbert Smith Freehills. "AI Tracker France." 2025. hsfkramer.com
  10. Technology’s Legal Edge. "State of the Act: EU AI Act implementation in key Member States." November 2025. technologyslegaledge.com

EU AI Act Compliance for France in Days, Not Months

GLACIS generates cryptographic evidence that your AI controls execute correctly—mapped to EU AI Act Articles 9-15, CNIL requirements, and ISO 42001. Get audit-ready documentation before the August 2026 deadline.

Start Your Compliance Sprint
