Germany’s Implementation Status
Germany is implementing the EU AI Act through a combination of directly applicable EU regulation and national implementing legislation. As the EU’s largest economy with significant AI deployment across automotive, manufacturing, healthcare, and financial services, Germany’s implementation approach has outsized influence on how the regulation works in practice.
National Implementing Legislation
The German government published the first draft of its national implementation law in late August 2025: the KI-Marktüberwachungsgesetz und Innovationsförderungsgesetz (KI-MIG), the "AI Market Surveillance and Innovation Promotion Act." This law establishes:[1][2]
- Bundesnetzagentur (BNetzA) as the primary market surveillance authority for AI Act compliance
- Decentralized supervisory structure preserving sectoral authorities’ roles in their domains
- Independent Market Surveillance Chamber (UKIM) for sensitive high-risk AI oversight
- AI regulatory sandboxes for innovation under regulatory supervision
- AI Service Desk at BNetzA to support business compliance
The legislative process was delayed by Germany's snap parliamentary elections in early 2025. The new Federal Government is expected to enact the KI-MIG in the latter half of 2025, though the EU AI Act's requirements are directly applicable regardless of national legislation status.[1]
Direct Applicability Note
Unlike directives requiring national transposition, the EU AI Act (Regulation 2024/1689) is directly applicable across all member states. German organizations must comply with its requirements regardless of national implementing legislation status. The KI-MIG establishes enforcement mechanisms, but doesn’t change substantive obligations.
National Competent Authority
Article 70 of the EU AI Act requires each member state to designate at least one national competent authority to supervise application and implementation. Germany’s approach reflects its federal structure and existing regulatory landscape.
Bundesnetzagentur (Federal Network Agency)
The Bundesnetzagentur (BNetzA) has been designated as Germany’s primary market surveillance authority for the EU AI Act. BNetzA, an independent higher federal authority under the Federal Ministry for Economic Affairs and Climate Action, already regulates telecommunications, postal services, electricity, gas, and railway markets.[1]
BNetzA’s AI Act responsibilities include:
Market Surveillance Coordination
Lead authority for AI Act compliance monitoring, ensuring coordinated supervision across sectors and providing centralized resources for other authorities. Conducts inspections, processes complaints, and coordinates cross-border enforcement.
AI Service Desk
Operational since July 2025, the AI Service Desk provides guidance to organizations on AI Act compliance, risk classification questions, and documentation requirements. Acts as first point of contact for businesses deploying AI in Germany.
AI Lab
Technical testing facility for evaluating AI system compliance, conducting conformity assessments, and providing technical expertise for enforcement actions.
Regulatory Sandboxes
BNetzA will operate AI regulatory sandboxes as required by Article 57 of the AI Act, allowing organizations to develop and test AI systems under controlled conditions with regulatory guidance before full market deployment.
Sectoral Authorities
Germany’s draft implementation maintains a decentralized supervisory structure where existing sectoral regulators retain AI-related market surveillance responsibilities in their domains:[1][2]
German Sectoral AI Authorities
| Authority | Domain | AI Act Relevance |
|---|---|---|
| BfArM | Medical devices, in vitro diagnostics | Medical AI, diagnostic algorithms, clinical decision support |
| BaFin | Financial services supervision | Credit scoring AI, algorithmic trading, insurance underwriting |
| KBA | Motor vehicles and road traffic | Autonomous vehicles, ADAS, vehicle type-approval |
| State DPAs | Data protection | GDPR/AI Act intersection, biometric AI, employee monitoring |
| Länder Authorities | Product safety | Consumer AI products, general market surveillance |
Independent Market Surveillance Chamber (UKIM)
The draft KI-MIG establishes an Unabhängige Kammer für die Marktüberwachung (UKIM), an Independent Market Surveillance Chamber within BNetzA, to oversee particularly sensitive high-risk AI systems. UKIM has exclusive oversight of AI used in:[1]
- Law enforcement — Risk assessment, evidence evaluation, crime prediction
- Migration and asylum — Application processing, document verification
- Border control — Biometric identification, risk assessment
- Justice and democratic processes — Judicial AI, election-related systems
UKIM reports annually to the Bundestag (German Parliament) on AI deployment in these sensitive areas, providing democratic oversight of government AI use.
Implementation Timeline
The EU AI Act’s staggered implementation timeline applies uniformly across all member states, including Germany. Key deadlines for German organizations:
EU AI Act Timeline for Germany
| Date | Milestone | Impact for Germany | Status |
|---|---|---|---|
| Aug 1, 2024 | Entry into Force | AI Act legally effective across EU | COMPLETE |
| Feb 2, 2025 | Prohibited AI Ban | Social scoring, manipulative AI, untargeted biometric scraping banned | ACTIVE |
| Aug 2, 2025 | GPAI Compliance | Foundation model obligations, AI Service Desk operational | ACTIVE |
| Aug 2, 2026 | High-Risk Systems | Full conformity for Annex III systems; notified bodies operational | 8 MONTHS |
| Aug 2, 2026 | AI Regulatory Sandboxes | BNetzA sandboxes must be operational | 8 MONTHS |
| Aug 2, 2027 | Medical AI Extended | Extended timeline for medical device AI safety components | 20 MONTHS |
High-Risk AI Sectors in Germany
Germany’s industrial structure means certain high-risk AI categories have outsized relevance. The following sectors face significant compliance obligations under Annex III:
Automotive and Manufacturing
Germany’s automotive industry—home to Volkswagen, BMW, Daimler, and Bosch—faces substantial AI Act implications. Most AI systems in autonomous vehicles and Advanced Driver Assistance Systems (ADAS) are classified as high-risk when used as safety components.[4][5]
Automotive AI Classification
AI systems in vehicles may be classified as high-risk under two pathways:
1. Annex I (Article 6(1)): AI as a safety component of products requiring third-party conformity assessment (vehicle type-approval)
2. Annex III Category 2: AI managing critical infrastructure, including road traffic
However, the Type-Approval Framework Regulation (EU 2018/858) serves as lex specialis for vehicle-related AI safety components, with AI Act requirements supplementary rather than primary. The German Association of the Automotive Industry (VDA) established the KI-Absicherung project to develop AI assurance verification methods for in-vehicle systems.[4]
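The dual-pathway classification above can be sketched as a simple check. The data structure and field names below are illustrative assumptions for a triage tool, not an official taxonomy, and a real classification always requires legal review:

```python
from dataclasses import dataclass

@dataclass
class VehicleAISystem:
    name: str
    is_safety_component: bool   # safety component of a type-approved product
    manages_road_traffic: bool  # critical-infrastructure use (Annex III, point 2)

def high_risk_pathways(system: VehicleAISystem) -> list[str]:
    """Return the (possibly overlapping) high-risk classification pathways."""
    pathways = []
    if system.is_safety_component:
        # Article 6(1): safety component of an Annex I product subject to
        # third-party conformity assessment (e.g. vehicle type-approval)
        pathways.append("Annex I / Article 6(1)")
    if system.manages_road_traffic:
        pathways.append("Annex III point 2 (critical infrastructure)")
    return pathways
```

A system can match both pathways at once; per the lex specialis point above, which regime takes the lead is then a legal question, not a technical one.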
Key automotive AI applications requiring compliance attention:
- Autonomous driving systems (Level 3+) — Full high-risk requirements
- ADAS features (automatic emergency braking, lane keeping) — Safety component classification
- In-cabin monitoring (driver drowsiness, emotion detection) — Transparency obligations, potential high-risk
- Predictive maintenance AI — Generally minimal risk unless affecting safety
Healthcare and Medical Devices
Germany’s substantial healthcare sector and medical device industry (including Siemens Healthineers, Fresenius, B. Braun) face high-risk AI requirements. Medical AI is addressed through both the AI Act and the Medical Device Regulation (EU 2017/745).
The BfArM (Federal Institute for Drugs and Medical Devices) retains supervisory responsibility for AI-powered medical devices. Most clinical decision support systems, diagnostic AI, and treatment recommendation engines are classified as high-risk requiring:[6]
- Notified body conformity assessment (costs: 10,000-100,000 euros)
- Clinical evaluation and performance studies
- Post-market surveillance and vigilance reporting
- Extended deadline through August 2027 for AI as medical device safety components
Financial Services
German financial institutions using AI for creditworthiness assessment, insurance underwriting, or algorithmic trading face high-risk classification under Annex III Category 5 (essential services). BaFin maintains supervisory authority, with AI Act requirements complementing existing financial regulation.
Article 12 Logging Requirements
Article 12 of the EU AI Act establishes mandatory logging requirements for high-risk AI systems—a requirement with particular implications in Germany due to intersection with data protection law and works council rights.
Core Logging Obligations
High-risk AI systems must be designed with automatic recording of events (logs) throughout operation, including:
Traceability Requirements
- Recording of the period of each use (start date/time, end date/time)
- Reference database against which input data was checked
- Input data for which the search led to a match
- Identity of natural persons involved in verifying results
Technical Requirements
- Logging capabilities ensuring traceability throughout the system lifecycle
- Logging level appropriate to the intended purpose of the high-risk system
- Protection by appropriate security measures (tamper-evidence)
- Retention for a period appropriate to the intended purpose
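One common way to make logs tamper-evident is a hash chain, where each record commits to the one before it. The sketch below is a minimal illustration of that technique, not a complete Article 12 implementation (it omits retention, access control, and durable storage):

```python
import hashlib
import json
from datetime import datetime, timezone

class HashChainedLog:
    """Append-only event log where each record includes the hash of the
    previous record, so any after-the-fact modification breaks the chain."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value for the first record

    def append(self, event: dict) -> dict:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; return False if any record was altered."""
        prev = "0" * 64
        for rec in self._records:
            if rec["prev_hash"] != prev:
                return False
            body = {k: v for k, v in rec.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

In practice the chain head would also be anchored externally (e.g. periodically written to write-once storage), since an attacker who can rewrite the whole log can rebuild the chain.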
German-Specific Considerations
Article 12 logging intersects with several German legal requirements:
- GDPR (DSGVO) compliance: Logs containing personal data must satisfy GDPR requirements including purpose limitation, storage limitation, and data subject rights. Organizations must balance AI Act logging mandates with GDPR minimization principles.
- Works council information rights: Under Section 80(2) BetrVG, works councils can request access to AI system logs to verify compliance with works agreements and employee protection provisions.
- Sector-specific retention: Financial services (MaRisk), healthcare (medical records), and automotive (product liability) have additional retention requirements that must harmonize with AI Act logging.
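Harmonizing overlapping retention regimes usually means the longest applicable period governs, subject to a GDPR storage-limitation review for personal data. A minimal resolver is sketched below; the periods are illustrative placeholders, not the actual statutory values, which must be confirmed for each regime:

```python
# Retention periods in years. These numbers are ASSUMED for illustration;
# the real values come from the applicable sector rules and legal advice.
RETENTION_RULES = {
    "ai_act_default": 1,
    "marisk_financial": 5,
    "medical_records": 10,
    "product_liability": 10,
}

def effective_retention(applicable: list[str]) -> int:
    """Longest retention period among the applicable regimes wins.
    Personal data in the logs still needs a separate GDPR
    storage-limitation justification for that full period."""
    return max(RETENTION_RULES[rule] for rule in applicable)
```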
Sector-Specific Considerations
Employment and Works Councils (Betriebsrat)
German employers deploying AI in employment contexts face dual compliance requirements: EU AI Act obligations and national works council co-determination rights under the Works Constitution Act (Betriebsverfassungsgesetz, BetrVG).[3][7]
Employment AI is explicitly classified as high-risk under Annex III Category 4, covering:
- Recruitment and candidate screening AI
- Task allocation and work assignment systems
- Promotion and advancement decisions
- Performance monitoring and evaluation
- Termination decisions
Works Council Rights (BetrVG)
The 2021 Works Council Modernization Act (Betriebsrätemodernisierungsgesetz) added AI-specific provisions to BetrVG:[3][7]
Works Council AI Rights under BetrVG
| Section | Right | Practical Implication |
|---|---|---|
| Section 80(3) | Expert consultation | Works council may engage external AI experts at employer expense |
| Section 87(1) No. 6 | Co-determination on monitoring | Veto power over AI systems capable of monitoring employee behavior/performance |
| Section 90(1) No. 3 | Information before introduction | Employer must inform works council in good time before deploying AI |
| Section 95(2a) | Personnel selection guidelines | Works council involvement in AI-based personnel selection criteria |
Critical Planning Factor
German organizations should factor 3-6 months additional timeline for works council negotiations when deploying high-risk employment AI. Works agreements (Betriebsvereinbarungen) covering AI use, data handling, and employee protections are often required before deployment. Failure to secure works council agreement can result in injunctions blocking AI system use.
Healthcare Sector
Healthcare AI in Germany must satisfy both AI Act requirements and medical device regulations. Key considerations:
- BfArM oversight: Federal Institute for Drugs and Medical Devices supervises AI medical devices
- DiGA pathway: Digital Health Applications (DiGA) on the BfArM directory face specific AI requirements
- Extended deadline: AI as medical device safety component has until August 2027
- Patient data protection: Strict German healthcare privacy requirements (beyond GDPR)
Financial Services
German financial institutions under BaFin supervision using AI for high-risk applications must address:
- Credit scoring AI: High-risk under Annex III; requires full conformity assessment
- Insurance underwriting: Pricing and risk assessment AI classified as high-risk
- MaRisk integration: BaFin’s Minimum Requirements for Risk Management apply alongside AI Act
- Model risk management: Existing SR 11-7-equivalent requirements complement AI Act obligations
Conformity Assessment Pathway
German organizations with high-risk AI systems must complete conformity assessment before August 2, 2026. Two pathways exist, depending on system classification:
Internal Control (Most High-Risk)
Self-assessment by provider based on:
- Technical documentation (Annex IV)
- Quality management system
- Post-market monitoring plan
- EU declaration of conformity
Timeline: 3-6 months | Cost: Internal resources
Notified Body Assessment
Third-party assessment required for:
- Biometric identification systems
- AI medical devices
- Products under Annex I requiring third-party conformity
Timeline: 3-12 months | Cost: 10,000-100,000 EUR
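A simple way to track readiness against either pathway is a checklist diff. The artifact names below are shorthand for the documents listed above, not official terminology:

```python
# Required artifacts per pathway (illustrative shorthand labels).
REQUIRED_ARTIFACTS = {
    "internal_control": {
        "technical_documentation",      # Annex IV
        "quality_management_system",
        "post_market_monitoring_plan",
        "eu_declaration_of_conformity",
    },
    "notified_body": {
        "technical_documentation",
        "quality_management_system",
        "notified_body_certificate",    # issued by the third-party assessor
        "eu_declaration_of_conformity",
    },
}

def missing_artifacts(pathway: str, prepared: set[str]) -> set[str]:
    """Artifacts still outstanding for the given conformity pathway."""
    return REQUIRED_ARTIFACTS[pathway] - prepared
```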
German Notified Bodies
Germany’s notified bodies for AI Act conformity assessment are being designated. Organizations should engage early given limited capacity and extended assessment timelines. Key German notified bodies with relevant technical competence include TÜV SÜD, TÜV Rheinland, DEKRA, and sector-specific bodies designated under existing EU regulations.
Enforcement and Penalties
The EU AI Act’s penalty structure applies uniformly across Germany, with BNetzA and sectoral authorities empowered to impose fines:
Penalty Structure in Germany
| Violation Type | Maximum Fine | Enforcing Authority |
|---|---|---|
| Prohibited AI practices | 35M EUR or 7% global revenue | BNetzA, UKIM (sensitive areas) |
| High-risk non-compliance | 15M EUR or 3% global revenue | BNetzA, sectoral authorities |
| GPAI obligations | 15M EUR or 3% global revenue | EU AI Office (direct enforcement) |
| Incorrect information | 7.5M EUR or 1% global revenue | BNetzA, sectoral authorities |
| Transparency violations | 7.5M EUR or 1% global revenue | BNetzA, sectoral authorities |
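For undertakings, the caps in the table work as "whichever is higher" between the fixed amount and the turnover percentage (Article 99). The arithmetic is simple enough to show directly; the turnover figure below is a made-up example:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float) -> float:
    """Maximum fine for an undertaking: the higher of the fixed cap
    and the percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice cap for a hypothetical firm with EUR 2bn turnover:
# max(35m, 7% of 2bn) = EUR 140m, so the turnover-based cap governs.
```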
Enforcement Powers
German authorities have extensive investigatory powers under Article 74, including:
- Access to all conformity documentation and technical data
- Access to training, validation, and testing datasets
- Access to source code and algorithms (protected as confidential)
- Power to require corrective action or market withdrawal
Compliance Roadmap for German Organizations
German organizations should implement a phased approach accounting for both EU deadlines and national specifics including works council processes:
Germany EU AI Act Compliance Roadmap
AI System Inventory & Classification (Month 1)
Catalog all AI systems. Classify per Annex III risk categories. Identify systems requiring works council involvement (Section 87 BetrVG). Map to existing sector-specific requirements (BfArM, BaFin, KBA). Document intended purpose and affected populations.
Works Council Engagement (Month 1-4)
Inform works council per Section 90 BetrVG. Prepare for co-determination negotiations. Draft Betriebsvereinbarung (works agreement) covering AI use, data handling, and employee protections. Allow 3-6 months for negotiation and expert consultation.
Risk Management & Documentation (Month 2-5)
Implement Article 9 risk management system. Prepare Annex IV technical documentation. Integrate with existing frameworks (ISO 42001, sector requirements). Document risk mitigation measures and residual risks.
Article 12 Logging Implementation (Month 3-6)
Deploy Article 12-compliant logging infrastructure. Ensure GDPR/DSGVO compliance for logged personal data. Implement tamper-evident storage. Configure retention periods per sector requirements. Prepare for works council and regulator access requests.
Conformity Assessment (Month 4-8)
Complete internal control assessment or engage German notified body. Prepare EU declaration of conformity. Register in EU AI database per Article 71. Affix CE marking. For medical AI, coordinate with BfArM and MDR requirements.
Post-Market Monitoring & Continuous Compliance (Ongoing)
Implement Article 72 post-market monitoring. Establish Article 73 serious incident reporting to BNetzA/sectoral authorities. Conduct periodic reviews. Update documentation as systems evolve. Prepare for market surveillance inspections.
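One building block of post-market monitoring is an automated drift check that flags performance degradation for human review. The threshold and metric below are illustrative assumptions; whether a flagged event constitutes a "serious incident" under Article 73 is a legal determination, not a purely statistical one:

```python
def drift_alert(baseline_accuracy: float, observed_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """Flag a performance drop beyond the tolerance for compliance review.
    Tolerance of 0.05 is an assumed example value, not a regulatory figure."""
    return (baseline_accuracy - observed_accuracy) > tolerance
```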
Critical insight: German organizations face tighter effective timelines due to works council requirements. A notified body assessment starting January 2026 may not complete before the August deadline. Start now.
Frequently Asked Questions
Who is the competent authority for the EU AI Act in Germany?
The Bundesnetzagentur (Federal Network Agency, BNetzA) is designated as Germany’s primary market surveillance authority. BNetzA coordinates AI Act supervision, operates an AI lab and service desk, and manages regulatory sandboxes. Sectoral authorities like BfArM (medical devices), BaFin (financial services), and KBA (vehicles) retain responsibility in their domains. An Independent Market Surveillance Chamber (UKIM) oversees sensitive high-risk systems in law enforcement, migration, and justice.
What is the KI-Verordnung and when does it take effect?
KI-Verordnung is the German term for the EU AI Act (Regulation 2024/1689). Germany is implementing it through national legislation called the KI-Marktüberwachungsgesetz und Innovationsförderungsgesetz (KI-MIG). The EU AI Act is directly applicable, with prohibited practices banned since February 2025, GPAI requirements effective August 2025, and high-risk system compliance required by August 2026.
Do German works councils have rights regarding AI systems?
Yes, extensive rights. Under the Works Constitution Act (BetrVG), employers must inform works councils before introducing AI (Section 90), works councils can consult external AI experts (Section 80), they have co-determination rights over systems that could monitor employees (Section 87), and must be involved in AI-based personnel selection guidelines (Section 95). These rights apply in addition to EU AI Act deployer obligations and typically require negotiated works agreements before AI deployment.
How does the EU AI Act affect German automotive companies?
Most AI in autonomous vehicles and ADAS is classified as high-risk when used as safety components. However, vehicle-related AI safety components are primarily regulated through the Type-Approval Framework Regulation (EU 2018/858), with AI Act requirements supplementary. German automakers must complete conformity assessments by August 2026. The VDA’s KI-Absicherung project develops AI assurance verification methods specific to in-vehicle systems.
What are the penalties for EU AI Act violations in Germany?
Penalties mirror EU maximums: up to 35 million euros or 7% of global annual turnover for prohibited AI practices, up to 15 million euros or 3% for high-risk system non-compliance, and up to 7.5 million euros or 1% for providing incorrect information. BNetzA and sectoral authorities enforce penalties, with UKIM overseeing sensitive high-risk systems.
What is Article 12 logging and why does it matter in Germany?
Article 12 requires high-risk AI systems to automatically log events throughout operation, ensuring traceability of inputs, outputs, and decisions. In Germany, this intersects with GDPR data protection requirements, works council information rights (Section 80 BetrVG), and sector-specific retention rules. Logs must be tamper-evident, retained appropriately, and available to supervisory authorities. GLACIS provides Article 12-compliant logging infrastructure with automatic retention management.
Are there AI regulatory sandboxes in Germany?
Yes. Article 57 requires member states to establish at least one AI regulatory sandbox by August 2026. Germany’s draft KI-MIG provides for sandboxes operated by BNetzA. These controlled environments allow organizations to develop and test AI systems under regulatory supervision, receiving compliance guidance and reducing uncertainty before full market launch.
References
- [1] Technology’s Legal Edge. "State of the Act: EU AI Act implementation in key Member States." November 2025. technologyslegaledge.com
- [2] Pinsent Masons. "AI Act: Germany consults on implementation law." 2025. pinsentmasons.com
- [3] Hogan Lovells. "AI in German Employment - Navigating the AI Act, GDPR, and National Legislation." 2024. hoganlovells.com
- [4] VDA - German Association of the Automotive Industry. "Position: AI Act." 2023. vda.de
- [5] Taylor Wessing. "AI Act and the Automotive Industry - Where does the road lead?" March 2025. taylorwessing.com
- [6] European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, July 12, 2024. EUR-Lex 32024R1689
- [7] Bird & Bird. "First Judgement on the Rights of Works Councils when Employees use AI Systems." 2024. twobirds.com
- [8] White & Case. "AI Watch: Global regulatory tracker - Germany." 2025. whitecase.com
- [9] Chambers and Partners. "Artificial Intelligence 2025 - Germany." Practice Guide, 2025. chambers.com
- [10] DLA Piper. "The German government provides information on its plans for AI and employee protection." 2024. dlapiper.com