
EU AI Act for General Counsel

Legal compliance guidance for liability assessment, contractual obligations, regulatory defense, and evidence requirements under AI regulations.

14 min read • 2,200+ words
Joe Braidwood
CEO, GLACIS

Executive Summary

The EU AI Act creates new legal exposure that demands General Counsel attention. Unlike GDPR—which primarily addressed data handling—AI regulation imposes obligations on system behavior, decision-making processes, and organizational governance. Penalties reach €35 million or 7% of global annual turnover, whichever is higher, with enforcement phasing in from February 2025 through August 2027.

For General Counsel, this isn’t merely a compliance exercise. The regulation reshapes liability allocation between AI providers and deployers, requires contractual updates across vendor and customer relationships, and establishes evidence standards that will define regulatory defense. The proposed EU AI Liability Directive would shift the burden of proof to defendants in many AI harm cases.

Key finding: Organizations that treat AI compliance as a documentation exercise will find themselves exposed. Regulators and courts will demand contemporaneous evidence that controls actually executed—not policies that existed on paper. General Counsel must ensure their organizations can produce defensible proof, not just compliance artifacts.

€35M
Maximum Penalty
7%
Global Turnover Cap
15 Days
Incident Reporting
Aug 2026
High-Risk Deadline

Why the EU AI Act Creates New Legal Exposure

The EU AI Act represents a fundamental shift in how organizations must approach AI deployment. For General Counsel, three aspects create particularly significant legal exposure:

Behavioral Obligations, Not Just Data Rules

GDPR focused on how organizations handle data. The AI Act focuses on how AI systems behave and make decisions. This means legal liability now extends to algorithmic outputs, model accuracy, bias in automated decisions, and the effectiveness of human oversight mechanisms. These are areas where legal teams historically had limited visibility.

Expanded Definition of "Provider"

Under Article 25, organizations that substantially modify high-risk AI systems or put their name or trademark on them may become "providers"—assuming full compliance obligations, including conformity assessment. A company that integrates a third-party AI model into a high-risk use case (employment screening, credit decisions) may inherit provider-level liability regardless of who built the underlying model.

The AI Liability Directive

The proposed EU AI Liability Directive would create a rebuttable presumption of causation where an AI system causes harm and the defendant cannot demonstrate compliance. This would effectively shift the burden of proof to defendants. Organizations unable to produce evidence of proper risk management, testing, and oversight would face significant disadvantages in litigation.

Key GC Responsibilities Under the EU AI Act

Liability Assessment and Risk Allocation

General Counsel must map AI systems across the organization and classify them according to the Act’s risk taxonomy. For each high-risk system, liability must be clearly allocated among internal teams, vendors, and partners.

Contractual Obligations

Vendor agreements require immediate review. Contracts with AI providers must address conformity status, allocation of EU AI Act obligations between the parties, audit rights, and incident notification duties.

Customer terms must be updated to include appropriate AI disclosures, particularly for systems requiring transparency under Article 50 (chatbots, emotion recognition, deepfakes).

Regulatory Engagement Strategy

The AI Act establishes national competent authorities in each member state, coordinated by the EU AI Office. General Counsel should develop relationships with relevant authorities before enforcement actions arise, including by monitoring national implementing legislation and evaluating regulatory sandbox participation.

Evidence Preservation Requirements

Article 12 requires high-risk AI systems to automatically record events (logs) over their lifetime. General Counsel must ensure those logs are actually generated, retained for adequate periods, and preserved in a form that can be produced to regulators.
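As a concrete illustration, an event log of this kind can be as simple as append-only, timestamped JSON lines. This is a minimal sketch; the field names are assumptions for illustration, not terms mandated by the Act.

```python
import json
import datetime

def make_log_entry(system_id: str, event: str, actor: str, detail: dict) -> str:
    """Serialize one audit-log event as a timestamped JSON line.

    Field names are illustrative, not prescribed by Article 12.
    """
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,  # which AI system produced the event
        "event": event,          # e.g. "inference", "human_override"
        "actor": actor,          # user or service account involved
        "detail": detail,        # event-specific payload
    }
    return json.dumps(entry, sort_keys=True)

# Example: recording a human override of a model decision.
line = make_log_entry(
    "cv-screening-v2", "human_override", "reviewer-17",
    {"original_decision": "reject", "final_decision": "advance"},
)
```

Writing entries as structured, timestamped records from the outset is what makes them usable later for monitoring, investigation, and regulatory production.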

Documentation and Disclosure Obligations

Article 11 mandates comprehensive technical documentation before high-risk systems enter the market. Article 13 requires transparency for users. General Counsel oversight ensures documentation is legally sound and disclosures don’t create unintended liability exposure.

Questions GCs Should Be Asking Their Organizations

AI Inventory and Classification

"Do we have a complete inventory of AI systems, and has each been classified under the EU AI Act risk categories? Who made those classification decisions, and is the rationale documented?"

Vendor Compliance

"For AI systems we procure, have we verified our vendors’ conformity status? Do our contracts clearly allocate EU AI Act obligations, and do we have audit rights?"

Evidence Generation

"If a regulator requested evidence that our risk management system operates effectively, what would we produce? Is that evidence timestamped and tamper-evident, or would we be reconstructing from scattered logs?"

Human Oversight

"Can we demonstrate that humans actually review and can override AI decisions? Is there an audit trail of human interventions, or just a policy saying oversight exists?"

Incident Response

"Do we have a protocol for AI-related incidents that meets the 15-day serious incident reporting requirement? Has legal been involved in defining what constitutes a reportable incident?"

Board Awareness

"Has the board been briefed on AI-related legal exposure? Are AI risks included in enterprise risk management, and is the board receiving regular updates?"

Red Flags Indicating Legal/Compliance Gaps

No AI System Inventory

If the organization cannot produce a comprehensive list of AI systems in use, classification and compliance are impossible.

"We’re Just Using Vendor Tools"

Belief that vendor-provided AI absolves organizational liability. Deployers have independent obligations; integration into high-risk use cases may trigger provider-level duties.

Documentation Exists Only as Policies

Policies describing what should happen without evidence of what actually happens. Regulators will demand operational proof.

No AI-Specific Contract Language

Vendor and customer contracts that don’t address AI-specific obligations, liability allocation, or compliance representations.

Human Oversight is Theoretical

Claims of human-in-the-loop processes without audit trails showing that humans actually review decisions, or without documentation of override capabilities.

IT Owns AI Governance Alone

AI governance treated as a technical function without legal, compliance, and business unit involvement. This siloed approach misses liability implications.

Personal Liability Considerations

While the EU AI Act primarily imposes organizational penalties, General Counsel should be aware of pathways to personal liability:

Member State Implementation

Individual member states may implement the AI Act in ways that create personal liability for directors or officers. Monitor national implementing legislation in key jurisdictions where your organization operates.

Civil Litigation

When AI systems cause harm, affected parties may pursue civil claims against executives for negligent oversight. The proposed AI Liability Directive’s burden-shifting would make such claims easier to sustain.

Fiduciary Duties

Directors have fiduciary obligations that include overseeing material risks. AI presents board-level risks; failure to ensure adequate governance may breach fiduciary duties.

Regulatory Action Against Individuals

In egregious cases—particularly involving prohibited AI practices or willful non-compliance—regulators may pursue action against responsible individuals, especially where they can demonstrate knowledge of violations.

Affirmative Defense Requirements (Colorado AI Act Intersection)

The Colorado AI Act (SB 24-205), effective February 2026, provides a notable affirmative defense framework relevant to US organizations also subject to EU AI Act obligations.

The Colorado "Cure" Defense

Colorado provides developers and deployers an affirmative defense if they:

  1. Discover the violation through reasonable monitoring
  2. Cure the violation within a reasonable timeframe
  3. Notify affected consumers where required
  4. Document the discovery and remediation

This defense is only available to organizations with functioning compliance programs. Continuous monitoring that generates contemporaneous evidence is essential—you cannot invoke a "cure" defense if you lack systems to discover violations in the first place.
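The four elements above can be modeled as a simple eligibility check. This is a sketch under stated assumptions: the field names and the 30-day cure window are illustrative defaults, not thresholds taken from the statute.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ViolationRecord:
    discovered_via_monitoring: bool  # element 1: found through reasonable monitoring
    discovered_on: date
    cured_on: Optional[date]         # element 2: remediation date, if any
    consumers_notified: bool         # element 3: notice given where required
    remediation_documented: bool     # element 4: discovery and fix documented

def cure_defense_available(rec: ViolationRecord, max_cure_days: int = 30) -> bool:
    """Return True only when all four elements are satisfied.

    max_cure_days is an assumed proxy for a "reasonable timeframe".
    """
    if not rec.discovered_via_monitoring:
        return False
    if rec.cured_on is None:
        return False
    if (rec.cured_on - rec.discovered_on).days > max_cure_days:
        return False
    return rec.consumers_notified and rec.remediation_documented

rec = ViolationRecord(True, date(2026, 3, 1), date(2026, 3, 10), True, True)
ok = cure_defense_available(rec)  # all four elements present
```

The point of the model is the conjunction: missing any single element (no monitoring, no cure, no notice, no documentation) defeats the defense.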

Implications for EU AI Act Compliance

Organizations operating under both regimes should align their compliance infrastructure. The EU AI Act’s logging requirements (Article 12) and post-market monitoring obligations (Article 72) create the operational foundation needed to invoke Colorado’s affirmative defense.

Evidence Standards for Regulatory Defense

When regulators investigate or litigation arises, evidence quality determines outcomes. General Counsel must understand what constitutes defensible evidence under AI regulations:

Contemporaneous Documentation

Evidence generated in real-time carries far more weight than after-the-fact reconstruction. Timestamped logs showing controls executed at specific moments defeat arguments that compliance was merely aspirational.

Tamper-Evident Records

Regulators are sophisticated enough to question whether logs have been modified. Cryptographic attestation—evidence that hasn’t been and cannot be altered—provides the strongest foundation for defense.
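One minimal way to make records tamper-evident is a hash chain: each entry commits to the hash of the previous entry, so altering any historical record invalidates verification from that point on. The sketch below is a simplified illustration; production attestation systems also add digital signatures and trusted timestamps.

```python
import hashlib
import json

def append_entry(chain: list, payload: dict) -> None:
    """Append a payload whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev_hash, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edit to past entries breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev_hash, "payload": entry["payload"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

chain: list = []
append_entry(chain, {"event": "risk_review", "outcome": "pass"})
append_entry(chain, {"event": "bias_test", "outcome": "pass"})
assert verify_chain(chain)               # intact chain verifies
chain[0]["payload"]["outcome"] = "fail"  # tamper with history...
assert not verify_chain(chain)           # ...and verification fails
```

This is why after-the-fact edits to chained logs are detectable: the stored hash no longer matches the recomputed one.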

Mapping to Regulatory Requirements

Evidence must clearly correspond to specific regulatory obligations. General documentation about "AI governance" is less valuable than evidence specifically demonstrating Article 9 risk management, Article 10 data governance, or Article 14 human oversight.
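In practice, this correspondence can be maintained as a simple index from evidence artifacts to the articles they support, so counsel can answer "what proves Article X compliance?" directly. The artifact names below are hypothetical.

```python
# Illustrative mapping of evidence artifacts to EU AI Act articles.
# Artifact names are hypothetical placeholders.
EVIDENCE_MAP = {
    "risk_register_export":  ["Art. 9"],             # risk management system
    "training_data_lineage": ["Art. 10"],            # data governance
    "override_audit_trail":  ["Art. 14"],            # human oversight
    "inference_logs":        ["Art. 12", "Art. 19"], # record-keeping / retention
}

def evidence_for(article: str) -> list:
    """List artifacts that demonstrate compliance with one article."""
    return sorted(a for a, arts in EVIDENCE_MAP.items() if article in arts)
```

An empty result for any in-scope article is itself a finding: an obligation with no evidence artifact behind it.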

The "Proof Gap" Problem

Most organizations suffer from a "proof gap"—the difference between controls that exist on paper and verifiable evidence that controls actually operate. Closing this gap is essential for regulatory defense. Policy documents prove intent; operational evidence proves execution.

Working with Other Stakeholders

Chief Information Security Officer (CISO)

Coordinate on: logging infrastructure, data security for AI systems, cybersecurity requirements under Article 15, incident detection and response, vulnerability management for AI-specific threats.

Chief Compliance Officer (CCO)

Coordinate on: compliance program design, regulatory mapping, training and awareness, audit schedules, remediation tracking, policy development.

Business Unit Leaders

Coordinate on: AI use case identification, risk classification input, operational implementation of controls, human oversight execution, incident escalation protocols.

Data Protection Officer (DPO)

Coordinate on: GDPR/AI Act intersection, data governance under Article 10, privacy impact assessments, cross-border data considerations, subject access requests involving AI.

Board Reporting on AI Risk

General Counsel should ensure the board receives regular, substantive reporting on AI-related legal exposure:

Recommended Board Reporting Elements

  • AI System Inventory: Number and classification of AI systems, changes since last report
  • Compliance Status: Progress against regulatory deadlines, gap analysis, remediation timelines
  • Incident Summary: AI-related incidents, near-misses, regulatory inquiries
  • Regulatory Developments: New guidance, enforcement actions in the industry, legislative updates
  • Risk Quantification: Estimated exposure, insurance coverage, liability reserves
  • Resource Needs: Budget, personnel, and technology requirements for compliance


How GLACIS Provides Defensible Evidence

GLACIS addresses the core challenge General Counsel face: producing evidence that AI controls actually operate, not just documentation that they should.

Cryptographic Attestation

GLACIS generates tamper-evident proof that controls executed at specific moments. Attestations cannot be backdated or modified—providing the evidentiary foundation regulators and courts require.

Regulatory Mapping

Evidence is automatically mapped to EU AI Act articles, NIST AI RMF functions, and ISO 42001 controls—enabling instant demonstration of compliance against specific requirements.

Continuous Monitoring

Rather than point-in-time audits, GLACIS provides ongoing verification that controls remain effective—essential for Colorado’s affirmative defense and EU post-market monitoring.

Audit-Ready Reports

Board reports, regulatory submissions, and litigation support packages generated on demand—reducing the burden on legal teams during high-pressure situations.

Frequently Asked Questions

What are the key dates General Counsel should track?

February 2, 2025: Prohibited AI practices banned. August 2, 2025: GPAI model obligations apply. August 2, 2026: High-risk AI system requirements in full effect. August 2, 2027: Extended deadline for certain medical AI devices. The Colorado AI Act takes effect February 1, 2026.

How should we handle AI systems from US-based vendors?

Vendor location doesn’t determine compliance obligations—deployment location and affected individuals do. If you deploy a US vendor’s AI system in the EU or it affects EU residents, the EU AI Act applies. Your vendor contracts must address EU-specific obligations, and you should verify vendors can support your compliance needs (documentation, conformity evidence, incident notification).

What’s the relationship between GDPR and AI Act enforcement?

The regulations are complementary and enforced by overlapping (but not identical) authorities. AI systems processing personal data must comply with both. National competent authorities for the AI Act will coordinate with data protection authorities. Non-compliance can trigger penalties under both frameworks—potentially doubling exposure for a single system.

Should we engage with regulatory sandboxes?

Regulatory sandboxes (Articles 57-62) offer valuable benefits: regulatory guidance during development, potential for modified obligations, and relationship-building with authorities. For organizations developing novel AI applications, sandbox participation can reduce compliance uncertainty. However, sandbox benefits don’t exempt you from core obligations—and sandbox interactions create records that may be discoverable.

How do we handle existing AI systems that may not comply?

Conduct an immediate gap analysis. For systems that cannot achieve compliance by applicable deadlines, options include: (1) modification to meet requirements, (2) re-classification to a lower risk category if legitimately appropriate, (3) geographic restriction to exclude EU markets, or (4) decommissioning. Document the analysis and decision rationale—regulators will scrutinize "re-classification" decisions carefully.

What privilege considerations apply to AI compliance work?

Structure AI audits and assessments carefully to preserve privilege where appropriate. Legal-directed compliance assessments may qualify for attorney-client privilege or work product protection. However, operational compliance documentation (logs, attestations, routine monitoring) generally won’t be privileged. Consult with outside counsel on privilege strategies before commencing major AI compliance initiatives.

References

  1. European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, July 12, 2024. EUR-Lex 32024R1689
  2. European Commission. "Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive)." September 28, 2022. europa.eu
  3. Colorado General Assembly. "SB24-205: Consumer Protections for Artificial Intelligence." 2024. leg.colorado.gov
  4. European Commission. "Questions and Answers: Artificial Intelligence Act." March 13, 2024. europa.eu
  5. ISO/IEC. "ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management System." December 2023. iso.org

Close the Proof Gap Before Regulators Ask

GLACIS generates the cryptographic evidence General Counsel need—proof that AI controls actually execute, mapped to EU AI Act articles and ready for regulatory defense.

