CISO Guide • December 2025

EU AI Act for CISOs

Security requirements, technical controls, logging infrastructure, and personal liability considerations for Chief Information Security Officers.

Joe Braidwood
CEO, GLACIS
12 min read • 2,400+ words

Executive Summary

The EU AI Act creates significant new responsibilities for CISOs. Unlike traditional cybersecurity regulations that focus on data protection, the AI Act requires demonstrable evidence that AI controls actually execute—not just that policies exist. Articles 9, 12, 14, and 15 mandate technical control implementation, comprehensive logging infrastructure, human oversight mechanisms, and AI-specific cybersecurity protections.

For high-risk AI systems, the August 2, 2026 compliance deadline is now eight months away. CISOs must establish logging and audit trail infrastructure per Article 12, implement security testing programs including adversarial testing, and build incident response capabilities that meet the 15-day serious incident reporting requirement under Article 73.

Key CISO insight: The AI Act shifts the burden of proof. Regulators can demand access to logs, source code, and training data under Article 74. Organizations that can’t demonstrate control effectiveness—through cryptographic evidence, not just documentation—face penalties up to €15 million or 3% of global revenue for high-risk system non-compliance.

  • Aug 2026: High-Risk Deadline
  • 15 Days: Incident Reporting
  • Article 12: Logging Requirements
  • €15M: High-Risk Penalty

Why the EU AI Act Matters to CISOs

The EU AI Act isn’t just another compliance checkbox for your legal team. It fundamentally changes how organizations must approach AI security—and places CISOs at the center of compliance.

Unlike GDPR, which focuses on data protection policies and procedures, the AI Act demands demonstrable technical controls. Article 15 explicitly requires "appropriate levels of accuracy, robustness and cybersecurity" for high-risk AI systems, including resilience against AI-specific attacks like data poisoning and model evasion. This is security engineering, not policy writing.

The Shift from Documentation to Evidence

Traditional compliance follows a familiar pattern: write policies, conduct annual audits, produce documentation. The AI Act breaks this model. Article 74 gives market surveillance authorities power to demand:

  • Full access to logs and automatically generated records
  • Training data and technical documentation used to develop the system
  • Source code, upon reasoned request, where needed to assess conformity

This means CISOs need infrastructure that produces evidence on demand—tamper-evident logs, cryptographic attestations, and audit trails that prove controls actually work.
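As a concrete illustration, the sketch below shows one common way to make logs tamper-evident: chaining each entry to the hash of its predecessor, so any retroactive edit invalidates every later entry. The AuditLog class and its fields are illustrative assumptions, not a prescribed Article 12 schema.

```python
"""Minimal sketch of a tamper-evident (hash-chained) audit log.

Illustrative only: the class and entry fields are hypothetical,
not a mandated EU AI Act schema.
"""
import hashlib
import json
import time


class AuditLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value anchoring the chain

    def append(self, event: dict) -> dict:
        entry = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self.last_hash,
        }
        # Each entry commits to its predecessor, so editing any record
        # breaks every subsequent hash in the chain.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        self.last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the chain; False means the log was altered."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True


log = AuditLog()
log.append({"model": "risk-scorer-v2", "decision": "flag", "input_id": "req-123"})
assert log.verify()
```

Verification can then run on a schedule or on demand, giving auditors a fast integrity check before they read a single entry.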

Key CISO Responsibilities Under the EU AI Act

The regulation assigns several technical domains squarely within CISO purview. Understanding these responsibilities is essential for resource planning and stakeholder communication.

1. Technical Control Implementation (Articles 9, 14, 15)

Article 9: Risk Management System

CISOs must implement continuous, iterative risk management including:

  • Identification and analysis of known and foreseeable security risks
  • Estimation and evaluation of risks that may emerge during deployment
  • Evaluation of risks based on post-market monitoring data
  • Adoption and documentation of suitable risk mitigation measures

Article 14: Human Oversight

Security implications of human oversight requirements:

  • Access controls ensuring authorized personnel can override AI decisions
  • Audit trails of human interventions and override decisions
  • Authentication mechanisms for human oversight functions
  • Secure channels for escalation and intervention
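As a sketch of what an override audit trail might capture, the snippet below records who intervened, on which system, and why. The record fields are illustrative assumptions rather than anything Article 14 prescribes verbatim.

```python
"""Illustrative record of a human override event (fields are assumptions)."""
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class OverrideRecord:
    operator_id: str          # authenticated identity of the reviewer
    system_id: str            # AI system whose output was overridden
    original_decision: str    # what the model produced
    final_decision: str       # what the human decided instead
    justification: str        # free-text reason, required for audit
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = OverrideRecord(
    operator_id="analyst-042",
    system_id="credit-risk-scorer",
    original_decision="deny",
    final_decision="approve",
    justification="Income documents verified manually.",
)
# In practice each record would be appended to tamper-evident storage,
# such as the hash-chained log sketched earlier.
print(asdict(record))
```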

Article 15: Cybersecurity Requirements

AI-specific security controls beyond traditional IT security:

  • Data poisoning protection: Integrity verification for training data pipelines
  • Model evasion defense: Robustness testing against adversarial inputs
  • Model extraction prevention: API rate limiting and query monitoring
  • Model weight protection: Encryption and access controls for model files
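To make the data poisoning item concrete, here is a minimal sketch of training data integrity verification: hash every file at ingestion, then re-verify against the manifest before each training run. The manifest format and paths are illustrative assumptions.

```python
"""Sketch: verify training data against a hash manifest before training.

The manifest format and file paths are illustrative assumptions.
"""
import hashlib
import json
from pathlib import Path


def sha256_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a hash for every file at ingestion time."""
    manifest = {
        str(p): sha256_file(p)
        for p in sorted(data_dir.rglob("*")) if p.is_file()
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_manifest(manifest_path: Path) -> list[str]:
    """Return paths whose contents changed since ingestion."""
    manifest = json.loads(manifest_path.read_text())
    return [
        path for path, digest in manifest.items()
        if not Path(path).is_file() or sha256_file(Path(path)) != digest
    ]


# Before each training run: refuse to train on tampered data.
# tampered = verify_manifest(Path("training_manifest.json"))
# if tampered:
#     raise RuntimeError(f"Training data integrity check failed: {tampered}")
```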

2. Logging and Audit Trail Infrastructure (Article 12)

Article 12 establishes requirements that directly impact CISO infrastructure decisions. This is perhaps the most operationally demanding requirement for security teams.

Article 12 Requirements: What CISOs Must Implement

  • Automatic logging capabilities ensuring traceability throughout the AI system lifecycle
  • Logging level appropriate to intended purpose—more critical systems require more granular logging
  • Records including: the period of each use, the reference database checked against, the input data, and the persons involved in verifying results
  • Security measures: Logs must be protected against tampering and unauthorized access
  • Retention periods: Appropriate to the system’s intended purpose, with a minimum of six months under Article 19
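
As an illustrative mapping of the record fields above into a concrete schema, consider the sketch below; the field names are assumptions, not a normative Article 12 format.

```python
"""Sketch of a log record covering the Article 12 fields listed above.

Field names are an illustrative mapping, not a normative schema.
"""
from dataclasses import dataclass
from typing import Optional


@dataclass
class Article12LogRecord:
    system_id: str                     # which high-risk AI system
    use_start: str                     # start of the period of use (ISO 8601)
    use_end: str                       # end of the period of use
    reference_database: Optional[str]  # database input data was checked against
    input_reference: str               # pointer to the input data (not raw PII)
    matched: Optional[bool]            # whether the search led to a match
    verifier_ids: list[str]            # persons involved in verifying results
    # Each record would be appended to tamper-evident storage,
    # e.g. the hash-chained log sketched earlier.
```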

3. Security Testing Requirements

The AI Act mandates testing that goes beyond traditional penetration testing. CISOs must establish programs covering:

  • Adversarial robustness testing against model evasion attacks
  • Data poisoning resistance testing for training data pipelines
  • Model extraction and API abuse testing
  • Robustness validation for high-risk systems before and after deployment
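
To make the first item concrete, here is a toy sketch of a fast gradient sign method (FGSM) probe against a simple logistic model. Real programs would use dedicated adversarial testing tooling; the model and parameters here are purely illustrative.

```python
"""Toy sketch: FGSM perturbation of a logistic model's input.

Everything here is illustrative; production testing would use
dedicated adversarial robustness tooling.
"""
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1          # toy model weights
x = rng.normal(size=4)                  # a benign input


def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # P(class = 1)


# For a logistic model, the loss gradient w.r.t. the input is
# (p - y) * w; perturb along its sign with budget epsilon.
y, eps = 1.0, 0.25
grad = (predict(x) - y) * w
x_adv = x + eps * np.sign(grad)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```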

4. Incident Response Obligations

Article 73 creates new incident response requirements specific to AI systems. CISOs must integrate these with existing security incident management:

Serious Incident Reporting (15-Day Deadline)

A "serious incident" under Article 73 includes any incident leading to:

  • Death or serious damage to health
  • Serious and irreversible disruption of critical infrastructure
  • Serious infringement of fundamental rights
  • Serious damage to property or the environment

CISOs must establish classification criteria and reporting procedures before the August 2026 deadline.
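To illustrate operationalizing that 15-day clock, the sketch below encodes the incident categories above and computes a reporting deadline from the moment of awareness; the enum and helper function are hypothetical.

```python
"""Sketch: classify an AI incident and compute the Article 73 deadline.

Category names mirror the list above; the helper is hypothetical.
"""
from datetime import datetime, timedelta, timezone
from enum import Enum, auto


class SeriousIncident(Enum):
    DEATH_OR_SERIOUS_HEALTH_DAMAGE = auto()
    CRITICAL_INFRASTRUCTURE_DISRUPTION = auto()
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = auto()
    PROPERTY_OR_ENVIRONMENTAL_DAMAGE = auto()


def reporting_deadline(awareness_time: datetime) -> datetime:
    """Article 73 requires reporting within 15 days of awareness."""
    return awareness_time + timedelta(days=15)


aware = datetime(2026, 9, 1, 9, 0, tzinfo=timezone.utc)
print(f"Report to the national competent authority by {reporting_deadline(aware):%Y-%m-%d}")
```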

Questions CISOs Should Be Asking

Before regulators ask these questions, CISOs should be asking them internally. Use this framework to assess your organization’s AI compliance readiness:

AI System Inventory

  • Do we have a complete inventory of all AI systems in production?
  • Which systems fall under high-risk classification per Annex III?
  • Are there shadow AI deployments outside IT governance?

Logging & Evidence

  • Can we produce logs demonstrating AI system behavior on demand?
  • Are our logs tamper-evident and protected against unauthorized access?
  • Do we log human oversight interventions and override decisions?

Security Controls

  • Have we tested our AI systems against adversarial attacks?
  • Do we have controls preventing data poisoning in training pipelines?
  • Are model weights and training data protected with appropriate access controls?

Incident Response

  • Do we have AI-specific incident classification criteria?
  • Can we meet the 15-day serious incident reporting deadline?
  • Have we established communication channels with national competent authorities?

Red Flags Indicating Compliance Gaps

These warning signs suggest your organization may not be ready for the August 2026 deadline:

No AI system inventory exists

If you don’t know what AI systems you have, you can’t classify them or implement controls. Shadow AI is a critical blind spot.

Logging is application-level only

Article 12 requires AI-specific logging including inputs, outputs, and decision traces—not just HTTP request logs.

Security testing excludes AI-specific threats

Penetration tests that don’t cover adversarial ML, data poisoning, or model extraction leave critical gaps.

AI governance is Legal’s responsibility alone

The AI Act requires technical controls that Legal can’t implement. Without CISO involvement, compliance is policy-only.

No budget allocated for AI compliance infrastructure

Article 12 logging infrastructure and Article 15 security controls require investment. Unfunded mandates don’t get implemented.

Personal Liability Considerations for CISOs

While the EU AI Act primarily targets organizations with fines up to €35 million or 7% of global revenue, CISOs face personal liability exposure through several mechanisms:

Director and Officer Liability

In many EU member states, executives can be held personally liable for regulatory failures where they failed to implement adequate controls or ignored known risks. The AI Act’s explicit technical requirements (Articles 9, 12, 14, 15) create a clear standard of care.

Criminal Liability

Some member states may implement criminal penalties for gross negligence in AI system oversight, particularly where serious incidents cause death or serious harm. CISOs should understand their jurisdiction’s implementation of the AI Act.

Professional Negligence

Failure to implement reasonable security controls for AI systems could expose CISOs to professional negligence claims, particularly if they were aware of risks and failed to act.

CISO Liability Mitigation Strategies

  • Document recommendations: Create written records of security recommendations, especially when budget or timeline constraints prevent implementation
  • Ensure board reporting: Regular reports on AI risk posture and compliance status create evidence of executive awareness
  • Review D&O insurance: Confirm coverage includes AI-related regulatory penalties and doesn’t exclude "regulatory compliance failures"
  • Establish governance structure: Formal AI governance committee with documented decision authority

Working with Other Stakeholders

EU AI Act compliance requires coordination across multiple functions. CISOs must establish effective working relationships with:

General Counsel (GC)

  • AI system classification and risk determination
  • Contract requirements for AI vendors
  • Incident reporting protocols and legal privilege
  • Authority information request responses

Chief Compliance Officer (CCO)

  • Quality management system integration
  • Conformity assessment preparation
  • Post-market monitoring coordination
  • Regulatory relationship management

Chief Technology Officer (CTO)

  • Technical documentation requirements
  • Logging infrastructure implementation
  • AI system architecture and security design
  • Adversarial testing program development

Board of Directors

  • AI risk appetite and tolerance definitions
  • Compliance investment authorization
  • Quarterly compliance status reporting
  • Material risk escalation decisions

Board and Executive Reporting Requirements

CISOs should establish regular AI compliance reporting to the board. Recommended metrics and reporting elements:

Quarterly Board Report: AI Compliance Status

1. AI System Inventory Status: total systems, classification by risk level, new systems added, systems retired

2. High-Risk System Compliance Progress: percentage meeting Article 9-15 requirements, gap closure timeline, conformity assessment status

3. Technical Control Metrics: logging coverage percentage, security testing completion, human oversight audit results

4. Incident Summary: AI-related incidents, near-misses, serious incident reports filed (if any)

5. Regulatory Engagement: authority requests received, inspections, guidance documents reviewed

6. Material Risks and Recommendations: identified compliance gaps, resource requirements, timeline risks

Implementation Checklist for CISOs

Use this checklist to track your organization’s progress toward EU AI Act compliance:

CISO Compliance Checklist

EU AI Act Technical Requirements

Phase 1: Discovery (Month 1-2)

  • Complete AI system inventory across all business units
  • Classify systems per Annex III high-risk categories
  • Identify shadow AI and unsanctioned deployments
  • Assess current logging capabilities against Article 12
  • Document existing security controls for AI systems

Phase 2: Infrastructure (Month 2-5)

  • Implement Article 12-compliant logging infrastructure
  • Deploy tamper-evident log storage and retention
  • Establish human oversight audit trail mechanisms
  • Implement AI-specific security controls per Article 15
  • Deploy training data integrity verification

Phase 3: Testing & Validation (Month 4-7)

  • Establish adversarial testing program
  • Conduct robustness testing for high-risk systems
  • Validate logging completeness and accuracy
  • Test incident response procedures
  • Document testing results per Annex IV

Phase 4: Governance (Month 5-8)

  • Establish AI incident classification criteria
  • Create serious incident reporting procedures
  • Implement board reporting framework
  • Establish authority communication channels
  • Document CISO recommendations and board responses

Timeline note: This 8-month timeline assumes dedicated resources and parallel workstreams. Organizations starting after April 2026 face significant deadline risk.

How GLACIS Helps CISOs Meet Technical Requirements

GLACIS provides the evidence infrastructure CISOs need to demonstrate EU AI Act compliance:

Article 12 Logging Infrastructure

Tamper-evident logging that captures AI system inputs, outputs, and decision traces. Cryptographic verification ensures logs haven’t been modified—meeting the "appropriate security measures" requirement.

Continuous Control Attestation

Automated verification that security controls execute correctly—not just that policies exist. Generate cryptographic evidence on demand for regulators, auditors, and enterprise customers.

Board-Ready Compliance Reporting

Pre-built dashboards and reports mapped to EU AI Act articles. Demonstrate compliance progress to executives and board with metrics that matter.

Evidence Pack Sprint

Generate audit-ready compliance evidence in days, not months. Includes technical documentation, control attestations, and risk assessment artifacts mapped to Articles 9-15.

Frequently Asked Questions

What are the CISO’s specific responsibilities under the EU AI Act?

CISOs are responsible for implementing technical controls under Articles 9, 14, and 15 (risk management, human oversight, cybersecurity), establishing logging and audit trail infrastructure per Article 12, conducting security testing including adversarial testing for AI systems, managing incident response and reporting obligations under Article 73, and providing evidence of control effectiveness to regulators and auditors.

What logging requirements does Article 12 impose?

Article 12 requires high-risk AI systems to have automatic logging capabilities that ensure traceability throughout the system lifecycle. Logs must record the period of use, reference databases, input data, and persons involved in verification. Logs must be retained for periods appropriate to the system’s purpose, protected by appropriate security measures, and be tamper-evident. Standard application logs are insufficient—AI-specific decision traces are required.

Can CISOs face personal liability under the EU AI Act?

While the EU AI Act primarily targets organizations with fines up to €35 million or 7% of global revenue, personal liability can arise through director and officer liability under national laws, criminal liability in certain member states for gross negligence, professional negligence claims, and D&O insurance exclusions for regulatory non-compliance. CISOs should document their recommendations and ensure adequate board reporting to mitigate personal exposure.

How does the EU AI Act define cybersecurity requirements?

Article 15 requires high-risk AI systems to achieve appropriate levels of cybersecurity, including resilience against attempts to alter use, behavior, or performance through exploitation of vulnerabilities. This specifically includes technical solutions to address AI-specific vulnerabilities such as data poisoning, model evasion, adversarial attacks, and model extraction. Systems must also protect against unauthorized access to training data and model weights.
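As one illustration of the model extraction controls mentioned above, a per-client token bucket can throttle the high-volume query floods typical of extraction attempts; the class and parameters below are illustrative assumptions, not a mandated control.

```python
"""Sketch: per-client token bucket to throttle extraction-style query floods.

Rate and capacity values are illustrative assumptions.
"""
import time
from collections import defaultdict


class TokenBucket:
    def __init__(self, rate: float = 5.0, capacity: float = 20.0):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # burst allowance per client
        self.tokens = defaultdict(lambda: capacity)
        self.updated = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.updated[client_id]
        self.updated[client_id] = now
        self.tokens[client_id] = min(
            self.capacity, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False  # deny, and log the sustained high-volume querying


limiter = TokenBucket()
if not limiter.allow("client-7"):
    print("429: query budget exceeded")
```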

What is the serious incident reporting deadline?

Article 73 requires reporting serious incidents to national competent authorities within 15 days. A serious incident is any incident leading to death, serious health damage, serious and irreversible disruption of critical infrastructure, serious infringement of fundamental rights, or serious property or environmental damage. CISOs must establish incident classification procedures before the August 2026 deadline and maintain communication channels with authorities.

How should CISOs coordinate with General Counsel?

CISOs should work with General Counsel on AI system classification and risk determination, contract requirements for AI vendors and deployers, incident reporting protocols and legal privilege considerations, documentation standards for regulatory defensibility, and coordinated responses to authority information requests under Article 74. Establish regular touchpoints and joint governance structures for effective collaboration.

EU AI Act Compliance Evidence in Days

GLACIS generates cryptographic evidence that your AI controls execute correctly—mapped to EU AI Act Articles 9-15, ready for regulators and auditors. Get ahead of the August 2026 deadline.

Start Your Free Assessment
