Framework Crosswalk

EU AI Act vs HIPAA

The definitive crosswalk for healthcare AI companies operating in both US and EU markets. Compare requirements, map controls, and build a unified compliance strategy.

14 min read · 3,200+ words
Joe Braidwood
CEO, GLACIS

Executive Summary

Healthcare AI companies expanding from the US to Europe—or vice versa—face a complex regulatory landscape. HIPAA (the Health Insurance Portability and Accountability Act) governs the privacy and security of Protected Health Information in the United States. The EU AI Act (Regulation 2024/1689) governs AI system safety, transparency, and fundamental rights across all 27 EU member states.

These frameworks are complementary, not substitutive. HIPAA compliance does not satisfy EU AI Act requirements; EU AI Act compliance does not satisfy HIPAA requirements. Organizations operating in both jurisdictions must comply with both—and the good news is that significant synergies exist when building a unified governance framework.

Key finding: Organizations that approach dual-jurisdiction compliance strategically can leverage overlapping requirements for documentation, risk management, and security controls while ensuring they address the unique requirements of each framework. This crosswalk provides the detailed mapping needed to build that strategy.

Why This Crosswalk Matters Now: Sharp HealthCare (2025)

In November 2025, a proposed class action was filed against Sharp HealthCare. The lawsuit alleges their ambient AI scribe recorded an estimated 100,000+ patients without proper consent and that false consent statements appeared in medical records—citing violations of California privacy law and HIPAA-adjacent consent requirements.[1]

Organizations operating healthcare AI in both US and EU markets face compounding liability. HIPAA compliance alone won't satisfy EU AI Act requirements—and neither framework alone provides the evidence you'll need when plaintiffs' attorneys come asking.

[Timeline graphic: HIPAA enacted 1996 (PHI protection) · EU AI Act enacted 2024 (AI safety)]


The Relationship Between HIPAA and EU AI Act

Understanding how HIPAA and the EU AI Act relate requires recognizing their fundamentally different origins and objectives. These frameworks emerged from different regulatory traditions to address different risks—yet both apply to healthcare AI systems operating across the Atlantic.

HIPAA: Protecting Health Information

HIPAA, enacted in 1996 and substantially updated through the HITECH Act (2009), focuses on protecting the privacy and security of individually identifiable health information. Its core concern is preventing unauthorized access, use, and disclosure of Protected Health Information (PHI). HIPAA applies to “covered entities” (healthcare providers, health plans, clearinghouses) and their “business associates” who handle PHI on their behalf.

For AI systems, HIPAA asks: Is PHI adequately protected from unauthorized access, modification, or disclosure?

EU AI Act: Ensuring AI Safety and Rights

The EU AI Act, enacted in 2024, focuses on ensuring AI systems are safe, respect fundamental rights, and operate transparently. Its core concern is preventing AI systems from causing harm to health, safety, or fundamental rights. The AI Act applies to providers and deployers of AI systems based on the risk level of the AI application, regardless of what data the AI processes.

For AI systems, the EU AI Act asks: Is this AI system designed and operated to prevent harm to individuals and society?

Complementary, Not Overlapping

These frameworks operate in different dimensions.

A healthcare AI system processing PHI for EU patients must comply with both frameworks. HIPAA governs how the system handles patient data; the EU AI Act governs how the AI system behaves and is governed. Neither substitutes for the other.

Framework Comparison Overview

HIPAA vs EU AI Act: Side-by-Side Comparison

Dimension | HIPAA | EU AI Act
Primary Focus | Privacy and security of health information | Safety, transparency, and rights of AI systems
Jurisdiction | United States | European Union (27 member states)
Enacted | 1996 (updated 2009, 2013) | 2024 (phased enforcement 2025-2027)
Scope Trigger | Handling of Protected Health Information (PHI) | Provision or deployment of AI systems in the EU
Risk Classification | No formal tiers; all PHI must be protected | Prohibited, High-Risk, Limited Risk, Minimal Risk
Enforcement Body | HHS Office for Civil Rights (OCR) | National competent authorities + EU AI Office
Maximum Penalty | $2.1M/year per violation category | €35M or 7% of global turnover
Conformity Assessment | Self-attestation (no third-party certification) | Internal control or notified body (depends on category)
Documentation Retention | 6 years from creation/last effective date | 10 years after AI system placed on market
Breach Notification | 60 days to individuals; annual to HHS | 15 days for serious incidents (Article 73)

Key Differences

While both frameworks aim to protect individuals, their approaches differ substantially in several critical areas.

1. Focus: Data vs. System

HIPAA: Data-Centric

HIPAA’s requirements center on PHI—the 18 identifiers that make health information individually identifiable. The same AI system processing de-identified data faces no HIPAA requirements, while processing PHI triggers full compliance obligations. The technology is agnostic; the data determines applicability.

EU AI Act: System-Centric

The EU AI Act’s requirements center on the AI system itself—its intended purpose, deployment context, and potential for harm. An AI diagnostic tool is high-risk regardless of whether it processes personal data, anonymous data, or synthetic data. The system’s function determines applicability.

2. Sector Specificity vs. Technology Specificity

HIPAA is sector-specific: It applies only within healthcare contexts. An AI system doing sentiment analysis on customer service calls faces no HIPAA requirements—unless those calls involve healthcare providers and patient health information.

The EU AI Act is technology-specific: It applies whenever AI technology is used, across all sectors. The same AI diagnostic tool faces high-risk requirements whether deployed by a hospital, an insurance company, or a pharmaceutical research lab.

3. Enforcement Mechanisms

Enforcement Comparison

HIPAA

Enforced by HHS OCR through complaint investigations and compliance audits. Enforcement has historically focused on breach response and egregious violations. Civil monetary penalties are tiered by culpability level (unknowing, reasonable cause, willful neglect). State attorneys general can also enforce.

EU AI Act

Enforced by national market surveillance authorities in each EU member state, with coordination by the EU AI Office. Enforcement includes product market access (CE marking required), operational restrictions, and administrative fines. The AI Office oversees General Purpose AI model compliance directly.

4. Extraterritorial Application

HIPAA applies to covered entities and business associates in the United States, plus foreign entities that handle PHI on behalf of US covered entities through BAAs. The trigger is the business relationship with US healthcare entities.

The EU AI Act has explicit extraterritorial reach similar to GDPR. It applies to any provider placing AI systems on the EU market or putting them into service in the EU, regardless of where the provider is located. It also applies when AI output is used within the EU, even if both provider and deployer are outside the EU.

Detailed Control Mapping

Despite their different focuses, HIPAA and the EU AI Act share some underlying control requirements. Organizations can leverage these overlaps to build efficient, unified governance.

Privacy and Data Governance

Privacy Controls Mapping

Control Area | HIPAA Requirement | EU AI Act Requirement | Synergy
Data Minimization | Minimum Necessary (45 CFR 164.502(b)) | Article 10(3): training data limited to what is necessary | High
Data Quality | 45 CFR 164.530(c): reasonable accuracy | Article 10(2): training data must be relevant, representative, free of errors | High
Data Governance | Policies and procedures for PHI handling | Article 10: comprehensive data governance for training, validation, testing | Medium
Individual Rights | Access, amendment, accounting of disclosures | Article 86: right to explanation for high-risk AI decisions | Medium
Consent/Authorization | Authorization required for non-permitted uses | Transparency required; consent handled under GDPR | Low

Security Controls

Security Controls Mapping

Control Area | HIPAA Security Rule | EU AI Act Technical Requirements | Synergy
Access Controls | 164.312(a)(1): unique user IDs, emergency access | Article 9(4)(b): access controls for authorized personnel | High
Audit Logging | 164.312(b): activity logging for PHI access | Article 12: automatic logging of system operation | Medium
Integrity Controls | 164.312(c)(1): protect ePHI from alteration | Article 15: accuracy, robustness, cybersecurity | High
Transmission Security | 164.312(e)(1): encryption in transit | Article 15(4): cybersecurity appropriate to risks | High
Risk Assessment | 164.308(a)(1)(ii)(A): security risk analysis | Article 9: risk management system | Medium

Documentation Requirements

Documentation Requirements Mapping

Document Type | HIPAA Requirement | EU AI Act Requirement
Policies & Procedures | Required for all safeguards (164.530(i)) | Required as part of QMS (Article 17)
Risk Documentation | Risk analysis and management (164.308(a)(1)) | Risk management system documentation (Article 9)
Technical Documentation | System documentation for security controls | Comprehensive technical documentation per Annex IV
Training Records | Workforce training documentation (164.530(b)) | AI literacy training records (Article 4)
Vendor Agreements | Business Associate Agreements (164.308(b)) | Contracts with deployers/downstream providers (Article 25)
Retention Period | 6 years from creation or last effective date | 10 years after system placed on market

Audit Trail Requirements

Both frameworks require audit trails, but with different focuses:

HIPAA Audit Controls

  • Who accessed PHI (user identification)
  • What PHI was accessed (records, data elements)
  • When access occurred (timestamps)
  • What action was taken (read, write, delete)
  • 6-year retention requirement

EU AI Act Article 12 Logging

  • Duration of each use (start/stop times)
  • Reference database used for input data
  • Input data that triggered search/match
  • Natural persons involved in verification
  • Logs for market surveillance inspection

Key insight: Healthcare AI systems need both types of logging. HIPAA logging tracks data access for privacy protection; EU AI Act logging tracks system behavior for safety and accountability. These are complementary, not duplicative.

Gap Analysis: What HIPAA Doesn’t Cover (EU AI Act Requirements)

Organizations with mature HIPAA compliance programs will find significant gaps when applying EU AI Act requirements. These are requirements with no HIPAA equivalent.

EU AI Act Requirements Not Covered by HIPAA

1. Risk Classification and Prohibited AI

HIPAA has no concept of AI risk tiers or prohibited AI practices. The EU AI Act's Article 5 bans certain AI uses outright (social scoring, manipulative AI, untargeted facial recognition). An AI system that is fully HIPAA-compliant could still be entirely prohibited under the EU AI Act.

2. Conformity Assessment and CE Marking

HIPAA requires no third-party certification. The EU AI Act requires high-risk systems to undergo conformity assessment (internal control or notified body), maintain technical documentation per Annex IV, and affix CE marking before market placement.

3. Human Oversight Requirements

Article 14 requires high-risk AI systems to be designed for effective human oversight, including the ability to interrupt or override. HIPAA has no specific human oversight requirements for automated systems.

4. Transparency Disclosures

Article 13 requires instructions for use, intended purpose, capabilities and limitations, human oversight measures, and performance metrics. HIPAA’s Notice of Privacy Practices doesn’t cover AI-specific disclosures.

5. Post-Market Monitoring

Article 72 requires systematic post-market monitoring to collect and analyze data on AI system performance throughout its lifecycle. HIPAA has no equivalent ongoing monitoring requirement.

6. Fundamental Rights Impact Assessment

Article 27 requires deployers of high-risk AI to conduct fundamental rights impact assessments before deployment. HIPAA risk assessments focus on privacy and security, not broader rights impacts.

Gap Analysis: What EU AI Act Doesn’t Cover (HIPAA Requirements)

Conversely, organizations with mature EU AI Act compliance will find gaps when applying HIPAA requirements.

HIPAA Requirements Not Covered by EU AI Act

1. Protected Health Information Definition

HIPAA’s specific definition of PHI (18 identifiers) and de-identification standards (Safe Harbor, Expert Determination) have no EU AI Act equivalent. The AI Act defers to GDPR for personal data, which uses different criteria.

2. Business Associate Agreements

HIPAA’s BAA requirements specify contractual obligations for PHI handling by vendors. The EU AI Act has provider-deployer contracts (Article 25) but not specific data processing agreements—those fall under GDPR.

3. Administrative Safeguards

HIPAA’s detailed administrative safeguards (security official, workforce clearance procedures, information access management, security awareness training) have no direct EU AI Act equivalent.

4. Physical Safeguards

HIPAA’s physical safeguards (facility access controls, workstation security, device and media controls) are not addressed in the EU AI Act, which focuses on system behavior rather than infrastructure.

5. Individual Rights Specific to Health Data

HIPAA provides specific rights: access to medical records, amendment of records, accounting of disclosures, restrictions on use. The EU AI Act’s Article 86 right to explanation is narrower than HIPAA’s health-specific rights.

6. Breach Notification Specifics

HIPAA’s breach notification (60-day window, media notification for 500+ individuals, annual HHS reporting) differs from EU AI Act’s 15-day serious incident reporting. Both may apply independently.

Evidence Requirements Comparison

Both frameworks require organizations to maintain evidence of compliance. Understanding what evidence satisfies which framework is essential for efficient governance.

Evidence Requirements by Framework

Evidence Type | HIPAA | EU AI Act | Unified Approach
Risk Assessments | Security risk analysis documentation | Risk management system outputs (Article 9) | Integrated risk framework covering both data and AI risks
Policy Documentation | Written policies and procedures | QMS procedures per Article 17 | Single policy set with framework-specific sections
Access Logs | PHI access audit trails | System operation logs per Article 12 | Comprehensive logging covering both access and operations
Vendor Agreements | Signed BAAs | Provider-deployer contracts | Combined agreements addressing both requirements
Training Records | HIPAA training completion | AI literacy training per Article 4 | Combined training program with both modules
Incident Records | Breach investigation documentation | Serious incident reports per Article 73 | Unified incident management with dual reporting paths
Technical Documentation | System security documentation | Annex IV technical file | Comprehensive technical documentation satisfying both
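A practical way to operate a unified evidence program is to keep a machine-readable map from each evidence artifact to the framework citations it satisfies. The dictionary and citations below are a minimal sketch drawn from the mapping above; the function and key names are assumptions, not a GLACIS API.

```python
# Hypothetical evidence-to-framework map (citations drawn from the mapping above).
EVIDENCE_MAP = {
    "risk_assessment":   {"HIPAA": "164.308(a)(1)(ii)(A)", "EU_AI_ACT": "Article 9"},
    "access_logs":       {"HIPAA": "164.312(b)",           "EU_AI_ACT": "Article 12"},
    "training_records":  {"HIPAA": "164.530(b)",           "EU_AI_ACT": "Article 4"},
    "vendor_agreements": {"HIPAA": "164.308(b)",           "EU_AI_ACT": "Article 25"},
}

def frameworks_satisfied(evidence_type: str) -> list[str]:
    """Return the frameworks a given evidence artifact maps to (sorted for stable output)."""
    return sorted(EVIDENCE_MAP.get(evidence_type, {}))

print(frameworks_satisfied("access_logs"))  # ['EU_AI_ACT', 'HIPAA']
```

With a map like this, one collected artifact can be attached to both compliance files automatically instead of being duplicated per jurisdiction.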

Compliance Strategy for Dual-Jurisdiction Organizations

Organizations operating healthcare AI in both US and EU markets should adopt a unified approach rather than maintaining parallel compliance programs.

Dual-Jurisdiction Strategy

Building a Unified Compliance Framework

1. Adopt a Base Framework

Start with ISO 42001 or NIST AI RMF as your foundational AI governance framework. These provide comprehensive structures that can accommodate both HIPAA and EU AI Act requirements. Map control requirements from both regulations to your base framework.

2. Implement the Stricter Requirement

Where requirements overlap but differ in stringency, implement the stricter one. EU AI Act’s 10-year documentation retention exceeds HIPAA’s 6-year requirement—use 10 years. HIPAA’s PHI access logging may be more specific than AI Act logging—use HIPAA’s specificity plus AI Act’s operational logging.
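The "stricter requirement wins" rule for retention can be made mechanical. This is a minimal sketch, assuming retention is measured in whole years from a single reference date (leap-day edge cases are ignored); the function name and constants are illustrative.

```python
from datetime import date

# Retention periods stated by each framework (years)
HIPAA_RETENTION_YEARS = 6      # from creation or last effective date
EU_AI_ACT_RETENTION_YEARS = 10  # after the AI system is placed on the market

def retention_until(reference_date: date) -> date:
    """Apply the stricter (longer) of the two retention periods to a document's reference date."""
    years = max(HIPAA_RETENTION_YEARS, EU_AI_ACT_RETENTION_YEARS)
    # Sketch only: date.replace raises for Feb 29 in non-leap target years
    return reference_date.replace(year=reference_date.year + years)

print(retention_until(date(2025, 3, 1)))  # 2035-03-01
```

In practice the reference date itself differs between frameworks (document creation vs. market placement), so a real policy engine would track both anchors and take the later resulting deadline.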

3. Address Framework-Specific Requirements

Build additional controls for requirements unique to each framework. EU AI Act requires conformity assessment, human oversight design, and post-market monitoring. HIPAA requires BAAs, specific individual rights handling, and healthcare-specific breach notification. These don’t overlap—you need both.

4. Create Unified Documentation

Maintain a single technical documentation set that satisfies both Annex IV requirements and HIPAA system documentation needs. Include framework-specific sections where required. Use consistent terminology and cross-references to demonstrate how controls map to both frameworks.

5. Establish Dual Reporting Paths

Implement incident management that can trigger both HIPAA breach notification (HHS OCR, 60 days) and EU AI Act serious incident reporting (national authority, 15 days). Train your response team on both pathways. An AI incident involving PHI may trigger both.
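The dual-path deadline logic above can be sketched in a few lines. This is a simplified illustration, assuming the clock starts on the discovery date for both regimes; the function and dictionary keys are hypothetical names, not regulatory terms.

```python
from datetime import date, timedelta

def reporting_deadlines(discovered: date, involves_phi: bool, eu_serious_incident: bool) -> dict[str, date]:
    """Compute notification deadlines for an incident that may trigger both regimes."""
    deadlines: dict[str, date] = {}
    if involves_phi:
        # HIPAA breach notification: no later than 60 days after discovery
        deadlines["HHS_OCR"] = discovered + timedelta(days=60)
    if eu_serious_incident:
        # EU AI Act Article 73: serious incident report within 15 days
        deadlines["EU_national_authority"] = discovered + timedelta(days=15)
    return deadlines

# An AI incident involving PHI triggers both pathways
d = reporting_deadlines(date(2025, 11, 1), involves_phi=True, eu_serious_incident=True)
print(d["EU_national_authority"])  # 2025-11-16
```

The point the sketch makes concrete: the EU deadline lands 45 days before the HIPAA one, so a unified response process has to front-load the EU report rather than work to the longer US window.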

Strategic advantage: Organizations that build unified compliance frameworks position themselves for faster market entry in both jurisdictions, reduced audit burden, and more efficient ongoing governance. The investment in unified infrastructure pays dividends across all regulated markets.

How GLACIS Helps Satisfy Both Frameworks

GLACIS provides continuous attestation infrastructure that generates cryptographic evidence your AI controls execute correctly. This evidence maps to requirements from both HIPAA and the EU AI Act.

HIPAA Evidence

  • Access control verification for ePHI
  • Audit logging with tamper-evident records
  • Encryption status attestation
  • Security control execution proof

EU AI Act Evidence

  • Article 12 automatic logging attestation
  • Risk management system execution proof
  • Human oversight control verification
  • Post-market monitoring attestation

Unlike policy-based compliance tools that document what should happen, GLACIS generates cryptographic proof that controls actually execute. This evidence satisfies both HIPAA Security Rule requirements for demonstrable controls and EU AI Act requirements for operational logging and monitoring.

Frequently Asked Questions

Does HIPAA compliance satisfy EU AI Act requirements?

No. HIPAA and the EU AI Act have different focuses and requirements. HIPAA addresses privacy and security of Protected Health Information (PHI), while the EU AI Act addresses AI system safety, transparency, and fundamental rights. HIPAA compliance provides a foundation for some data governance and security requirements, but does not satisfy EU AI Act obligations for risk management, conformity assessment, technical documentation, human oversight, or transparency disclosures.

If my healthcare AI is FDA-cleared, do I still need EU AI Act compliance?

Yes. FDA clearance does not satisfy EU AI Act requirements. However, if your AI system qualifies as a medical device under EU MDR (2017/745), the EU AI Act provides for coordinated conformity assessment through existing notified body processes. You must still meet AI Act high-risk requirements (Articles 9-15), but the conformity assessment can be integrated with MDR certification. Medical AI devices have an extended deadline of August 2, 2027.

What documentation is required for both frameworks?

Both frameworks require extensive documentation, but with different focuses. HIPAA requires policies and procedures, risk assessments, BAAs, training records, and audit logs. The EU AI Act requires technical documentation (Annex IV), risk management records, data governance documentation, quality management system records, and conformity assessment documentation. Organizations should create unified documentation that satisfies both frameworks.

How do logging requirements compare?

HIPAA requires audit controls recording PHI access (who accessed what, when). The EU AI Act Article 12 requires automatic logging of AI system operation including inputs, outputs, and events relevant to identifying risks. EU AI Act logging is more specific to AI behavior (model version, inference details, decision traces), while HIPAA focuses on data access. Healthcare AI systems need both types of logging.

Which framework has stricter penalties?

The EU AI Act has higher maximum penalties. HIPAA civil penalties reach $2.1 million per year for willful neglect violations. The EU AI Act penalties reach €35 million or 7% of global annual turnover for prohibited AI practices, and €15 million or 3% of turnover for high-risk non-compliance. Both frameworks can result in operational restrictions and reputational damage beyond financial penalties.

Can I use a single governance framework for both?

Yes, and this is recommended. A unified AI governance framework based on ISO 42001 or NIST AI RMF can address requirements from both HIPAA and EU AI Act. The key is mapping controls to both frameworks, implementing the stricter requirement where they overlap, and adding controls for gaps. GLACIS helps organizations maintain a single source of truth for evidence that maps to multiple frameworks.

Unified Evidence for Dual-Jurisdiction Compliance

GLACIS generates cryptographic proof that your AI controls execute correctly—mapped to both HIPAA and EU AI Act requirements. One evidence infrastructure, multiple framework compliance.

Start Your Compliance Assessment
