GLACIS·Healthcare AI·HIPAA·Updated April 24, 2026

HIPAA-compliant AI without the guesswork

BAAs, the Security Rule, and audit logging — and the gap between what your AI vendor claims and what you can actually prove. A working guide for healthcare teams already running AI in production.

By Joe Braidwood · 45 min read · Last reviewed Apr 24, 2026

Recent developments:

  • Apr 8, 2026: HHS OCR risk-management video; ongoing risk action required
  • Q1 2026: 11th and 12th OCR Risk Analysis Initiative settlements announced
  • Jan 2026: Verisk/ISO CG 40 47 generative-AI exclusion forms went live
  • 2026 (TBD): Joint Commission · CHAI voluntary AI certification programme
Executive summary

AI is in your environment whether or not a team is watching it. The intersection of AI and healthcare creates real upside — clinical documentation, decision support, reduced administrative load — and real exposure under HIPAA and state law, and increasingly from state attorneys general. This guide covers what changed between January and April 2026, the current state of the BAA market, the Security Rule baseline, and the audit-logging gap that keeps catching healthcare teams in discovery.

There is no "HIPAA-certified AI." HIPAA compliance is an operational state, not a product attribute. The same model can be HIPAA-compliant in one deployment and non-compliant in another, depending on the controls around it.

Sharp HealthCare · Class action update Apr 2026

Filed Nov 26, 2025 in San Diego Superior Court (Saucedo v. Sharp HealthCare); names Sharp Rees-Stealy, SharpCare, and Sharp Community Medical Group. Identifies Abridge in court filings as the third-party vendor. Plaintiffs’ counsel estimates ~100,000 patient encounters were captured during rollout. Allegations: California Invasion of Privacy Act (CIPA) all-party-consent violations, CMIA violations, and fabricated consent records in patient charts. Case is in early pleading stage as of April 2026.[1] The structural problem the case exposes: when AI itself drafts the documentation, that documentation cannot prove consent unless an independent attestation exists alongside it. Read the high-risk classification guide →

By the numbers

~$2.13M annual penalty cap per identical violation tier (2026 inflation adjustment) · 6 years log retention required (45 CFR 164.530(j)) · 18 HIPAA Safe Harbor identifiers (45 CFR 164.514(b)(2)) · 60 days breach notification window · 76% of 2025 large breaches were hacking/IT incidents (HHS OCR April 2026 video).

Understanding HIPAA for AI systems

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) establishes national standards for protecting sensitive patient health information. While HIPAA predates modern AI by decades, its requirements apply fully to AI systems that process, store, or transmit Protected Health Information (PHI). Understanding how HIPAA’s Privacy Rule, Security Rule, and Breach Notification Rule apply to AI is essential for any healthcare AI deployment.

The three HIPAA rules

The Privacy Rule (45 CFR Part 164, Subparts A and E) establishes standards for who may access PHI and under what circumstances. For AI systems, the Privacy Rule governs:

  • What patient information can be used to train AI models
  • When patient authorization is required for AI use cases
  • Minimum necessary standards for AI access to PHI
  • Patient rights to access, amend, and receive accounting of disclosures

The Security Rule (45 CFR Part 164, Subparts A and C) requires covered entities and business associates to implement administrative, physical, and technical safeguards to protect electronic PHI (ePHI). For AI systems, the Security Rule governs:

  • Encryption requirements for PHI in AI pipelines
  • Access controls for AI system users and administrators
  • Audit logging of AI inference activity
  • Integrity controls ensuring PHI is not improperly altered
  • Transmission security for API calls to AI services

The Breach Notification Rule (45 CFR Part 164, Subpart D) requires notification to affected individuals, HHS, and in some cases media outlets when unsecured PHI is breached. For AI systems, this includes:

  • Unauthorized access to training datasets containing PHI
  • Disclosure of PHI through AI model outputs or memorization
  • Security incidents affecting AI infrastructure
  • Vendor breaches involving PHI processed by AI systems

Covered entities vs. business associates

HIPAA distinguishes between Covered Entities (healthcare providers, health plans, and healthcare clearinghouses that transmit health information electronically) and Business Associates (entities that perform functions on behalf of covered entities involving PHI access).

Most AI vendors fall into the Business Associate category. When an AI company processes PHI on behalf of a hospital, clinic, or health plan, they become a Business Associate and must:

  • Sign a Business Associate Agreement (BAA) with the covered entity
  • Comply directly with applicable Security Rule requirements
  • Report security incidents and breaches to the covered entity
  • Ensure any subcontractors (sub-Business Associates) also comply

Key distinction: HIPAA compliance vs. HIPAA "certification"

There is no government-issued "HIPAA certification" for AI tools or any other technology. HHS does not certify, endorse, or approve products as HIPAA compliant. When vendors claim "HIPAA certification," they typically mean SOC 2 certification, HITRUST certification, or self-attestation of compliance. True HIPAA compliance is an ongoing operational state that includes policies, procedures, technical controls, training, and continuous monitoring.

Protected health information in AI systems

Understanding what constitutes PHI is fundamental to HIPAA compliant AI deployment. Protected Health Information includes any individually identifiable health information that is created, received, maintained, or transmitted by a covered entity or business associate.

The 18 HIPAA identifiers

HIPAA’s Safe Harbor de-identification method (45 CFR 164.514(b)(2)) specifies 18 types of identifiers that must be removed to consider data de-identified:

HIPAA Safe Harbor identifiers

Identifier · AI system considerations

  1. Names: must be stripped from training data and prompts
  2. Geographic data smaller than a state: street addresses, cities, and ZIP codes (the first 3 ZIP digits may be retained if the area's population exceeds 20,000)
  3. Dates (except year): birth, admission, discharge, and death dates; ages over 89 must be aggregated to "90+"
  4. Phone numbers: including contact numbers in clinical notes
  5. Fax numbers: still common in healthcare workflows
  6. Email addresses: including patient portal credentials
  7. Social Security numbers: critical identifier requiring redaction
  8. Medical record numbers: EHR system identifiers
  9. Health plan beneficiary numbers: insurance member IDs
  10. Account numbers: billing and financial identifiers
  11. Certificate/license numbers: driver's licenses, professional licenses
  12. Vehicle identifiers and serial numbers: including license plates
  13. Device identifiers and serial numbers: medical devices, implants
  14. Web URLs: patient portal links, imaging URLs
  15. IP addresses: EHR access logs, telehealth sessions
  16. Biometric identifiers: fingerprints, voice prints, retinal scans, facial geometry
  17. Full-face photographs: clinical images, ID photos
  18. Any other unique identifying number, characteristic, or code: catch-all for identifiers not listed above
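Two of the Safe Harbor rules above — date and age generalization, and ZIP truncation — are mechanical enough to sketch in code. A minimal illustration in Python; the function names are ours, not from any standard library, and a real pipeline would apply these as one stage of a reviewed de-identification workflow:

```python
from datetime import date

def generalize_age(age: int) -> str:
    """Safe Harbor: ages over 89 must be aggregated into a single '90+' bucket."""
    return "90+" if age >= 90 else str(age)

def strip_to_year(d: date) -> str:
    """Safe Harbor: all date elements except the year must be removed."""
    return str(d.year)

def truncate_zip(zip5: str, area_population: int) -> str:
    """The first 3 ZIP digits may be retained only if the combined
    geographic area's population exceeds 20,000; otherwise use '000'."""
    return zip5[:3] if area_population > 20_000 else "000"
```

Note that these helpers transform structured fields; free-text clinical notes need identifier detection first, which is a much harder problem.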

PHI in AI training data

Using PHI to train AI models requires careful consideration of HIPAA requirements. There are three primary approaches:

1. De-identification: Remove all 18 identifiers per Safe Harbor method, or use Expert Determination (45 CFR 164.514(b)(1)) where a qualified statistical expert certifies that re-identification risk is very small. De-identified data is no longer PHI and is not subject to HIPAA.

2. Authorization: Obtain individual patient authorization to use their PHI for AI training. This is rarely practical at scale but may be appropriate for specialized research use cases.

3. Healthcare Operations: Under 45 CFR 164.506, covered entities may use PHI for healthcare operations without patient authorization. Quality improvement, developing clinical guidelines, and training algorithms that improve care quality may qualify. However, sharing PHI with external AI vendors for training typically requires a BAA and may have additional restrictions.

Critical: model memorization risk

Large language models can memorize and reproduce training data, including PHI. Research has demonstrated extraction of verbatim training data from models like GPT-2, GPT-3, and others. If you train or fine-tune models on PHI, consider differential privacy techniques, membership inference testing, and ongoing monitoring for data extraction attacks. Model memorization of PHI could constitute a breach.

PHI in AI prompts and outputs

Beyond training data, PHI commonly enters AI systems through:

  • User prompts: Clinicians entering patient information for clinical decision support, documentation, or coding assistance
  • System context: Automated systems that provide patient records as context for AI analysis
  • AI outputs: Generated text, predictions, or recommendations that may contain or derive from PHI
  • Logging: API logs, debugging information, and audit trails that capture PHI in transit

Each of these PHI touchpoints must be protected with appropriate Security Rule safeguards.

Business Associate Agreements for AI

The Business Associate Agreement (BAA) is the legal foundation of HIPAA compliant AI. When an AI vendor will receive, create, maintain, or transmit PHI on behalf of a covered entity, they become a Business Associate and a BAA is mandatory.

Required BAA provisions

Under 45 CFR 164.504(e), a BAA must include provisions that:

  • Establish permitted and required uses and disclosures of PHI
  • Require the Business Associate to use appropriate safeguards and comply with the Security Rule
  • Require reporting of security incidents and breaches
  • Ensure any subcontractors agree to the same restrictions
  • Make PHI available for patient access and amendment requests
  • Make internal practices available to HHS for compliance review
  • Return or destroy PHI at termination
  • Authorize termination if the Business Associate violates the agreement

AI-specific BAA considerations

Standard BAA templates may not adequately address AI-specific concerns. When negotiating BAAs with AI vendors, ensure coverage of:

AI-specific BAA provisions

  • Model training: Explicit prohibition or permission for using PHI to train models, with requirements for de-identification if permitted
  • Data retention: How long prompts, outputs, and logs containing PHI are retained, and procedures for deletion
  • Sub-processors: Identification of sub-Business Associates (cloud providers, inference infrastructure) and their BAA coverage
  • Data residency: Geographic location of PHI processing and storage, particularly for international vendors
  • Audit access: Right to audit AI-specific controls, including model behavior and data handling
  • Breach definitions: Whether model memorization or extraction of training data constitutes a breach

AI vendor BAA market — April 2026

Not every AI vendor offers a BAA, and the ones that do scope it carefully by product tier. The table below reflects vendor pages and Trust Center documentation as of April 2026. Cross-check current vendor BAA pages before signing — terms move quarterly.

Provider · BAA available · Products covered · What changed by April 2026

  • Microsoft Azure · Yes · Azure OpenAI Service, Azure AI Foundry, Cognitive Services, Azure ML · Part of the standard Azure BAA. Covers current GPT‑5, GPT‑4o, GPT‑4.1, and o-series deployments; reasoning-model availability is tier-gated by region.
  • Amazon Web Services · Yes · Bedrock, SageMaker, Comprehend Medical, HealthLake · Bedrock BAA covers the Claude 3.7 family, Nova, Titan, Llama, and Mistral on Bedrock. Anthropic operates under the AWS BAA for Bedrock customers.
  • Google Cloud · Yes · Vertex AI, Healthcare API, MedLM, Cloud Natural Language · Standard Google Cloud BAA covers the Gemini 2.5/2.0 family on Vertex AI. Healthcare API and MedLM are explicitly designed for PHI workflows.
  • OpenAI (direct) · Enterprise / sales-managed only · ChatGPT Enterprise, ChatGPT Edu, OpenAI API (via [email protected]) · 2026: launched OpenAI for Healthcare, rolling out to AdventHealth, Baylor Scott & White, Boston Children’s, Cedars-Sinai, HCA, MSK, Stanford Children’s, and UCSF. No BAA for ChatGPT Plus, Business, Teams, or Free.
  • Anthropic (Claude) · HIPAA-ready Enterprise / API only · First-party Claude API; HIPAA-ready Claude Enterprise (sales-assisted) · BAA does not cover Free, Pro, Max, Team, Workbench, Console, Cowork, or Claude for Office. Anthropic also operates under technology-partner BAAs at AWS Bedrock, Google Cloud, and Azure.
  • Cohere · Yes · Cohere Enterprise · BAA available on the Enterprise tier; verify current scope at signing.
  • Mistral AI · Enterprise contract · Direct enterprise deployment; Bedrock and Azure AI Foundry distribution · Cloud-distribution channels are covered by the host BAA. A direct BAA is available on enterprise contracts.
  • Meta (Llama) · N/A — open source · Self-hosted Llama deployments · No vendor BAA. Self-hosting inherits the host-cloud BAA; you hold the Security Rule responsibility.
  • xAI (Grok) · No public programme · Use only via Azure AI Foundry / AWS partial channels · As of April 2026 there is no published xAI BAA programme. Cloud-distributed access falls under the host BAA where eligible.
  • Hugging Face · No · Self-hosted only · Inference Endpoints / Spaces are not HIPAA-eligible. Self-hosted deployments inherit the host’s BAA.
2026 update

Both Anthropic and OpenAI shipped healthcare-specific products in Q1 2026, with BAAs and EHR / coverage-data integrations announced for named launch partners. Read the linked vendor pages directly before scoping a deployment — the BAA terms moved noticeably between January and April 2026.[OpenAI]

BAA ≠ compliance

Having a signed BAA is necessary but not sufficient for HIPAA compliance. The BAA shifts some liability to the vendor, but the covered entity remains responsible for ensuring the AI is used appropriately, proper safeguards are in place, and the deployment meets minimum necessary standards. You cannot outsource your compliance responsibility through a BAA.

Security Rule requirements for AI systems

The HIPAA Security Rule (45 CFR Part 164, Subpart C) requires covered entities and business associates to implement safeguards ensuring the confidentiality, integrity, and availability of electronic PHI. For AI systems, these requirements translate to specific technical and organizational controls.

Administrative safeguards (§164.308)

Administrative safeguards are policies and procedures governing AI system deployment:

  • Risk Analysis (§164.308(a)(1)(ii)(A)): Conduct thorough risk analysis of AI systems, including data flows, access patterns, and potential threats. Document risks specific to AI—model extraction, prompt injection, training data exposure.
  • Risk Management (§164.308(a)(1)(ii)(B)): Implement measures to reduce identified risks to reasonable levels. For AI, this includes input validation, output filtering, and monitoring for anomalous behavior.
  • Workforce Training (§164.308(a)(5)): Train staff on AI-specific HIPAA requirements—what can and cannot be entered into AI prompts, how to handle AI outputs containing PHI, incident reporting procedures.
  • Contingency Planning (§164.308(a)(7)): Include AI systems in disaster recovery and business continuity plans. Consider AI service outages, vendor failures, and data recovery procedures.

Physical safeguards (§164.310)

Physical safeguards protect the physical infrastructure where AI systems operate:

  • Facility Access Controls (§164.310(a)): For on-premise AI deployments, limit physical access to servers and storage. For cloud deployments, verify vendor’s physical security controls.
  • Workstation Security (§164.310(c)): Protect workstations used to access AI systems. Consider screen privacy, automatic lockout, and restrictions on copying AI outputs containing PHI.
  • Device and Media Controls (§164.310(d)): Secure disposal of hardware that processed PHI through AI systems, including GPUs and storage devices.

Technical safeguards (§164.312)

Technical safeguards are the security technologies protecting AI systems and PHI:

Access controls (§164.312(a))

  • Unique User Identification: Each AI system user must have unique credentials—no shared accounts.
  • Emergency Access Procedures: Document how to access AI systems in emergencies while maintaining accountability.
  • Automatic Logoff: AI interfaces must time out after inactivity periods appropriate to the clinical environment.
  • Encryption and Decryption: Implement encryption for PHI stored in AI system databases, caches, and logs.

Audit controls (§164.312(b))

Implement mechanisms to record and examine AI system activity. This is particularly important for AI and often underimplemented—see the dedicated section below.

Integrity controls (§164.312(c))

  • Data Integrity: Protect PHI from improper alteration or destruction. Ensure AI outputs don’t corrupt source records.
  • Authentication: Verify that PHI received from AI systems has not been altered in transit.

Transmission security (§164.312(e))

  • Encryption: All API calls to AI services must use TLS 1.2 or higher. Verify certificate validation and reject downgrade attacks.
  • Integrity Controls: Implement message authentication to detect tampering with PHI in transit.

Encryption standards

While HIPAA doesn’t mandate specific encryption algorithms, OCR guidance and industry standards establish clear expectations:

Recommended Encryption standards for HIPAA AI

  • Data at Rest: AES-256 encryption for all PHI in databases, file storage, caches, and logs
  • Data in Transit: TLS 1.2 minimum (TLS 1.3 preferred) for all API communications with AI services
  • Key Management: Use HSMs or cloud KMS for encryption key storage; implement key rotation procedures
  • Certificate Management: Validate TLS certificates; implement certificate pinning where appropriate
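The TLS floor can be enforced at the client rather than trusted to library defaults. A minimal sketch using Python's standard library; the endpoint URL in the usage comment is a placeholder, not a real API:

```python
import ssl

# Build a client-side TLS policy for calls to an AI inference API.
# create_default_context() already requires certificate validation
# (CERT_REQUIRED) and hostname checking; we additionally pin the
# protocol floor at TLS 1.2 so downgrade attempts fail the handshake.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Usage (endpoint is illustrative):
# import urllib.request
# resp = urllib.request.urlopen("https://api.example.com/v1/infer", context=ctx)
```

Passing an explicit context like this into every outbound AI call makes the "TLS 1.2 minimum" requirement testable in CI instead of an unverified assumption about runtime defaults.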

Audit logging for AI systems: the compliance gap

Audit logging is where most AI deployments fall short of HIPAA requirements. The Security Rule requires audit controls that record and examine activity in information systems containing or using ePHI (§164.312(b)). For AI systems, this creates specific challenges that standard logging infrastructure doesn’t address.

The AI logging problem

Traditional application logging captures access events—who logged in, what records they viewed. But AI systems require inference-level logging that captures:

  • What PHI was sent to the AI — The actual content of prompts and context windows
  • What the AI returned — Generated text, predictions, or recommendations
  • Who initiated the query — User identification and authentication context
  • When and where — Timestamps, session identifiers, client information
  • Which model was used — Model version, configuration parameters
  • What happened next — Whether outputs were used, modified, or discarded

Most AI platforms provide only basic access logs—they record that an API call occurred, but not the content of that call. This creates a fundamental compliance gap: if you can’t demonstrate what PHI was processed and what the AI output was, you can’t prove compliance or respond effectively to audits or incidents.
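An inference-level record covering the six fields above might be structured as follows. This is a sketch of one possible schema in Python; none of the field names come from a standard, and storing digests (with encrypted content vaulted separately) is one design choice among several:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InferenceAuditRecord:
    """One audit entry per AI inference call (illustrative schema)."""
    timestamp_utc: str   # when
    user_id: str         # who initiated the query
    session_id: str      # session / client context
    model: str           # which model and version
    prompt_sha256: str   # digest of the PHI-bearing prompt (content vaulted separately)
    output_sha256: str   # digest of the model output
    disposition: str     # what happened next: "accepted" | "edited" | "discarded"

def make_record(user_id, session_id, model, prompt_digest, output_digest, disposition):
    return asdict(InferenceAuditRecord(
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        user_id=user_id,
        session_id=session_id,
        model=model,
        prompt_sha256=prompt_digest,
        output_sha256=output_digest,
        disposition=disposition,
    ))
```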

HIPAA log retention requirements

HIPAA requires retention of documentation for six years from the date of creation or the date when the document was last in effect (45 CFR 164.530(j)). This includes:

  • Policies and procedures governing AI use
  • Risk assessments including AI systems
  • Audit logs of AI system activity
  • Training records for staff using AI with PHI
  • BAAs with AI vendors

For AI inference logs containing PHI, this creates tension between retention requirements and data minimization principles. Organizations must balance compliance documentation needs against the risk of retaining PHI longer than operationally necessary.

Implementing compliant AI logging

AI audit logging architecture

  • Local instrumentation layer: Implement middleware that records the runtime event, control decision, hashes, and verification metadata without exporting prompts or responses
  • Secure Storage: Store logs in encrypted, tamper-evident storage with write-once or append-only guarantees
  • Access Controls: Restrict log access to authorized security and compliance personnel with separate authentication
  • Integrity Protection: Implement cryptographic hashing or blockchain-style chaining to detect log tampering
  • Retention Automation: Automate 6-year retention and secure disposal after retention period expires
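The integrity-protection bullet (cryptographic chaining) can be sketched in a few lines. This is illustrative only, in Python; a production system would layer WORM storage, signed checkpoints, and key management on top of a chain like this:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def append_event(log: list, event: dict) -> dict:
    """Append an event whose hash chains to the previous entry (tamper-evident)."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = prev + json.dumps(event, sort_keys=True)
    entry = {"event": event, "prev": prev,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every link; any edited, inserted, or deleted entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        payload = prev + json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Running `verify_chain` on a schedule (and at audit time) turns "our logs were not altered" from an assertion into a checkable property.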

Common HIPAA violations with AI

Understanding common violations helps organizations avoid them. These patterns emerge repeatedly in healthcare AI deployments:

1. Consumer AI tools with PHI

The most prevalent violation: healthcare workers using ChatGPT, Claude, Gemini, or other consumer AI tools to process patient information. This includes:

  • Pasting clinical notes into ChatGPT to summarize patient encounters
  • Asking Claude to draft referral letters containing patient details
  • Using AI to translate patient communications
  • Getting clinical decision support from consumer AI tools

Why it’s a violation: Consumer AI tools don’t offer BAAs, may use input data for model training, retain prompts for extended periods, and lack the security controls required by HIPAA. Even if an individual clinician doesn’t intend to violate HIPAA, inputting PHI into these systems constitutes an unauthorized disclosure.

2. Missing or inadequate BAAs

Organizations assume that because a vendor is "healthcare-focused" or "enterprise-grade," they have automatic HIPAA coverage. Common gaps:

  • Using AI services without any BAA in place
  • BAA that covers cloud infrastructure but not AI-specific services
  • BAA with the parent company that doesn’t extend to AI product subsidiaries
  • Outdated BAA that predates AI service offerings

3. Inadequate logging and accountability

Deploying AI systems without capturing the audit trail required by HIPAA:

  • No logging of AI prompts and responses
  • Logs that capture access but not content
  • Logs stored in ephemeral systems without retention controls
  • Inability to provide accounting of AI disclosures upon patient request

4. Shadow AI deployments

Individual departments or clinicians deploying AI tools without IT or compliance review:

  • Radiology using AI diagnostic tools without security assessment
  • Clinical research teams using LLMs to analyze patient data
  • Administrative staff using AI for medical coding or billing
  • Telehealth platforms adding AI features without compliance review

5. Training data exposure

Improper handling of PHI in AI model training:

  • Training on PHI without proper de-identification
  • Sharing PHI with AI vendors for model training without authorization
  • Model memorization of PHI that can be extracted through prompting
  • Failure to assess re-identification risk in training datasets

Case study: OCR settlements relevant to AI deployments

OCR settlement patterns — relevant to AI

OCR has not yet announced an enforcement action specifically about AI, but the structural failure modes that drive 2025–2026 settlements all map directly to AI deployments: incomplete risk analysis, inadequate Business Associate oversight, missing audit logs, and access-control failures. The Texas AG settlement with Pieces Technologies (Sept 2024) — over inflated hallucination-rate claims, with no monetary penalty but indefinite compliance demands — is the closest analog and the playbook other state AGs are likely to use against AI vendors making accuracy claims.

Architectural patterns for HIPAA-compliant AI

There are several approaches to using AI with PHI while maintaining HIPAA compliance. Each has trade-offs in complexity, cost, capability, and risk profile.

Pattern 1: enterprise AI with BAA

The simplest compliant pattern: use enterprise AI services from vendors offering BAAs (Azure OpenAI, AWS Bedrock, Google Vertex AI). PHI flows to the AI provider, which is covered under the BAA.

Advantages

  • Simplest to implement
  • Leverage provider’s security controls
  • Access to latest models
  • Clear liability framework via BAA

Disadvantages

  • PHI leaves your environment
  • Dependent on provider compliance
  • Potentially higher per-query costs
  • Limited customization options

Pattern 2: PHI redaction / de-identification proxy

Deploy a proxy layer that strips PHI before sending data to AI, then re-inserts it in the response. The AI never sees actual PHI.

Advantages

  • PHI never leaves your control
  • Can use any AI provider (no BAA needed)
  • Reduced compliance scope for AI vendor
  • Defense in depth protection

Disadvantages

  • Complex to implement correctly
  • May reduce AI quality for some use cases
  • Risk of incomplete de-identification
  • Adds latency and infrastructure complexity

Learn more in our technical deep-dive: How We Used AI on Patient Data Without a BAA.
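A toy version of the redact / re-insert round trip looks like this, in Python. Real deployments rely on clinical NER models rather than regexes, and the three patterns below are illustrative, not a complete identifier set:

```python
import re

# Illustrative patterns only: real PHI detection needs clinical NER, not regexes.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[- ]?\d{6,10}\b"),
}

def redact(text: str):
    """Replace detected identifiers with placeholder tokens; return the
    mapping needed to restore them after the AI call."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token, 1)
    return text, mapping

def reinsert(text: str, mapping: dict) -> str:
    """Restore original values in the AI response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

The proxy sends the redacted text to the model, then runs `reinsert` on the response, so the AI provider only ever sees placeholders. The "risk of incomplete de-identification" disadvantage above is exactly the gap between patterns like these and the full 18-identifier surface.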

Pattern 3: on-premise / self-hosted AI

Deploy AI models within your own infrastructure using open-source models (Llama, Mistral) or licensed on-premise solutions. PHI never leaves your environment.

Advantages

  • Maximum control over data
  • No external BAA required for model
  • Can fine-tune on proprietary data
  • Potentially lower long-term costs at scale

Disadvantages

  • Significant infrastructure investment
  • May not match cloud AI quality
  • Requires ML operations expertise
  • Full security responsibility retained

Pattern 4: hybrid architecture

Combine approaches based on use case sensitivity. Use enterprise AI with BAA for general clinical workflows, add de-identification for highly sensitive cases, and deploy on-premise for research and model development.

Evaluating AI vendors for HIPAA compliance

When evaluating AI vendors for healthcare use, a systematic assessment process ensures you select partners capable of supporting compliant deployments.

HIPAA AI vendor evaluation checklist

Documentation and agreements

  • BAA available and executed covering AI services specifically
  • BAA covers all sub-processors (cloud infrastructure, inference providers)
  • Data processing addendum specifying PHI handling procedures
  • Security documentation (SOC 2 Type II, HITRUST, penetration test results)
  • Insurance coverage including cyber liability

Technical controls

  • Encryption at rest (AES-256) for all PHI including prompts and logs
  • Encryption in transit (TLS 1.2+) for all API communications
  • Unique user identification with MFA support
  • Role-based access controls for administrative functions
  • Comprehensive audit logging including inference-level records
  • Log retention meeting 6-year HIPAA requirement

Data handling

  • Clear policy on PHI use for model training (ideally prohibited without explicit consent)
  • Data residency options (US-only processing for PHI)
  • Data retention policies with configurable retention periods
  • Secure deletion procedures at contract termination
  • Tenant isolation in multi-tenant environments

Incident response

  • Documented incident response procedures
  • Breach notification within HIPAA timelines (60 days to affected individuals)
  • Security incident SLAs (e.g., notification within 24 hours)
  • Post-incident analysis and remediation procedures

OCR enforcement trends and AI

The Office for Civil Rights (OCR) within HHS enforces HIPAA. Reading OCR’s enforcement signal correctly helps healthcare teams focus compliance work where it actually matters.

OCR enforcement priorities — 2024 through April 2026

  • Risk-analysis failures. Still the most-cited HIPAA violation. The OCR Risk Analysis Initiative announced its 11th and 12th settlements in early 2026, following 16 settlements in 2025. Notable 2026 actions: MMG Fusion, LLC (~15M individuals affected), Top of the World Ranch Treatment Center ($103,000 + corrective action plan), and Cadia Healthcare Facilities.[OCR]
  • Hacking and IT incidents. ~76% of large breaches in 2025 were hacking / IT incidents (HHS OCR April 2026 video). Ransomware, phishing, and system vulnerabilities remain the biggest exposure surface.
  • Business Associate oversight. Covered entities are still being cited for failing to ensure Business Associates — including AI vendors — comply with applicable Security Rule requirements.
  • Access controls. Inadequate authentication, shared credentials, and excessive access privileges.

HIPAA Security Rule NPRM (Jan 2025) — current status

HHS published a Notice of Proposed Rulemaking on Jan 6, 2025 that would significantly strengthen the Security Rule — mandatory asset inventories, encryption, MFA, regular testing, and formal incident-response plans. Two weeks later the Trump administration’s Jan 2025 Regulatory Freeze paused federal rulemaking. As of April 2026 the rule’s fate is uncertain. Track it closely — the proposal effectively raises the floor for what "reasonable" means in a Security Rule audit, even before it’s final.[Federal Register]

AI-specific guidance and the April 2026 OCR risk-management video

OCR’s December 2023 guidance on HIPAA and AI — covered entities must run risk analyses before deploying AI that touches PHI; BAAs are required; workforce training must include AI-specific risks; PHI used for AI training is still PHI — remains in force. On April 8, 2026 OCR released a follow-up risk-management video reiterating that risk management is an ongoing operational practice, not a one-time documentation exercise. OCR specifically warned that organizations that identify risks and then fail to act face civil monetary penalties for willful neglect not corrected within 30 days.

No specifically AI-named OCR enforcement action has issued through April 2026; the closest analog remains the Texas AG’s 2024 settlement with Pieces Technologies over inflated hallucination-rate claims, discussed above.

ASTP / ONC HTI rules and predictive DSI

The HTI-1 final rule (Dec 2023) introduced the certification criterion at 170.315(b)(11) for predictive Decision Support Interventions (predictive DSIs) — 13 source attributes for evidence-based DSIs and 31 for predictive DSIs (intended use, target population, training data, validation, known risks, etc.). Compliance deadline for certified Health IT Modules was Jan 1, 2025.

On Dec 29, 2025 ASTP/ONC issued the HTI-5 proposed rule, which would amend the DSI criterion to eliminate both the source-attribute disclosure requirement and the predictive-DSI risk-management requirement. Public comment closed Feb 27, 2026; no final rule has issued as of April 24, 2026. Practical reading: predictive-DSI source attributes are still currently required of certified Health IT, but the regulatory wind has reversed. Plan for both outcomes.[Covington]

Penalty structure

Per-violation range and annual cap (per identical violation), by culpability tier:

  • Unknown (despite reasonable diligence): $137 – $71,162 per violation · annual cap ~$2.13M
  • Reasonable cause (not willful neglect): $1,424 – $71,162 per violation · annual cap ~$2.13M
  • Willful neglect, corrected within 30 days: $14,232 – $71,162 per violation · annual cap ~$2.13M
  • Willful neglect, not corrected: $71,162 – ~$2.13M per violation · annual cap ~$2.13M

Penalties are adjusted annually under 45 CFR Part 102. The figures above are approximate 2026 inflation-adjusted amounts; verify against the latest HHS adjustment table at signing or response time.

Beyond HIPAA: emerging AI regulations

HIPAA is the floor, not the ceiling. By April 2026, several adjacent regimes are already shaping how healthcare AI gets bought, deployed, and litigated.

California AI in healthcare — in force now

AB 3030 (effective Jan 1, 2025) requires that any patient-facing communication generated by generative AI carries a prominent disclosure that the message was AI-generated, plus instructions for contacting a human provider. Communications reviewed by a human before sending are exempt. Civil penalty up to $25,000 per violation at licensed facilities.

SB 1120 (effective Jan 1, 2025) restricts AI in payer utilization review: only a licensed physician or qualified clinician may make medical-necessity determinations. The Department of Managed Health Care (DMHC) audits denial rates and AI transparency. The Medical Board of California’s GenAI Notification page provides physician-facing guidance.

FDA AI/ML medical devices and PCCPs

The FDA finalized its PCCP guidance for AI-enabled device software functions in December 2024, expanding scope from ML to all AI-enabled devices. PCCPs require three components: a Description of Modifications, a Modification Protocol, and an Impact Assessment. In August 2025 FDA, Health Canada, and UK MHRA jointly published five PCCP guiding principles. A separate broader PCCP draft guidance for all medical devices was in comment as of April 2026.[FDA]

EU AI Act — high-risk for clinical AI

The EU AI Act classifies a wide range of healthcare AI as high-risk under Annex III, requiring conformity assessment, technical documentation, human oversight, post-market monitoring, and Article 12 logging. The current applicability date for Annex III obligations is Aug 2, 2026; the Digital Omnibus on AI in trilogue as of April 2026 proposes conditional delays into 2027 or 2028 tied to harmonized-standards availability. See the ambient-scribe high-risk classification guide.

Colorado AI Act

The Colorado AI Act (SB 24-205) as enacted requires developers and deployers of high-risk AI to use reasonable care to prevent algorithmic discrimination. Healthcare AI making consequential decisions (treatment recommendations, coverage determinations) is in scope. Requirements include impact assessments, risk-management policies, and consumer disclosures. Status as of April 2026: the Act’s June 30, 2026 effective date is intact, but enforcement is stayed by federal court in xAI v. Weiser, and the Polis-backed replacement bill SB 26-189 — effective Jan 1, 2027 if enacted — would pivot to a disclosure regime that drops mandatory impact assessments. Either way, the underlying healthcare AI evidence work is the same.

Joint Commission and CHAI

In September 2025 the Joint Commission and the Coalition for Health AI (CHAI) released Initial Guidance on Responsible Use of AI in Healthcare — seven core elements covering governance, transparency, and ongoing quality monitoring. A voluntary AI certification programme is planned for 2026; it is not yet an accreditation requirement, but the trajectory is clear.[Joint Commission]

State AGs and the Pieces Technologies precedent

Texas AG settled with Pieces Technologies in September 2024 over inflated hallucination-rate claims (no monetary penalty, but Pieces must disclose harmful uses, document training data, disclose limitations, and submit to compliance demands indefinitely). California, New York, and Massachusetts AGs each have AI / consumer-protection units that read the same playbook. State-AG action under deceptive-practice statutes is the most likely fast-moving enforcement vector for healthcare AI accuracy claims.

Insurance — the hard constraint

Effective January 2026, Verisk/ISO Core Lines released endorsement forms CG 40 47 (Coverage A and B; occurrence and claims-made) and CG 40 48 (Coverage B), which explicitly exclude generative-AI exposures from commercial general liability policies. Adoption by carriers is optional but reportedly strong. Standalone AI carriers (Munich Re aiSure, Armilla via Lloyd’s) write the risk back in — but only against verifiable governance evidence. Healthcare-specific E&O / cyber tiering is following the same pattern. The market is filtering on attestation, not policy documentation.

FTC enforcement

The FTC has enforcement authority over deceptive and unfair AI practices: misleading capability claims, algorithmic discrimination, inadequate data security. Increased scrutiny of healthcare AI is a stated priority.

HIPAA-compliant AI implementation roadmap

Whether you’re a healthcare organization deploying AI or an AI vendor entering the healthcare market, this roadmap provides a structured path to compliance:

GLACIS Framework

HIPAA AI compliance sprint

1 · AI system inventory (week 1)

Catalog all AI systems in use or planned for deployment. Identify which systems will process PHI, what PHI elements they access, and what the data flows look like. Include shadow AI—tools staff may be using without formal approval.

2 · Risk assessment (weeks 2–3)

Conduct HIPAA risk analysis for each AI system. Document threats specific to AI: prompt injection, model extraction, training data leakage, output disclosure. Assess current controls and identify gaps.

3 · Vendor assessment and BAAs (weeks 4–6)

Evaluate AI vendors using the checklist above. Negotiate and execute BAAs for all vendors processing PHI. Ensure BAAs specifically cover AI services and address training data, logging, and retention.

4 · Technical controls (weeks 7–10)

Implement Security Rule safeguards for AI systems. Configure encryption, access controls, and audit logging. Deploy inference-level logging infrastructure. Establish monitoring and alerting for security events.
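The inference-level logging this step calls for goes beyond platform access logs: every AI call that touches ePHI should produce a record of who asked, what went in, and what came back. A minimal sketch of one such record, assuming illustrative field names (the schema, hashing choices, and storage backend are assumptions, not a prescribed format):

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InferenceAuditRecord:
    """One audit-log entry per AI inference that touches ePHI."""
    user_id: str          # who initiated the query
    patient_id_hash: str  # which record was involved (hashed, not raw PHI)
    model_id: str         # which model/version produced the output
    prompt_sha256: str    # digest of the exact input sent
    output_sha256: str    # digest of the exact output returned
    timestamp: str        # UTC, ISO 8601

def make_record(user_id, patient_id, model_id, prompt, output):
    digest = lambda s: hashlib.sha256(s.encode()).hexdigest()
    return InferenceAuditRecord(
        user_id=user_id,
        patient_id_hash=digest(patient_id),
        model_id=model_id,
        prompt_sha256=digest(prompt),
        output_sha256=digest(output),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = make_record("dr_smith", "MRN-12345", "scribe-v2",
                     "Summarize today's visit", "Patient presented with...")
line = json.dumps(asdict(record))  # append to write-once log storage
```

Logging digests rather than raw prompt text keeps PHI out of the log itself while still letting you prove, later, exactly which input produced which output.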

5 · Policies and training (weeks 11–12)

Develop AI-specific HIPAA policies: acceptable use, prohibited activities (consumer AI with PHI), incident reporting. Train workforce on AI policies. Document training completion.

6 · Continuous monitoring (ongoing)

Establish ongoing monitoring of AI system security. Review audit logs regularly. Conduct periodic risk assessments (annually minimum). Update policies as AI technology and regulations evolve.

Evidence over documentation: Focus on generating verifiable evidence that controls are working, not just policies stating what should happen. Cryptographic attestations, tamper-proof logs, and testable controls demonstrate compliance more effectively than policy documents.
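One way to make logs tamper-evident is to hash-chain the entries, so that altering any record invalidates every hash after it. A minimal sketch of the idea (the entry shape and genesis value are illustrative assumptions; production systems would add signatures and anchored checkpoints):

```python
import hashlib
import json

def chain_entries(entries):
    """Link log entries so any later edit breaks every subsequent hash."""
    prev = "0" * 64  # genesis value
    chained = []
    for entry in entries:
        payload = json.dumps(entry, sort_keys=True)
        link = hashlib.sha256((prev + payload).encode()).hexdigest()
        chained.append({"entry": entry, "prev": prev, "hash": link})
        prev = link
    return chained

def verify(chained):
    """Recompute the chain; any mismatch means tampering."""
    prev = "0" * 64
    for row in chained:
        payload = json.dumps(row["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if row["prev"] != prev or row["hash"] != expected:
            return False
        prev = row["hash"]
    return True

log = chain_entries([{"event": "inference", "user": "a"},
                     {"event": "inference", "user": "b"}])
assert verify(log)
log[0]["entry"]["user"] = "tampered"
assert not verify(log)   # the edit is detectable
```

This is the property an auditor or opposing counsel cares about: not that a policy says logs are protected, but that any retroactive edit is mechanically detectable.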

Frequently asked questions

Can I use ChatGPT or Claude with patient information?

Standard consumer versions (ChatGPT Free, Plus, Team; Claude Free, Pro) should never be used with PHI. These services don’t offer BAAs, may use your data for training, and lack the security controls required by HIPAA. Enterprise offerings with BAAs (e.g., ChatGPT Enterprise, Azure OpenAI Service) can be used as part of a HIPAA-compliant architecture, but you must still implement appropriate safeguards.

Is there a list of HIPAA-certified AI tools?

No. There is no government-issued HIPAA certification for any product, including AI tools. HIPAA compliance is an operational state, not a product attribute. Any vendor claiming "HIPAA certification" is using shorthand for "we have controls enabling compliant deployment"—not an official certification. Always verify BAA availability and assess specific security controls.

Do I need a BAA with every AI vendor?

You need a BAA with any vendor that will access, process, store, or transmit PHI on your behalf. If you use a de-identification proxy that removes all PHI before data reaches the AI vendor, the vendor may not need a BAA (since they never receive PHI). However, this architecture is complex to implement correctly and requires rigorous validation.
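To make the proxy idea concrete, here is a deliberately oversimplified sketch of the redaction step. The regex patterns below are illustrative assumptions only; a production de-identification proxy needs a validated NER pipeline covering all 18 Safe Harbor identifier classes, not a handful of regexes:

```python
import re

# Illustrative patterns only — real de-identification requires a
# validated pipeline covering all 18 Safe Harbor identifier classes.
PATTERNS = {
    "MRN":   re.compile(r"\bMRN[-:\s]*\d+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(text):
    """Replace matched identifiers with typed placeholders before the
    request leaves the covered entity's boundary."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Pt MRN: 445821, seen 03/14/2026, callback 619-555-0142."
safe_prompt = redact(prompt)  # this, not the original, goes to the vendor
```

The hard part is not the substitution but the validation: proving the proxy catches identifiers reliably enough that what reaches the vendor is genuinely not PHI.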

Can PHI be used to train AI models?

PHI can be used for AI training under specific conditions: (1) proper de-identification per HIPAA Safe Harbor or Expert Determination methods, after which it’s no longer PHI; (2) patient authorization; or (3) healthcare operations purposes with appropriate safeguards and BAA coverage. Sharing PHI with external vendors for training requires careful analysis and typically requires authorization. Watch for model memorization risks.

What logging is required for AI systems?

The Security Rule requires audit controls recording activity in systems containing ePHI. For AI, this includes: who initiated queries, what PHI was sent to the AI, what the AI returned, timestamps, and session information. Logs must be retained for 6 years and protected from tampering. Most AI platforms provide only basic access logs—you may need to implement additional logging infrastructure.

What if staff are already using consumer AI with PHI?

This is a HIPAA violation that should be addressed immediately: (1) Issue clear policy prohibiting consumer AI use with PHI; (2) Communicate policy to all workforce members; (3) Conduct training on AI-specific HIPAA requirements; (4) Assess whether a breach occurred and determine notification obligations; (5) Deploy approved AI alternatives that meet HIPAA requirements; (6) Document remediation efforts.

How do I evaluate if an AI vendor is HIPAA compliant?

Request and review: (1) BAA covering AI services specifically; (2) Security documentation (SOC 2 Type II, HITRUST); (3) Technical specifications for encryption, access controls, logging; (4) Data handling policies including training data use; (5) Incident response procedures; (6) Sub-processor list and their compliance status. Conduct your own security assessment and include AI systems in your HIPAA risk analysis.

Does HIPAA apply to AI-generated clinical notes?

Yes. AI-generated content that contains or is derived from PHI is itself PHI and subject to HIPAA protections. This includes AI-drafted clinical notes, summaries, recommendations, and any other outputs incorporating patient information. The clinical provider who reviews and signs the note bears responsibility for its accuracy and appropriate handling.

Key takeaways

  • There is no "HIPAA certified AI" — compliance depends entirely on how AI is deployed and operated
  • BAAs are required when AI vendors handle PHI—but a BAA alone isn’t sufficient for compliance
  • Consumer AI tools are never HIPAA compliant for use with PHI—period
  • Logging is critical — you need inference-level audit trails, not just access logs
  • Evidence over documentation — focus on demonstrating controls work, not just having policies
  • New regulations are coming — HIPAA is the floor, not the ceiling for healthcare AI compliance

For more on building the evidence infrastructure that supports both HIPAA compliance and emerging AI regulations, explore our other resources:

Evidence pack · Article 12 logging · Runtime security assessment

Get your healthcare AI under control before the next audit, contract, or lawsuit.

Healthcare buyers, OCR investigators, and plaintiffs’ attorneys all ask the same question: can you prove your controls executed? GLACIS produces the cryptographically attested evidence that answers it — without exporting PHI, without month-long audits, without rebuilding what you already shipped.

Build your evidence pack · Healthcare AI readiness check

Related guides

Ambient AI · EU
Is your ambient scribe high-risk?
Annex III, Article 6, MDR, Sharp HealthCare class action.
Privacy
Ambient AI scribe privacy
Consent, CIPA liability, Sharp HealthCare lawsuit.
Technical
AI on patient data without a BAA
PHI redaction proxy architecture deep dive.
Regulation
Colorado AI Act
First US-state comprehensive AI law for high-risk systems.
EU
EU AI Act compliance guide
Risk categories, Article 12 logging, conformity assessment.
Crosswalk
EU AI Act vs HIPAA
Healthcare AI compliance across jurisdictions.