JPM 2026 Briefing • January 12–15

Healthcare AI Compliance Briefing

What every healthcare AI vendor and health system executive needs to know heading into 2026. State laws, consent litigation, and governance committee priorities.


Executive Summary

Healthcare AI enters 2026 facing a convergence of regulatory deadlines, precedent-setting litigation, and increasingly sophisticated governance committee scrutiny. This briefing covers what matters most for JPM conversations:

  • State AI laws — Colorado AI Act takes effect June 30, 2026; organizations need to start now
  • Consent litigation — Sharp HealthCare lawsuit creates ambient AI scribe liability template
  • HIPAA gaps — Standard BAAs don’t cover AI-specific risks; model training exposure
  • Governance shift — Committees now asking for evidence, not just attestations

2025–2026 Regulatory Timeline

Healthcare AI faces a compressed compliance window. Here’s what has recently landed and what’s ahead:

Q1 2025 — FDA Guidance Finalization

FDA was expected to finalize guidance on AI/ML-enabled medical devices, including continuous learning systems.

June 30, 2026 — Colorado AI Act Takes Effect

First major state AI law. Requires risk assessments, impact statements, and consumer disclosures for "high-risk AI systems" including healthcare decisions. (Delayed from February 2026 per SB 25B-004.)

August 2, 2026 — EU AI Act Obligations Begin

High-risk AI system requirements fully applicable. Any US company with EU patients or customers must comply.

2026–2027 — State Law Cascade

Connecticut, Texas, Illinois, and others expected to follow Colorado’s model. Patchwork compliance challenge emerging.

Key Insight for JPM

Organizations typically need 6–12 months to implement AI governance programs that meet Colorado AI Act requirements. With Colorado’s June 2026 deadline approaching, organizations that haven’t started face significant execution risk.

Consent Litigation

Ambient AI scribes have become one of healthcare’s most widely adopted AI categories. They also became a significant liability exposure in 2025.

Sharp HealthCare: The Template Case

Filed in November 2025, Saucedo v. Sharp HealthCare alleges that since Abridge’s deployment in April 2025, more than 100,000 patient encounters may have been recorded without adequate consent. The complaint includes several allegations that create a template for future litigation, detailed below.

Potential Liability Exposure

If courts count each recorded patient encounter as a separate CIPA violation, the $5,000-per-violation statutory penalty applied to the alleged 100,000+ encounters would put exposure above $500 million (100,000 × $5,000), consistent with plaintiff’s estimates. Even at a fraction of that figure, the exposure would dwarf typical HIPAA penalties.

The "Capability Test" Expansion

Perhaps more concerning is the emerging "capability test" from Ambriz v. Google LLC (2025). The court held that an AI vendor can be liable as a third-party eavesdropper if it merely possesses the technical capability to use intercepted data for its own purposes—regardless of whether it actually exercises that capability.

Most AI vendors’ terms of service reserve rights to use customer data for model training. Under the capability test, this reservation alone may create CIPA liability even if training never occurs.

HIPAA’s AI Blind Spots

HIPAA was written for fax machines and filing cabinets. While its principles apply to AI, significant gaps exist:

BAA Coverage Gaps

Risk Area | Standard Cloud BAA | AI-Specific BAA
Model training on PHI | Not addressed | Explicit prohibition or consent requirement
Inference logging | Basic access logs only | Full input/output audit trail
Subprocessor AI models | Generic subprocessor clause | Named models, version control
Hallucination liability | Not addressed | Accuracy disclaimers, liability allocation
Breach definition | Standard PHI breach | Includes AI-specific incidents (bias, manipulation)

The Audit Trail Problem

HIPAA’s Security Rule requires audit controls for PHI access. For AI systems, this means logging not just who accessed data, but what the AI did with it:
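As a rough illustration, here is a minimal sketch in Python of what an inference-level record could capture. The field names are assumptions for illustration, not any platform’s actual schema:

```python
import hashlib
from datetime import datetime, timezone

def inference_audit_record(user_id: str, patient_id: str, model_id: str,
                           model_version: str, prompt: str, output: str,
                           controls_run: list[str]) -> dict:
    """Capture one inference event: who touched PHI, which exact model
    processed it, and which controls ran on that interaction."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,                # who accessed the data
        "patient_id": patient_id,          # whose PHI was involved
        "model_id": model_id,
        "model_version": model_version,    # pin the exact version for traceability
        # Store hashes rather than raw text, so the trail proves what ran
        # without duplicating PHI into the log itself.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "controls_run": controls_run,      # e.g. ["phi_redaction", "content_filter"]
    }
```

Hashing inputs and outputs keeps the trail verifiable without turning the audit log itself into a second PHI repository.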

Most AI platforms provide application-level access logs. They don’t provide inference-level audit trails. This creates a compliance gap that governance committees are increasingly identifying.

What Governance Committees Are Asking

Health system AI governance committees have evolved rapidly. In 2023, they asked if you had policies. In 2024, they asked about your certifications. In 2025, they’re asking for evidence.

The New Questions

Based on conversations with health system CISOs, CMIOs, and compliance officers, here are the questions that now regularly appear in procurement reviews:

  1. "Can you demonstrate that PHI was not used to train your model?" — Not a policy statement; actual technical evidence
  2. "What happens if your AI hallucinates clinical information?" — Looking for incident response, not just disclaimers
  3. "How do you prove content filtering ran on a specific patient interaction?" — The execution evidence question
  4. "What’s your consent workflow for AI-assisted documentation?" — Sharp lawsuit made this mandatory
  5. "How will you comply with state AI disclosure requirements?" — Colorado AI Act preparation

From Attestation to Evidence

The fundamental shift is from trust to verification. Policy documents and attestation letters no longer satisfy sophisticated governance committees. They want technical evidence that controls actually ran: inference-level logs, verifiable audit trails, and documented consent records rather than policy statements.

This is the "evidence gap" that’s stalling healthcare AI procurement. Vendors can describe their controls but can’t prove they ran.

Ambient AI Scribe: The 2025 Flashpoint

Ambient AI clinical documentation is healthcare’s killer app—and its biggest compliance headache. Every major health system is either deploying, piloting, or evaluating ambient scribes. Most are underestimating the liability exposure.

The Consent Challenge

In California and other "all-party consent" states, recording a conversation without all parties’ consent is illegal. Doctor-patient conversations have heightened protection as "confidential communications."

Yet many ambient AI deployments rely on:

  • General treatment consent language rather than an AI-specific recording consent
  • Verbal or implied consent that is never documented in the EHR
  • Disclosures made after recording has already begun

The Sharp lawsuit shows where this leads. Explicit, documented, per-encounter consent is becoming the standard.

Best Practice Framework

Ambient AI Consent Checklist

  • ☐ Written consent form specific to AI recording (not general treatment consent)
  • ☐ Consent obtained before recording begins, not retrospectively
  • ☐ Consent captured in EHR with timestamp
  • ☐ Patient can review and request deletion of recordings
  • ☐ Clear disclosure of what AI does with the recording
  • ☐ Opt-out doesn’t affect care quality
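A minimal sketch of how this checklist might translate into an auditable consent record; the class and field names are illustrative assumptions, not any EHR’s schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AmbientConsentRecord:
    """Illustrative per-encounter consent record for ambient AI recording."""
    encounter_id: str
    patient_id: str
    consent_form_version: str      # AI-specific form, not general treatment consent
    consent_obtained_at: datetime  # captured with a timestamp in the EHR
    recording_started_at: datetime
    disclosure_acknowledged: bool  # patient told what the AI does with the audio
    opted_out: bool = False        # opting out must not affect care quality

    def recording_permitted(self) -> bool:
        # Consent must be explicit, informed, and obtained before recording begins.
        return (not self.opted_out
                and self.disclosure_acknowledged
                and self.consent_obtained_at <= self.recording_started_at)
```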

Let's Talk at JPM

We help healthcare AI vendors and health systems generate verifiable compliance evidence. Meet with us in San Francisco, January 12–15.

Recommendations for 2026

For Healthcare AI Vendors

  1. Upgrade your BAA — Standard cloud BAAs don’t cover AI-specific risks. Work with counsel to add model training prohibitions, inference logging requirements, and AI incident definitions.
  2. Build evidence infrastructure — Governance committees want proof controls ran. Implement inference-level logging with cryptographic verification.
  3. Prepare for state AI laws — Map your products against Colorado AI Act high-risk categories. Start impact assessments now.
  4. Document consent workflows — For ambient AI, create auditable consent capture that can withstand litigation discovery.

For Health Systems

  1. Audit existing AI deployments — Particularly ambient scribes. Verify consent procedures meet CIPA/CMIA requirements.
  2. Strengthen vendor diligence — Add AI-specific questions to security questionnaires. Require evidence, not just attestations.
  3. Establish AI governance — If you don’t have a formal AI governance committee, create one. If you do, update its charter for 2026 requirements.
  4. Plan for state law compliance — Colorado AI Act affects any AI used for healthcare decisions. Map your exposure.
