Executive Summary
Healthcare AI enters 2026 facing a convergence of regulatory deadlines, precedent-setting litigation, and increasingly sophisticated governance committee scrutiny. This briefing covers what matters most for conversations at the J.P. Morgan Healthcare Conference (JPM):
- State AI laws — Colorado AI Act takes effect June 30, 2026 (delayed from February); organizations need to start now
- Consent litigation — Sharp HealthCare lawsuit creates ambient AI scribe liability template
- HIPAA gaps — Standard BAAs don’t cover AI-specific risks; model training exposure
- Governance shift — Committees now asking for evidence, not just attestations
2025–2026 Regulatory Timeline
Healthcare AI faces a compressed compliance window. Here’s what’s coming and when:
- FDA AI/ML device guidance (expected in 2026): FDA expected to finalize guidance on AI/ML-enabled medical devices, including continuous learning systems.
- Colorado AI Act (June 30, 2026): First major state AI law. Requires risk assessments, impact statements, and consumer disclosures for "high-risk AI systems," including those used in healthcare decisions. (Delayed from February 2026 per SB 25B-004.)
- EU AI Act (August 2026): High-risk AI system requirements fully applicable. Any US company with EU patients or customers must comply.
- Additional state laws (2026–2027): Connecticut, Texas, Illinois, and others expected to follow Colorado’s model, creating a patchwork compliance challenge.
Key Insight for JPM
Organizations typically need 6–12 months to implement AI governance programs that meet Colorado AI Act requirements. With Colorado’s June 2026 deadline approaching, organizations that haven’t started face significant execution risk.
The Consent Litigation Wave
Ambient AI scribes have become one of healthcare’s most widely adopted AI categories. They also became a significant source of liability exposure in 2025.
Sharp HealthCare: The Template Case
Filed November 2025, Saucedo v. Sharp HealthCare alleges that since Abridge’s deployment in April 2025, over 100,000 patient encounters may have been recorded without adequate consent. The complaint includes several allegations that create templates for future litigation:
- Auto-inserted consent statements — The AI allegedly inserted false attestations into medical records claiming patients had been "advised of and consented to" recording when they had not
- CIPA violations — the California Invasion of Privacy Act provides statutory damages of $5,000 per violation (or three times actual damages, whichever is greater) for illegal wiretapping and eavesdropping
- CMIA violations — Confidentiality of Medical Information Act claims for unauthorized disclosure
- Negligence claims — Failure to implement adequate consent procedures
Potential Liability Exposure
If courts count each recorded patient encounter as a separate CIPA violation, the $5,000 statutory penalty applied across 100,000-plus encounters implies exposure exceeding $500 million, per the plaintiff’s estimates. Even at a fraction of that figure, the exposure would dwarf typical HIPAA penalties.
The "Capability Test" Expansion
Perhaps more concerning is the emerging "capability test" from Ambriz v. Google LLC (2025). The court held that an AI vendor can be liable as a third-party eavesdropper if it merely possesses the technical capability to use intercepted data for its own purposes—regardless of whether it actually exercises that capability.
Most AI vendors’ terms of service reserve rights to use customer data for model training. Under the capability test, this reservation alone may create CIPA liability even if training never occurs.
HIPAA’s AI Blind Spots
HIPAA was written for fax machines and filing cabinets. While its principles apply to AI, significant gaps exist:
BAA Coverage Gaps
| Risk Area | Standard Cloud BAA | AI-Specific BAA |
|---|---|---|
| Model training on PHI | Not addressed | Explicit prohibition or consent requirement |
| Inference logging | Basic access logs only | Full input/output audit trail |
| Subprocessor AI models | Generic subprocessor clause | Named models, version control |
| Hallucination liability | Not addressed | Accuracy disclaimers, liability allocation |
| Breach definition | Standard PHI breach | Includes AI-specific incidents (bias, manipulation) |
The Audit Trail Problem
HIPAA’s Security Rule requires audit controls for PHI access. For AI systems, this means logging not just who accessed data, but what the AI did with it:
- What PHI was sent to the model
- What the model output was
- What version of the model was used
- What safety controls executed
- Whether output was modified before display
Most AI platforms provide application-level access logs. They don’t provide inference-level audit trails. This creates a compliance gap that governance committees are increasingly identifying.
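What an inference-level audit record could look like in practice is sketched below. This is a minimal, hypothetical Python example, not any platform’s actual schema: the field names, the hash-chaining scheme, and the idea of storing PHI only as digests are all illustrative assumptions.

```python
# Hypothetical sketch: one inference-level audit record per model call.
# Field names (model_version, safety_controls, prev_hash) are illustrative,
# not a standard schema or any vendor's real API.
import hashlib
import json
from datetime import datetime, timezone

def digest(text: str) -> str:
    """Store a SHA-256 digest of PHI rather than the raw text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def build_audit_record(prev_hash: str, model_version: str,
                       prompt: str, output: str,
                       safety_controls: dict) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,       # which model produced the output
        "input_sha256": digest(prompt),       # what PHI was sent (as a digest)
        "output_sha256": digest(output),      # what the model returned
        "safety_controls": safety_controls,   # which controls executed and their results
        "prev_hash": prev_hash,               # chain link to the prior record
    }
    # Hash the record itself so later tampering breaks the chain.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return record

# Example: log one ambient-scribe inference.
rec = build_audit_record(
    prev_hash="0" * 64,
    model_version="scribe-model-2026.01",
    prompt="<transcribed encounter text>",
    output="<draft clinical note>",
    safety_controls={"phi_filter": "pass", "hallucination_check": "pass"},
)
print(rec["record_hash"])
```

Storing digests rather than raw text lets the trail show what was sent and returned without becoming a second repository of PHI.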
What Governance Committees Are Asking
Health system AI governance committees have evolved rapidly. In 2023, they asked if you had policies. In 2024, they asked about your certifications. In 2025, they’re asking for evidence.
The New Questions
Based on conversations with health system CISOs, CMIOs, and compliance officers, here are the questions that now regularly appear in procurement reviews:
- "Can you demonstrate that PHI was not used to train your model?" — Not a policy statement; actual technical evidence
- "What happens if your AI hallucinates clinical information?" — Looking for incident response, not just disclaimers
- "How do you prove content filtering ran on a specific patient interaction?" — The execution evidence question
- "What’s your consent workflow for AI-assisted documentation?" — Sharp lawsuit made this mandatory
- "How will you comply with state AI disclosure requirements?" — Colorado AI Act preparation
From Attestation to Evidence
The fundamental shift is from trusting attestations to verifying evidence. Policy documents and attestation letters no longer satisfy sophisticated governance committees. They want:
- Cryptographic proof that controls executed on specific interactions
- Audit trails that can be independently verified
- Real-time dashboards showing compliance status, not annual reports
- Incident evidence that can be produced within hours, not weeks
This is the "evidence gap" that’s stalling healthcare AI procurement. Vendors can describe their controls but can’t prove they ran.
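One way to close that gap is to make the audit trail checkable by the customer. The sketch below assumes records shaped like the hypothetical logging example earlier; it shows how a governance committee (or its auditor) could recompute the hashes and confirm the chain is unbroken without trusting the vendor’s dashboard.

```python
# Hypothetical verifier for hash-chained audit records of the shape sketched
# earlier (prev_hash + record_hash fields); not a standard API.
import hashlib
import json

def verify_chain(records: list[dict]) -> bool:
    """Return True if every record is intact and links to its predecessor."""
    prev = "0" * 64  # genesis value agreed with the logging side
    for rec in records:
        if rec["prev_hash"] != prev:
            return False                      # chain link broken or record missing
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if rec["record_hash"] != expected:
            return False                      # record altered after the fact
        prev = rec["record_hash"]
    return True
```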
Ambient AI Scribe: The 2025 Flashpoint
Ambient AI clinical documentation is healthcare’s killer app—and its biggest compliance headache. Every major health system is either deploying, piloting, or evaluating ambient scribes. Most are underestimating the liability exposure.
The Consent Challenge
In California and other "all-party consent" states, recording a conversation without all parties’ consent is illegal. Doctor-patient conversations have heightened protection as "confidential communications."
Yet many ambient AI deployments rely on:
- Implicit consent — "The patient didn’t object when I mentioned we use AI"
- General consent forms — Buried in admission paperwork signed weeks earlier
- Verbal mention — Not documented, hard to prove in litigation
The Sharp lawsuit shows where this leads. Explicit, documented, per-encounter consent is becoming the standard.
Best Practice Framework
Ambient AI Consent Checklist
- ☐ Written consent form specific to AI recording (not general treatment consent)
- ☐ Consent obtained before recording begins, not retrospectively
- ☐ Consent captured in EHR with timestamp
- ☐ Patient can review and request deletion of recordings
- ☐ Clear disclosure of what AI does with the recording
- ☐ Opt-out doesn’t affect care quality
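As a concrete illustration of the checklist, here is a minimal, hypothetical Python sketch of a per-encounter consent record captured with a timestamp before recording begins. The class and field names are assumptions for illustration and would need to be mapped onto the EHR’s actual data model.

```python
# Hypothetical per-encounter consent record for ambient AI recording.
# Field names are illustrative, not any EHR's real schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AmbientAIConsent:
    encounter_id: str
    patient_id: str
    clinician_id: str
    consent_given: bool          # explicit yes/no, never inferred from silence
    consent_method: str          # e.g. "signed form", "verbal, witnessed"
    disclosure_version: str      # which AI disclosure text the patient saw
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

consent = AmbientAIConsent(
    encounter_id="enc-001",
    patient_id="pt-123",
    clinician_id="dr-456",
    consent_given=True,
    consent_method="signed form",
    disclosure_version="ambient-ai-disclosure-v2",
)

# Serialize for storage against the encounter before recording starts.
print(json.dumps(asdict(consent), indent=2))
```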
Let's Talk at JPM
We help healthcare AI vendors and health systems generate verifiable compliance evidence. Meet with us in San Francisco, January 12–15.
Recommendations for 2026
For Healthcare AI Vendors
- Upgrade your BAA — Standard cloud BAAs don’t cover AI-specific risks. Work with counsel to add model training prohibitions, inference logging requirements, and AI incident definitions.
- Build evidence infrastructure — Governance committees want proof controls ran. Implement inference-level logging with cryptographic verification.
- Prepare for state AI laws — Map your products against Colorado AI Act high-risk categories. Start impact assessments now.
- Document consent workflows — For ambient AI, create auditable consent capture that can withstand litigation discovery.
For Health Systems
- Audit existing AI deployments — Particularly ambient scribes. Verify consent procedures meet CIPA/CMIA requirements.
- Strengthen vendor diligence — Add AI-specific questions to security questionnaires. Require evidence, not just attestations.
- Establish AI governance — If you don’t have a formal AI governance committee, create one. If you do, update its charter for 2026 requirements.
- Plan for state law compliance — Colorado AI Act affects any AI used for healthcare decisions. Map your exposure.
Related Resources
Healthcare AI Readiness Assessment
8-question diagnostic covering consent, BAAs, audit trails, governance, and state law exposure.
HIPAA Compliant AI Guide
Comprehensive guide to BAA requirements, PHI handling, and Security Rule compliance for AI systems.
Ambient AI Scribe Privacy Guide
Deep dive on consent requirements, CIPA liability, and the Sharp HealthCare lawsuit implications.
Colorado AI Act Guide
What the first major state AI law means for healthcare. High-risk categories, requirements, and timelines.
The Proof Gap Whitepaper
Why healthcare AI compliance claims fail without runtime evidence, and what to do about it.
Healthcare AI Industry Overview
Complete landscape of healthcare AI adoption, risks, and governance requirements.