Using AI with patient data requires careful attention to HIPAA requirements. This guide covers everything healthcare organizations and AI vendors need to know about deploying HIPAA compliant AI tools—from Business Associate Agreements to technical safeguards, common pitfalls, and the emerging regulatory landscape.
What Makes AI "HIPAA Compliant"?
There's no official "HIPAA compliant" certification for AI tools. Instead, HIPAA compliant AI refers to AI systems that are deployed and operated in a manner consistent with HIPAA requirements when handling Protected Health Information (PHI).
For an AI tool to be used with PHI in a HIPAA-compliant manner, several conditions must be met:
- Business Associate Agreement (BAA) — A signed agreement with the AI vendor establishing HIPAA obligations
- Technical safeguards — Encryption, access controls, and audit logging
- Administrative safeguards — Policies, training, and risk assessments
- Physical safeguards — Physical security of systems and data centers
- Breach notification procedures — Documented processes for identifying and reporting breaches
Critical warning: Using consumer AI tools (like the free versions of ChatGPT, Claude, or Gemini) with PHI is a HIPAA violation. These tools are not designed for healthcare use and their terms of service explicitly prohibit inputting sensitive health information.
The Business Associate Agreement (BAA)
The BAA is the cornerstone of HIPAA compliant AI. Under HIPAA, any entity that handles PHI on behalf of a covered entity is a "Business Associate" and must sign a BAA.
A BAA establishes that the vendor:
- Will use appropriate safeguards to protect PHI
- Will report security incidents and breaches
- Will ensure any subcontractors also comply with HIPAA
- Will return or destroy PHI when the relationship ends
- Will make their practices available for compliance audits
Which AI Vendors Offer BAAs?
Major cloud and AI providers that offer BAAs include:
| Provider | BAA Available | Notes |
|---|---|---|
| Microsoft Azure (OpenAI) | Yes | Azure OpenAI Service covered under Microsoft BAA |
| Amazon Web Services | Yes | Amazon Bedrock and other AI services covered |
| Google Cloud | Yes | Vertex AI and healthcare-specific services |
| OpenAI (Direct) | Enterprise only | ChatGPT Enterprise offers BAA; API and Plus do not |
| Anthropic (Claude) | Enterprise only | Available for qualifying enterprise customers |
BAA ≠ Compliance: Having a BAA is necessary but not sufficient. The BAA shifts some liability to the vendor, but the covered entity remains responsible for ensuring the AI is used appropriately and that proper safeguards are in place.
Technical Requirements for HIPAA Compliant AI
The HIPAA Security Rule requires specific technical safeguards. For AI systems handling PHI, these translate to:
1. Encryption
- Data in transit: TLS 1.2 or higher for all API calls and data transmission
- Data at rest: AES-256 encryption for stored PHI, including logs and model outputs
- Key management: Secure key storage and rotation procedures
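To make these encryption requirements concrete, here is a minimal Python sketch, assuming the third-party `cryptography` package: an HTTPS call that refuses anything below TLS 1.2, and AES-256-GCM for records written to disk. Key management (KMS/HSM storage, rotation) is deliberately out of scope here.

```python
import os
import ssl
import urllib.request

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# --- Data in transit: refuse anything below TLS 1.2 ---
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

def https_get(url: str) -> bytes:
    """HTTPS call that fails if the server cannot negotiate TLS 1.2 or higher."""
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()

# --- Data at rest: AES-256-GCM for stored PHI, logs, and model outputs ---
# In production the key should live in a KMS/HSM with rotation, not in code.
key = AESGCM.generate_key(bit_length=256)

def encrypt_record(plaintext: bytes) -> bytes:
    """Encrypt one record; the 12-byte nonce is prepended for later decryption."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_record(blob: bytes) -> bytes:
    """Split off the nonce and decrypt the record."""
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)
```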
2. Access Controls
- Unique user identification: Each user must have unique credentials
- Role-based access: Minimum necessary access to PHI
- Automatic logoff: Sessions must time out after a defined period of inactivity
- Authentication: Strong authentication, preferably MFA
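A minimal sketch of how these access controls might be enforced in application code; the roles, permissions, and 15-minute timeout below are illustrative assumptions, not values mandated by HIPAA.

```python
from datetime import datetime, timedelta, timezone

# Illustrative role -> permission map enforcing "minimum necessary" access.
ROLE_PERMISSIONS = {
    "clinician": {"read_phi", "run_inference"},
    "billing":   {"read_billing"},
    "admin":     {"manage_users"},
}

SESSION_TIMEOUT = timedelta(minutes=15)  # automatic logoff after inactivity

def authorize(role: str, action: str, last_activity: datetime) -> bool:
    """Allow the action only for a fresh session and a role that grants it."""
    if datetime.now(timezone.utc) - last_activity > SESSION_TIMEOUT:
        return False  # session expired; force re-authentication (ideally with MFA)
    # Every allow/deny decision should also be written to the audit log (next section).
    return action in ROLE_PERMISSIONS.get(role, set())
```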
3. Audit Controls
- Activity logging: Record who accessed what PHI and when
- AI decision logging: Document AI inputs, outputs, and model versions
- Log retention: Maintain logs for a minimum of six years, per HIPAA's documentation retention requirement
- Tamper-proof logs: Ensure logs cannot be modified or deleted
The logging gap: Most AI platforms provide basic access logs but lack the detailed inference-level logging needed for true HIPAA compliance and clinical accountability. Knowing that a user made an API call is different from knowing exactly what PHI was processed and what the AI output was.
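As a hedged illustration of inference-level logging, the sketch below records who ran an inference, against which model version, and cryptographic hashes of the exact input and output. Hashing keeps raw PHI out of the log itself; tamper-proof storage (append-only or WORM media, hash chaining) and the six-year retention are assumed to be handled by the surrounding infrastructure.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(user_id: str, prompt: str, output: str,
                  model_version: str, log_path: str = "ai_audit.jsonl") -> None:
    """Append one inference-level audit record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,                                             # who
        "model_version": model_version,                                 # which model
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),    # what went in
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),   # what came out
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```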
Common HIPAA Violations with AI
Healthcare organizations frequently make these mistakes when deploying AI:
1. Using Consumer AI Tools
Staff using ChatGPT, Claude, or other consumer AI tools to summarize patient notes, draft referral letters, or get clinical decision support. This is always a HIPAA violation when PHI is involved.
2. Missing or Incomplete BAAs
Assuming that because a vendor is "healthcare-focused" they automatically have HIPAA coverage. Always verify the BAA exists and covers the specific services you're using.
3. Inadequate Logging
Deploying AI without capturing the audit trail required by HIPAA. If you can't demonstrate what PHI was processed and how, you can't prove compliance.
4. Shadow AI
Departments or individual clinicians deploying AI tools without IT or compliance review. This is increasingly common as AI tools become more accessible.
5. Training Data Issues
Using PHI to fine-tune or train AI models without proper authorization, de-identification, or safeguards.
HIPAA Compliant AI Deployment Checklist
- Signed BAA with AI vendor on file
- AI service explicitly covered in BAA scope
- Encryption in transit (TLS 1.2+) verified
- Encryption at rest (AES-256) confirmed
- Access controls configured (role-based access)
- MFA enabled for all users
- Audit logging enabled and retention configured
- Incident response procedures documented
- Staff training on AI-specific HIPAA requirements completed
- Risk assessment including AI systems completed
- Data flow documentation (where does PHI go?)
- Subcontractor/sub-processor review completed
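Teams that want to operationalize the checklist can express it as a simple pre-deployment gate. The sketch below is illustrative only; the item names mirror the list above, and how each item actually gets verified is up to your compliance process.

```python
# Illustrative pre-deployment gate mirroring the checklist above.
CHECKLIST = {
    "baa_signed": False,
    "ai_service_in_baa_scope": False,
    "tls_1_2_plus_verified": False,
    "aes_256_at_rest_confirmed": False,
    "rbac_configured": False,
    "mfa_enabled": False,
    "audit_logging_and_retention": False,
    "incident_response_documented": False,
    "staff_training_completed": False,
    "risk_assessment_completed": False,
    "phi_data_flows_documented": False,
    "subprocessor_review_completed": False,
}

def ready_to_deploy(checklist: dict[str, bool]) -> bool:
    """Block go-live until every item is verified; report what is still missing."""
    missing = [item for item, done in checklist.items() if not done]
    for item in missing:
        print(f"BLOCKED: {item} not verified")
    return not missing
```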
Architectural Patterns for HIPAA Compliant AI
There are several approaches to using AI with PHI while maintaining HIPAA compliance:
Pattern 1: Enterprise AI with BAA
Use enterprise AI services (Azure OpenAI, AWS Bedrock, Google Vertex AI) with a signed BAA. PHI flows to the AI provider, and that processing is covered under the BAA.
Pros: Simplest to implement, leverages provider security controls
Cons: PHI leaves your environment, dependent on provider compliance
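A minimal sketch of this pattern using the `openai` Python SDK (v1 or later) against an Azure OpenAI deployment; the endpoint, deployment name, and API version below are placeholders to be replaced with your own BAA-covered resource.

```python
import os
from openai import AzureOpenAI  # assumes the openai Python SDK, v1 or later

# All PHI-bearing traffic is routed only to the BAA-covered Azure resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # your covered endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",                            # placeholder API version
)

clinical_note = "..."  # PHI: permissible here because the service is under the BAA

response = client.chat.completions.create(
    model="your-deployment-name",  # placeholder Azure deployment name
    messages=[
        {"role": "system", "content": "Draft a referral letter from this note."},
        {"role": "user", "content": clinical_note},
    ],
)
print(response.choices[0].message.content)
```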
Pattern 2: PHI Redaction Proxy
Deploy a proxy layer that strips PHI before sending data to the AI, then re-inserts it into the response. The AI never sees actual PHI.
Pros: Can use any AI provider, PHI never leaves your control
Cons: Complex to implement correctly, may reduce AI quality for some use cases
Learn more in our technical deep-dive: How We Used AI on Patient Data Without a BAA.
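As a toy illustration of the redaction idea (not the implementation described in the deep-dive above), the Python sketch below swaps identifiers for placeholder tokens, calls a stand-in model function, and restores the originals locally. The regex patterns are deliberately simplistic; a production proxy needs a validated de-identification engine covering all 18 HIPAA identifier categories.

```python
import re

# Illustrative patterns only; real de-identification needs a validated engine.
PATTERNS = {
    "MRN":   re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace PHI with placeholder tokens and remember the mapping locally."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(dict.fromkeys(pattern.findall(text))):
            token = f"[{label}_{i}]"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values into the AI's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

def call_model(prompt: str) -> str:
    """Stand-in for any AI provider; it only ever sees redacted text."""
    return prompt  # echo, for demonstration

raw_note = "Patient MRN: 12345678 (phone 555-867-5309) presents with chest pain."
safe_prompt, mapping = redact(raw_note)                   # PHI stays local
final_output = restore(call_model(safe_prompt), mapping)  # PHI re-inserted locally
```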
Pattern 3: On-Premise AI
Deploy AI models within your own infrastructure. PHI never leaves your environment.
Pros: Maximum control, no BAA required for the model provider
Cons: Significant infrastructure costs, may not match cloud AI quality
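One common way to realize this pattern is to serve an open-weight model behind an OpenAI-compatible endpoint inside your own network (for example, with an inference server such as vLLM). The sketch below assumes that setup; the hostname and model name are placeholders.

```python
from openai import OpenAI  # used here only as a generic HTTP client

# The endpoint lives inside your own network; PHI never crosses the perimeter.
client = OpenAI(
    base_url="http://inference.internal.example:8000/v1",  # placeholder internal host
    api_key="local-unused",  # many self-hosted servers accept any value
)

clinical_note = "..."  # PHI stays on infrastructure you control

summary = client.chat.completions.create(
    model="llama-3.1-70b-instruct",  # placeholder local model/deployment name
    messages=[{"role": "user", "content": f"Summarize this note:\n{clinical_note}"}],
)
print(summary.choices[0].message.content)
```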
Beyond HIPAA: Emerging AI Regulations
HIPAA covers data privacy and security, but new regulations are emerging that address AI-specific risks:
- Colorado AI Act (June 2026) — Requires documentation and impact assessments for high-risk AI
- EU AI Act (August 2026) — Comprehensive AI regulation classifying healthcare AI as high-risk
- ISO 42001 — International standard for AI management systems
- California ADMT Regulations (January 2027) — Automated decision-making requirements
Organizations should prepare for these overlapping requirements by building comprehensive AI evidence infrastructure that satisfies multiple frameworks.
Need HIPAA-Ready AI Evidence?
Our Evidence Pack Sprint delivers board-ready compliance documentation for healthcare AI vendors—including HIPAA, state regulations, and enterprise procurement requirements.
Book a Sprint Call
Key Takeaways
- There's no "HIPAA certified" AI — compliance depends on how the AI is deployed and operated
- BAAs are required when AI vendors handle PHI, but a BAA alone isn't enough
- Consumer AI tools are never HIPAA compliant for use with PHI
- Logging is critical — you need inference-level audit trails, not just access logs
- New regulations are coming — HIPAA is just the floor, not the ceiling
For more on building the evidence infrastructure that supports both HIPAA compliance and emerging AI regulations, read our white paper: The Proof Gap in Healthcare AI.