FDA Regulatory Framework for AI/ML Devices
The U.S. Food and Drug Administration (FDA) regulates AI/ML-enabled medical devices under the Federal Food, Drug, and Cosmetic Act (FD&C Act). Unlike traditional software that follows deterministic logic, AI/ML algorithms can learn from data and evolve—presenting unique regulatory challenges that FDA has addressed through a series of guidance documents and policy frameworks.[1][2]
Evolution of FDA’s AI/ML Approach
FDA’s journey toward comprehensive AI/ML regulation began with the 2019 discussion paper "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device." This established the conceptual foundation for what would become the Total Product Lifecycle (TPLC) approach—a regulatory model that acknowledges AI/ML devices may change over time while maintaining safety and effectiveness.[6]
Key milestones in FDA’s AI/ML regulatory evolution:
- 2019: Proposed regulatory framework discussion paper establishing TPLC concept
- 2021: AI/ML-Based SaMD Action Plan outlining five priority areas
- 2021: Good Machine Learning Practice (GMLP) guiding principles with Health Canada and UK MHRA
- 2023: Draft PCCP guidance for public comment
- December 2024: Final PCCP guidance issued
- January 2025: Draft AI-enabled device software lifecycle guidance
- August 2025: Full PCCP implementation
Scope of FDA Oversight
FDA regulates AI/ML-enabled devices that meet the definition of a "device" under the FD&C Act—intended for use in diagnosis, cure, mitigation, treatment, or prevention of disease. This includes:
- Software as a Medical Device (SaMD)—standalone software performing medical functions
- Software in a Medical Device (SiMD)—software integral to a hardware device
- AI-enabled diagnostic systems—imaging analysis, pathology, radiology
- Clinical decision support—systems that inform, but don’t replace, clinical judgment
Software as a Medical Device (SaMD) Classification
FDA follows the International Medical Device Regulators Forum (IMDRF) framework for classifying SaMD based on two factors: the significance of the information provided by the software and the state of the healthcare situation or condition.[7]
IMDRF Risk Categorization Matrix
| State of Healthcare Situation | Treat or Diagnose | Drive Clinical Management | Inform Clinical Management |
|---|---|---|---|
| Critical | IV (Highest) | III | II |
| Serious | III | II | I (Lowest) |
| Non-Serious | II | I | I |
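For illustration, the matrix reduces to a simple lookup table. The sketch below is a Python encoding of it; the key strings and function name are our own shorthand, not IMDRF or FDA terminology.

```python
# Illustrative encoding of the IMDRF matrix above. The key strings and
# function name are our own shorthand, not IMDRF or FDA terminology.
IMDRF_MATRIX = {
    ("critical", "treat_or_diagnose"): "IV",
    ("critical", "drive_management"): "III",
    ("critical", "inform_management"): "II",
    ("serious", "treat_or_diagnose"): "III",
    ("serious", "drive_management"): "II",
    ("serious", "inform_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_management"): "I",
    ("non_serious", "inform_management"): "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Return the IMDRF risk category, from I (lowest) to IV (highest)."""
    return IMDRF_MATRIX[(situation, significance)]

assert samd_category("critical", "treat_or_diagnose") == "IV"
assert samd_category("serious", "inform_management") == "I"
```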
Approximately 75% of FDA-authorized AI/ML medical devices are classified as Class II, requiring 510(k) premarket notification. Higher-risk Class III devices require Premarket Approval (PMA) with more extensive clinical evidence.[4]
Device Classification Examples
Class I (Low Risk)
- General wellness applications
- Administrative workflow tools
- Non-clinical decision support
Generally exempt from premarket submission.
Class II (Moderate Risk)
- Radiology AI (CADe/CADx)
- ECG analysis software
- Diabetic retinopathy screening
510(k) or De Novo pathway required.
Class III (High Risk)
- Autonomous diagnostic systems
- AI-driven treatment recommendations
- Life-sustaining device algorithms
PMA with clinical trials required.
Clinical Decision Support (CDS) Exemptions
Certain CDS software falls outside the device definition under the 21st Century Cures Act when:
- The basis for each recommendation is transparent
- The provider can independently review that basis
- The software is not intended to replace clinical judgment
Software meeting these criteria may be exempt from FDA device regulation.
Predetermined Change Control Plans (PCCP)
The Predetermined Change Control Plan (PCCP) represents FDA’s most significant innovation in AI/ML device regulation. Finalized in December 2024 with full implementation by August 2025, PCCPs enable manufacturers to pre-authorize certain algorithm modifications without requiring new marketing submissions—addressing the fundamental challenge that AI/ML systems are designed to learn and improve.[1]
PCCP Requirements
A PCCP submitted as part of a marketing application must include:
PCCP Core Components
| Component | Description | FDA Expectation |
|---|---|---|
| Description of Modifications | Specific types of changes that may be made to the device | Clear boundaries on what changes are pre-authorized |
| Modification Protocol | Methodology for developing and implementing changes | Documented development process with quality controls |
| Impact Assessment | Methods to evaluate the effect of modifications on safety/effectiveness | Quantitative metrics and acceptance criteria |
| Verification and Validation | Testing protocols for each type of modification | Evidence that changes meet performance specifications |
| Traceability | Documentation linking modifications to assessments | Complete audit trail for all changes made under PCCP |
Modifications Requiring New Submissions
Even with an approved PCCP, certain modifications fall outside its scope and require new marketing submissions:
- Changes to intended use—expanding indications, new patient populations
- New input types—different imaging modalities, data sources
- Fundamental algorithm changes—new ML architectures, training approaches
- Performance degradation—changes that reduce safety or effectiveness
- New risks—modifications introducing hazards not previously addressed
PCCP Documentation Best Practices
Organizations implementing PCCPs should maintain the following records (a minimal sketch of one such record follows the list):
- Version control—immutable records of all model versions with deployment timestamps
- Validation datasets—curated test sets used to verify each modification
- Performance baselines—documented metrics against which changes are evaluated
- Decision logs—rationale for determining whether changes fall within PCCP scope
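One hypothetical shape for these records, sketched in Python; every field name here is illustrative rather than prescribed by FDA or the PCCP guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: an entry cannot be mutated once written
class ModelChangeRecord:
    """One PCCP decision-log entry. All field names are illustrative."""
    model_version: str
    deployed_at: datetime
    change_description: str
    within_pccp_scope: bool      # the decision: PCCP change vs. new submission
    scope_rationale: str         # why the change falls inside (or outside) scope
    validation_dataset_id: str   # curated test set used to verify the change
    baseline_metrics: dict = field(default_factory=dict)
    observed_metrics: dict = field(default_factory=dict)

record = ModelChangeRecord(
    model_version="2.3.1",
    deployed_at=datetime.now(timezone.utc),
    change_description="Re-trained on new site data; no architecture change",
    within_pccp_scope=True,
    scope_rationale="Matches modification type 1 in the approved PCCP",
    validation_dataset_id="valset-2024-09",
    baseline_metrics={"sensitivity": 0.91, "specificity": 0.89},
    observed_metrics={"sensitivity": 0.92, "specificity": 0.90},
)
```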
Submission Pathways for AI/ML Devices
AI/ML medical devices can be authorized through three primary regulatory pathways, each with distinct requirements and timelines.[5]
510(k) Premarket Notification
The 510(k) pathway is used when a device is "substantially equivalent" to a legally marketed predicate device. This is the most common pathway for AI/ML devices, accounting for approximately 75% of authorizations.
510(k) Key Requirements
- Identify a legally marketed predicate device
- Demonstrate the same intended use and similar technological characteristics
- Performance testing comparing the device to the predicate
- Software documentation per FDA guidance
Timeline: 90-day FDA review (actual: 3-6 months typical)
De Novo Classification
The De Novo pathway is for novel, low-to-moderate risk devices without a predicate. Many innovative AI/ML devices use this pathway when no substantially equivalent device exists.
De Novo Key Requirements
- Demonstrate the device is low-to-moderate risk
- Show that general and special controls provide reasonable assurance of safety and effectiveness
- Propose a device classification and product code
- Provide clinical or analytical validation data
Timeline: 150-day FDA review (actual: 6-12 months typical)
Premarket Approval (PMA)
PMA is required for Class III devices that pose the highest risk. This pathway requires the most extensive clinical evidence and FDA review.
PMA Key Requirements
- Valid scientific evidence demonstrating safety and effectiveness
- Clinical trials typically required
- Manufacturing quality systems inspection
- Post-market surveillance commitments
Timeline: 180-day FDA review (actual: 12-24 months typical)
Total Product Lifecycle (TPLC) Approach
FDA’s Total Product Lifecycle approach recognizes that AI/ML devices are fundamentally different from traditional medical devices—they’re designed to learn, adapt, and improve. The January 2025 AI-enabled device software lifecycle draft guidance proposes expectations for managing these devices throughout their entire lifecycle.[2]
TPLC Core Principles
Good Machine Learning Practice (GMLP)
FDA, Health Canada, and the UK MHRA jointly published 10 guiding principles for GMLP covering data quality, model design, performance evaluation, and ongoing monitoring.[9] These principles form the foundation of lifecycle management expectations.
Algorithm Change Protocol
Manufacturers must establish protocols for how algorithm changes are developed, validated, and deployed. This includes defining what constitutes a "significant" change requiring new submission versus a change manageable under a PCCP.
Performance Monitoring Strategy
Continuous monitoring of real-world performance is expected, including detection of performance drift, monitoring across subpopulations, and processes for addressing performance degradation.
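As a toy illustration of drift detection against a validation baseline, the Python sketch below flags sustained accuracy drops; the window size and margin are arbitrary assumptions, not regulatory thresholds.

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Flag when rolling accuracy drops below the validation baseline by more
    than a set margin. Window size and margin are illustrative, not regulatory."""

    def __init__(self, baseline: float, margin: float = 0.05, window: int = 500):
        self.baseline = baseline
        self.margin = margin
        self.recent = deque(maxlen=window)

    def record(self, correct: bool) -> bool:
        """Record one adjudicated outcome; return True when drift is flagged."""
        self.recent.append(1.0 if correct else 0.0)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data for a stable estimate yet
        return mean(self.recent) < self.baseline - self.margin

monitor = DriftMonitor(baseline=0.91)
drifted = monitor.record(correct=True)
```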
Re-Training Protocols
For adaptive algorithms, manufacturers must document how re-training decisions are made, what data is used, how validation is performed, and how updates are deployed while maintaining device safety.
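One way to make re-training decisions auditable is an explicit acceptance gate that compares a candidate model against pre-specified criteria; the metrics and thresholds in this sketch are invented examples, not FDA values.

```python
# Hypothetical pre-deployment gate for a re-trained model under a PCCP.
# The acceptance criteria below are invented examples, not FDA thresholds.
ACCEPTANCE_CRITERIA = {"sensitivity": 0.90, "specificity": 0.87}

def retraining_gate(candidate_metrics: dict) -> bool:
    """Approve deployment only if every metric meets its acceptance criterion."""
    return all(
        candidate_metrics.get(name, 0.0) >= threshold
        for name, threshold in ACCEPTANCE_CRITERIA.items()
    )

assert retraining_gate({"sensitivity": 0.92, "specificity": 0.90})
assert not retraining_gate({"sensitivity": 0.88, "specificity": 0.90})
```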
Locked vs. Adaptive Algorithms
FDA distinguishes between two fundamental types of AI/ML algorithms:
Locked Algorithms
A locked algorithm produces the same result each time the same input is applied; it does not change after deployment.
- Traditional regulatory pathway applies
- Changes require a new submission or PCCP
- Simpler post-market monitoring
Adaptive Algorithms
An adaptive algorithm changes its behavior over time based on new data or learning from deployed use.
- PCCP strongly recommended
- Continuous monitoring required
- Re-training protocols must be documented
Cybersecurity Requirements for AI Devices
FDA’s 2023 guidance on Cybersecurity in Medical Devices applies to all software-based devices, with specific considerations for AI/ML systems. The guidance requires manufacturers to demonstrate cryptographic protections, integrity controls, and secure update mechanisms in premarket submissions.[3]
Core Cybersecurity Requirements
| Requirement | Description | AI-Specific Considerations |
|---|---|---|
| Threat Modeling | Identification and analysis of potential cybersecurity threats | Adversarial inputs, model poisoning, data manipulation |
| Security Risk Assessment | Evaluation of exploitability and severity of identified threats | Model extraction, inference attacks, training data leakage |
| Security Controls | Technical measures to mitigate identified risks | Input validation, anomaly detection, model integrity checks |
| Software Bill of Materials | Inventory of all software components including ML libraries | Model dependencies, training frameworks, inference engines |
| Vulnerability Management | Processes for identifying and addressing vulnerabilities | Model vulnerability scanning, adversarial testing |
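As a small illustration of the "model integrity checks" control above, the sketch below verifies a serialized model file against a release digest; the manifest and filename are hypothetical.

```python
import hashlib
from pathlib import Path

def model_file_digest(path: Path) -> str:
    """SHA-256 digest of a serialized model file."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# In practice the expected digests would come from a signed release
# manifest; this dict and its contents are placeholders.
RELEASED_DIGESTS = {"model-2.3.1.onnx": "<digest from signed manifest>"}

def verify_model(path: Path) -> bool:
    """Refuse to load a model whose on-disk bytes do not match the release."""
    expected = RELEASED_DIGESTS.get(path.name)
    return expected is not None and model_file_digest(path) == expected
```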
AI-Specific Security Concerns
AI/ML devices face unique cybersecurity threats beyond traditional software; a basic input-validation sketch follows the list:
- Adversarial attacks—carefully crafted inputs designed to cause misclassification
- Data poisoning—manipulation of training data to introduce vulnerabilities
- Model extraction—unauthorized copying of proprietary algorithms
- Membership inference—attacks revealing training data contents
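A basic mitigation for malformed or crudely out-of-range adversarial inputs is to validate every input against the envelope the model was validated on. This sketch uses invented bounds and shapes, and it is not a defense against carefully crafted in-distribution attacks.

```python
import numpy as np

EXPECTED_SHAPE = (512, 512)   # hypothetical validated input size
PIXEL_RANGE = (0.0, 1.0)      # hypothetical normalized intensity range

def validate_input(image: np.ndarray) -> bool:
    """Reject inputs outside the validated envelope before inference.

    This blocks malformed data and some crude out-of-range adversarial
    inputs; carefully crafted in-distribution attacks require adversarial
    testing and model-level mitigations instead."""
    if image.shape != EXPECTED_SHAPE:
        return False
    lo, hi = PIXEL_RANGE
    return bool(image.min() >= lo and image.max() <= hi)
```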
Real-World Performance Monitoring
FDA increasingly emphasizes the importance of monitoring AI/ML device performance in real-world clinical settings. The September 2025 request for public comment on AI device performance underscores the agency’s focus on developing standardized approaches to real-world evidence collection.[8]
Monitoring Expectations
Manufacturers should implement monitoring systems that track the following (a sketch of subgroup tracking appears after the list):
- Algorithm accuracy—ongoing performance metrics compared to validation benchmarks
- Performance drift—detection of degradation over time or across changing data distributions
- Subpopulation performance—monitoring across demographic groups, clinical sites, and use cases
- Unexpected outputs—logging and analysis of edge cases and anomalies
- User feedback—mechanisms to capture clinician reports of errors or concerns
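A minimal sketch of subpopulation tracking, assuming adjudicated per-case outcomes are available; the grouping keys and the minimum-count threshold are our own choices.

```python
from collections import defaultdict

class SubgroupTracker:
    """Track accuracy per subgroup (clinical site, demographic bucket, etc.).
    The grouping keys and minimum-count threshold are illustrative choices."""

    def __init__(self, min_count: int = 100):
        self.correct = defaultdict(int)
        self.total = defaultdict(int)
        self.min_count = min_count

    def record(self, subgroup: str, correct: bool) -> None:
        self.total[subgroup] += 1
        self.correct[subgroup] += int(correct)

    def underperforming(self, threshold: float) -> list:
        """Subgroups with enough data whose accuracy falls below threshold."""
        return [
            g for g in self.total
            if self.total[g] >= self.min_count
            and self.correct[g] / self.total[g] < threshold
        ]
```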
Evidence Infrastructure Requirements
Effective real-world monitoring requires infrastructure capable of:
Per-Inference Logging
Capture detailed records of each AI inference including inputs, outputs, model version, and any guardrails that executed. This enables root cause analysis when issues arise.
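A minimal sketch of one such record as a JSON line; the schema is illustrative, not an FDA-defined format.

```python
import json
from datetime import datetime, timezone

def inference_log_entry(model_version: str, input_ref: str,
                        output: dict, guardrails_fired: list) -> str:
    """Serialize one inference event as a JSON line. The schema is our own
    sketch, not an FDA-defined format."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,   # pointer to the stored input, not raw PHI
        "output": output,
        "guardrails_fired": guardrails_fired,
    }, sort_keys=True)
```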
Immutable Audit Trails
Logs should be tamper-evident and independently verifiable. Regulatory submissions may require demonstrating that records haven’t been altered.
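One common way to make logs tamper-evident is hash chaining, where each record's hash incorporates its predecessor's. This sketch assumes the JSON-line entries from the previous example.

```python
import hashlib

GENESIS = "0" * 64  # arbitrary starting value for the chain

def chain_hash(prev_hash: str, entry_json: str) -> str:
    """Link a log entry to its predecessor so any later edit is detectable."""
    return hashlib.sha256((prev_hash + entry_json).encode()).hexdigest()

def verify_chain(entries: list, hashes: list) -> bool:
    """Recompute the chain; altering any entry breaks every subsequent hash."""
    prev = GENESIS
    for entry, expected in zip(entries, hashes):
        prev = chain_hash(prev, entry)
        if prev != expected:
            return False
    return True
```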
Automated Alerting
Systems should automatically detect performance degradation and alert appropriate personnel when metrics fall outside acceptable ranges.
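A toy range check illustrates the idea; the metric names and bounds are hypothetical, and a production system would page on-call staff rather than just log.

```python
import logging

logger = logging.getLogger("rw_monitoring")  # hypothetical monitoring logger

def check_and_alert(metric: str, value: float, lower: float, upper: float) -> None:
    """Log an alert when a tracked metric leaves its acceptable range.
    A production system would page on-call staff instead of logging."""
    if not (lower <= value <= upper):
        logger.warning("ALERT: %s=%.3f outside [%.3f, %.3f]",
                       metric, value, lower, upper)

check_and_alert("sensitivity", 0.84, lower=0.88, upper=1.00)
```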
Reporting Capabilities
Generate reports suitable for regulatory submissions, including MDR (Medical Device Report) documentation when adverse events occur.
Frequently Asked Questions
What is a Predetermined Change Control Plan (PCCP)?
A PCCP is an FDA-authorized framework that enables AI/ML device manufacturers to make certain pre-specified modifications to their devices without requiring new marketing submissions. The PCCP must describe the specific modifications, the methodology for implementing changes, and the assessment protocols to ensure ongoing safety and effectiveness. Final guidance was issued December 2024 with full implementation by August 2025.
What are the FDA submission pathways for AI/ML medical devices?
AI/ML medical devices can be cleared or approved through three main pathways: 510(k) Premarket Notification for devices substantially equivalent to predicate devices, De Novo Classification for novel low-to-moderate risk devices without predicates, and Premarket Approval (PMA) for high-risk Class III devices. The pathway depends on device classification, risk level, and whether a suitable predicate exists.
What is SaMD and how does FDA regulate it?
Software as a Medical Device (SaMD) is software intended to be used for medical purposes without being part of a hardware medical device. FDA regulates SaMD based on the International Medical Device Regulators Forum (IMDRF) risk categorization framework, considering both the significance of the information provided and the healthcare situation or condition. AI/ML-enabled SaMD faces additional requirements for algorithm transparency, performance monitoring, and change management.
What cybersecurity requirements apply to AI medical devices?
FDA requires AI/ML medical devices to demonstrate cryptographic protections, integrity controls, secure update strategies, and evidence of cybersecurity measures in premarket submissions. The 2023 Cybersecurity in Medical Devices guidance mandates threat modeling, security risk assessment, vulnerability management, and incident response capabilities. AI-specific concerns include model integrity, adversarial attack resistance, and secure model update mechanisms.
What is the Total Product Lifecycle (TPLC) approach for AI devices?
The Total Product Lifecycle (TPLC) approach is FDA’s framework for regulating AI/ML devices throughout their entire lifecycle—from development through post-market surveillance. It emphasizes continuous learning, real-world performance monitoring, and iterative improvement while maintaining safety and effectiveness. The TPLC approach supports the Good Machine Learning Practice (GMLP) principles and enables PCCPs for controlled algorithm updates.
How does FDA define locked vs. adaptive AI algorithms?
Locked algorithms produce the same output each time the same input is applied—they don’t change after deployment. Adaptive algorithms can learn and evolve over time based on new data. FDA requires different regulatory approaches: locked algorithms follow traditional device pathways, while adaptive algorithms require PCCPs or new submissions when modifications occur. The January 2025 draft lifecycle guidance proposes expectations for both types.
What real-world performance monitoring does FDA expect for AI devices?
FDA expects manufacturers to monitor AI/ML device performance in real-world clinical settings through post-market surveillance. This includes tracking algorithm accuracy, detecting performance drift, monitoring for unexpected outputs, and assessing performance across different patient populations. The September 2025 request for public comment signals FDA’s focus on standardized approaches to real-world evidence collection for AI devices.
When do AI device changes require new FDA submissions?
AI device changes require new FDA submissions when they: (1) affect safety or effectiveness beyond what was originally authorized, (2) fall outside an approved PCCP, (3) change the intended use, or (4) introduce new risks. Minor changes within an approved PCCP may proceed without new submissions, but manufacturers must document all modifications and maintain evidence that changes meet PCCP criteria.
References
- [1] U.S. Food and Drug Administration. "Predetermined Change Control Plans for Machine Learning-Enabled Device Software Functions: Guidance for Industry and FDA Staff." Final Guidance, December 2024. fda.gov
- [2] U.S. Food and Drug Administration. "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations." Draft Guidance, January 6, 2025. fda.gov
- [3] U.S. Food and Drug Administration. "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions." September 2023. fda.gov
- [4] U.S. Food and Drug Administration. "Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices." Updated 2025. fda.gov
- [5] U.S. Food and Drug Administration. "Premarket Submission for Device Software Functions." November 2021. fda.gov
- [6] U.S. Food and Drug Administration. "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device." Discussion Paper, 2019. fda.gov
- [7] International Medical Device Regulators Forum (IMDRF). "Software as a Medical Device (SaMD): Possible Framework for Risk Categorization and Corresponding Considerations." 2014. imdrf.org
- [8] U.S. Food and Drug Administration. "Request for Public Comment on AI Device Performance." September 30, 2025. fda.gov
- [9] FDA, Health Canada, UK MHRA. "Good Machine Learning Practice for Medical Device Development: Guiding Principles." October 2021. fda.gov
- [10] U.S. Food and Drug Administration. "Digital Health Center of Excellence." fda.gov