Client Alert

April 9, 2019

FDA Proposes Regulatory Framework for Artificial Intelligence/Machine Learning Software as a Medical Device

On April 2, 2019, the U.S. Food and Drug Administration (“FDA” or “Agency”) proposed a new regulatory framework to address the development and marketing of artificial intelligence and machine learning-based software as a medical device. FDA’s “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD):  Discussion Paper and Request for Feedback” (“Discussion Paper”) attempts to address the challenging and unique features of AI/ML-based SaMD products, including the capability of such products to continuously learn (i.e., “train”) from real-world feedback, adapt, and modify output when used for diagnostic or therapeutic purposes. In conjunction with the Discussion Paper, FDA also released a new webpage specific to AI/ML, and Commissioner Scott Gottlieb—in one of his last actions as Commissioner—released a statement that outlined the Agency’s confidence in the ability of these technologies to enhance health care.  At the same time, however, Dr. Gottlieb expressed the need to ensure the safety and effectiveness of these devices.

FDA is requesting public feedback on specific questions, which are listed in the Appendix of this Client Alert. Comments must be submitted to the Agency by June 3, 2019.


The release of this proposed regulatory framework signals FDA’s effort to catch up with the explosive development of AI/ML software technologies for potential use in real-world diagnostic and therapeutic applications.  The Discussion Paper clarifies that for AI/ML technologies, FDA will continue to consider software to have a medical purpose if it is intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions, and the Agency will rely on the risk-based framework it applies to other SaMD products to determine whether the initial marketing of the software requires a 510(k) premarket notification, a De Novo application, or a premarket approval application (“PMA”).  Notably, FDA recently authorized two AI/ML-based devices under the De Novo pathway:  a device for detecting diabetic retinopathy and a device for alerting providers of a potential stroke in patients.  FDA also recently issued a Warning Letter to a firm regarding promotional claims, made on its website and in a press release, related to AI/ML-based functionality.  The device was cleared under the 510(k) pathway for use in the review, analysis, and interchange of computed tomography (“CT”) chest images by a medical professional.  In the Warning Letter, however, FDA alleged that the company was promoting the software beyond its cleared indications—as having functionality based on machine learning algorithms with the capability to automatically perform initial clinical review and interpretation of the clinical images and to relieve physicians of the task of reviewing the images.

Proposed Framework for AI/ML Modifications

The Discussion Paper distinguishes AI/ML-based SaMD devices with “locked” algorithms at the time of approval or clearance from similar products with algorithms that are not “locked” at the time of approval or clearance because the device is continuously learning, resulting in adaptive changes in the algorithms and modifications of their output.  To attempt to address AI/ML-based SaMD devices that are “not locked,” the Discussion Paper proposes a potential framework that FDA believes will be capable of addressing these adaptive software changes.  This proposed framework is based on the following:  (1) the International Medical Device Regulators Forum (“IMDRF”), “Software as a Medical Device (SAMD):  Key Definitions” (Dec. 9, 2013); (2) risk management principles in FDA guidance entitled, “Deciding When to Submit a 510(k) for a Software Change to an Existing Device” (Oct. 25, 2017); (3) FDA’s benefit-risk framework; (4) the organization-based total product life cycle (“TPLC”) approach outlined in FDA’s Digital Health Software Precertification (“Pre-Cert”) Program; and (5) current approval pathways, including the 510(k), De Novo, and PMA pathways.

IMDRF’s SaMD Risk-Categorization Framework

As noted, FDA has proposed to use the IMDRF’s risk-categorization framework.  Under this framework, the first factor is the significance of information provided by the SaMD to the healthcare decision, based on the intended use of the information.  Intended uses, for example, may include diagnosing or treating a disease or condition or otherwise informing clinical management.  The second factor focuses on the healthcare situation or condition, which identifies the intended user, the population, and the disease or condition addressed by the SaMD.  Healthcare situations may include, for example, critical, serious, or non-serious healthcare conditions.  The Discussion Paper presents different combinations of the two factors and comments on the resulting levels of risk.  For example, the highest risk is present when the healthcare provider will use information obtained from the SaMD to diagnose or treat patients in healthcare situations that are critical.

FDA emphasizes in the Discussion Paper that, apart from the IMDRF clinical risk framework, there is also a spectrum of risk in AI/ML-based medical devices that ranges from those that employ “locked” algorithms to those with algorithms that continuously adapt based on a defined learning process.  At this time, FDA proposes that the following scheme for assessment of modifications can be applied across the “locked” to “not locked” risk spectrum.

Proposed Scheme for Consideration of AI/ML-Based SaMD Modifications

The Discussion Paper focuses on possible modifications to an AI/ML-based SaMD device and whether a modification to such a device may require pre-market review based on principles in FDA guidance entitled “Deciding When to Submit a 510(k) for a Software Change to an Existing Device.”  FDA anticipates that modifications to these devices will generally fall into three categories that are not mutually exclusive:  (1) clinical and analytical performance; (2) inputs used by the algorithm and their clinical association to the SaMD product’s output; and (3) intended use, which is defined in terms of the importance of the information provided by the SaMD and its relation to the healthcare condition or situation.  FDA describes the potential impact of modifications to SaMD products on users as follows:

  • Modifications related to performance, with no change to the intended use or new input type. Modifications that consist of improvements to analytical and clinical performance can result from a variety of changes, such as changes in the structure of the AI/ML technology or “re-training” of the software with new data sets from the intended use population.  This type of modification typically involves manufacturers updating users on the performance of the SaMD product, without changing any of the explicit use claims about the product (e.g., increased sensitivity of the SaMD to detect breast lesions suspicious for cancer in digital mammograms).
  • Modifications related to inputs, with no change to the intended use. Another type of modification consists of changes to the inputs used by the AI/ML algorithm that do not change the product’s use claims (e.g., SaMD modification to support compatibility with CT scanners from additional manufacturers).
  • Modifications related to the SaMD’s intended use. This type of modification includes changes in the significance of information provided by the SaMD (e.g., from aiding in diagnosis to providing a definitive diagnosis).  It also includes changes in the healthcare situation or condition that is explicitly claimed by the manufacturer, such as an expanded intended patient population (e.g., inclusion of a pediatric population when the device was initially intended for adults).

Comments on these proposed categories of modifications will be critical given that the modification categories recognized by FDA will guide the Agency’s regulatory framework and conditions for assessing and allowing modifications to such devices.  FDA’s traditional regulatory frameworks have not previously contemplated the potential benefits and unintended adverse consequences of medical software that continuously adapts.

Total Product Lifecycle (“TPLC”) Regulatory Approach

FDA recognizes that these technologies are autonomous and adaptive and therefore require a new TPLC regulatory approach that allows for rapid and continuous changes in product improvement, while providing effective safeguards.  The Discussion Paper clarifies that FDA proposes to use the approach described in the Software Pre-Cert Program for AI/ML-based SaMD products, through which FDA will assess the culture of quality and organizational excellence of a company and then evaluate the effectiveness of its software development, testing, and performance monitoring processes.  This TPLC approach would be applied only to those AI/ML-based SaMD products that require a premarket submission (i.e., 510(k), De Novo, or PMA).

FDA stated that the Agency’s application of the TPLC approach to an AI/ML-based SaMD product would be based on four key principles:

  1. Good Machine Learning Practices. Every medical device manufacturer is expected to have an established quality system and follow good machine learning practices (“GMLPs”) to support the development, delivery, and maintenance of high-quality products throughout the lifecycle.  Devices that rely on AI/ML are expected to demonstrate analytical and clinical validation, as described in FDA guidance.
  2. Pre-Specifications and Algorithm Change Protocol. The framework provides manufacturers the option to submit a “predetermined change control plan” during the initial premarket review that includes SaMD Pre-Specifications (“SPS”), which are the types of anticipated modifications the manufacturer plans to achieve when the SaMD is in use based on the retraining of the algorithms and model update strategy.  The control plan also includes the Algorithm Change Protocol (“ACP”), which is the associated methodology that is used to implement those changes in a controlled manner, such that the modification achieves its goals and the device remains safe and effective after the modification.
  3. Algorithm Adaptation. Depending on the type of learning, adaptation, or optimization modification to the AI/ML-based SaMD product, FDA would require either (1) a new premarket submission, or (2) records of the modification in the change history and other appropriate documentation in the file, for reference.  This approach would be like that outlined in FDA’s guidance entitled “Deciding When to Submit a 510(k) for a Software Change to an Existing Device.”
  4. Transparency and Performance Monitoring. FDA also proposes that manufacturers implement appropriate mechanisms that provide transparency about the function and modifications of medical devices.  The Agency also anticipates the use of real-world data collection and monitoring to mitigate the risk involved with AI/ML-based SaMD modifications.  Transparency and monitoring would entail updates or reports to FDA, other device companies or collaborators of the manufacturer, and clinicians, patients, users, and other members of the general public.  Additionally, real-world performance monitoring may be achieved and/or reported using a variety of mechanisms that are currently employed or under pilot testing at FDA, such as 510(k) add-to-files, PMA annual reports, or the Case for Quality or Pre-Cert Program pilots, including the latter pilot’s real-world performance analytics data collection, analysis, and/or submission components.

Key Takeaways

FDA’s release of the Discussion Paper is a major step in developing the Agency’s proposed regulatory strategy for AI/ML-based SaMD products.  The proposed framework attempts to address the novel postmarket challenges of AI/ML-based software algorithms that are continually learning, adapting, and modifying output that is used for diagnosis or delivery of therapy for patients.  Comments on the proposed framework—from manufacturers and healthcare facilities in particular—will be critical to ensure that FDA strikes the right balance in enabling rapid access to innovation with appropriate safeguards to provide reasonable assurance of safety and effectiveness.  King & Spalding would be happy to assist you in drafting and submitting comments to FDA regarding this Discussion Paper.  We will continue to monitor the trends in this area and keep you updated.



Appendix:  FDA Discussion Paper Questions and Feedback Request

  1. Questions/Feedback on the Types of AI/ML-based SaMD Product Modifications:
  • Do these categories of AI/ML-based SaMD modifications align with the modifications that would typically be encountered in software development that could require premarket submission?
  • What additional categories, if any, of AI/ML-based SaMD modifications should be considered in this proposed approach?
  • Would the proposed framework for addressing modifications and modification types assist the development of AI/ML software?
  2. Questions/Feedback on GMLPs:
  • What additional considerations exist for GMLPs?
  • How can FDA support development of GMLPs?
  • How do manufacturers and software developers incorporate GMLPs in their organizations?
  3. Questions/Feedback on SPS and ACP:
  • What are the appropriate elements for the SPS?
  • What are the appropriate elements for the ACP to support the SPS?
  • What potential formats do you suggest for appropriately describing a SPS and an ACP in the premarket review submission or application?
  4. Questions/Feedback on Premarket Review:
  • How should FDA handle changes outside of the “agreed upon SPS and ACP”?
  • What additional mechanisms could achieve a “focused review” of an SPS and ACP?
  • What content should be included in a “focused review”?
  5. Questions/Feedback on Transparency and Real-World Performance Monitoring:
  • In what ways can a manufacturer demonstrate transparency about AI/ML-based SaMD changes, such as algorithm updates, performance improvements, or labeling changes?
  • What role can real-world evidence play in supporting transparency for AI/ML-based SaMD products?
  • What additional mechanisms exist for real-world performance monitoring of AI/ML-based SaMD products?
  • What additional mechanisms might be needed for real-world performance monitoring of AI/ML-based SaMD products?
  6. Questions/Feedback on the ACP:
  • Are there additional components for inclusion in the ACP that should be specified?
  • What additional level of detail would you add for the described components of an ACP?