FDA action plan puts focus on AI-enabled software as medical device

The U.S. Food and Drug Administration this week published its first action plan for how it intends to spur development and oversight of safe, patient-centric artificial intelligence and machine learning-based software as a medical device.

WHY IT MATTERS
The AI/ML-Based Software as a Medical Device Action Plan is a project of the Digital Health Center of Excellence at FDA’s Center for Devices and Radiological Health, which launched this past September.

The action plan outlines five next steps FDA intends to take as AI/ML-based SaMD continues to evolve:

  • Continuing to develop the proposed regulatory framework, including issuing draft guidance on a predetermined change control plan for software’s learning over time;
  • Supporting “good machine learning practices” for the evaluation of ML algorithms;
  • Enabling a more transparent, patient-centered approach;
  • Developing new methods to evaluate and improve machine learning algorithms; and
  • Creating new pilots to enable real-world performance monitoring.

As for a more tailored regulatory approach, FDA says it will update the proposed framework for AI/ML-based SaMD, “including through issuance of Draft Guidance on the Predetermined Change Control Plan.” The guidance will cover elements needed to support the safety and efficacy of SaMD algorithms, officials said, noting that the “goal is to publish this draft guidance in 2021.”

When it comes to FDA’s concept of Good Machine Learning Practice, or GMLP, it plans to focus on “AI/ML best practices (e.g., data management, feature extraction, training, interpretability, evaluation and documentation) that are akin to good software engineering practices or quality system practices. Development and adoption of these practices is important not only for guiding the industry and product development, but also for facilitating oversight of these complex products, through manufacturer’s adherence to well established best practices and/or standards.”
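
The agency’s list stays at the level of principles, but one of those practices is easy to make concrete: “data management” in clinical ML typically includes splitting datasets at the patient level, so that no patient’s records leak between training and test sets and inflate reported performance. The sketch below is purely illustrative; the function and field names are assumptions for illustration, not anything FDA prescribes.

```python
# Illustrative only: one "good machine learning practice" is managing data
# so that no patient appears in both training and test sets, which prevents
# leakage from inflating reported performance. Names here are assumptions.
import random

def patient_level_split(records, test_fraction=0.2, seed=42):
    """Split records by patient ID rather than by individual record."""
    patient_ids = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patient_ids)
    n_test = max(1, int(len(patient_ids) * test_fraction))
    test_ids = set(patient_ids[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test

# Three records (e.g., scans) for each of ten patients.
records = [{"patient_id": pid, "scan": i} for pid in range(10) for i in range(3)]
train, test = patient_level_split(records)

# No patient may contribute data to both sets.
assert not {r["patient_id"] for r in train} & {r["patient_id"] for r in test}
print(len(train), "training records,", len(test), "test records")  # 24, 6
```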

FDA says the next step toward a more transparent and patient-centered approach will be to “hold a public workshop on how device labeling supports transparency to users and enhances trust in AI/ML-based devices. The Agency acknowledges that AI/ML-based devices have unique considerations that necessitate a proactive patient-centered approach to their development and utilization that takes into account issues including usability, equity, trust, and accountability.” The agency notes that “promoting transparency is a key aspect of a patient-centered approach, and we believe this is especially important for AI/ML-based medical devices, which may learn and change over time, and which may incorporate algorithms exhibiting a degree of opacity.”

Toward new methods to address algorithmic bias, FDA says it will “support regulatory science efforts to develop methodology for the evaluation and improvement of machine learning algorithms, including for the identification and elimination of bias, and for the evaluation and promotion of algorithm robustness. Bias and generalizability is not an issue exclusive to AI/ML-based devices. Given the opacity of the functioning of many AI/ML algorithms, as well as the outsized role we expect these devices to play in health care, it is especially important to carefully consider these issues for AI/ML-based products.”
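
The action plan does not prescribe particular evaluation methods, but a common starting point for identifying bias in practice is to compare a model’s performance across patient subgroups. A minimal sketch follows, with the sensitivity metric and the 0.1 gap threshold chosen purely for illustration, not drawn from any regulatory standard.

```python
# Illustrative only: compare a classifier's sensitivity (recall) across
# patient subgroups to surface potential bias. The metric choice and the
# 0.1 gap threshold are assumptions, not regulatory standards.
from collections import defaultdict

def sensitivity_by_group(y_true, y_pred, groups):
    """Sensitivity computed separately for each subgroup label."""
    true_pos = defaultdict(int)
    positives = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            positives[group] += 1
            if pred == 1:
                true_pos[group] += 1
    return {g: true_pos[g] / positives[g] for g in positives}

y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = sensitivity_by_group(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)   # {'A': 0.666..., 'B': 0.5}
if gap > 0.1:  # what gap is acceptable is a policy question, not a code one
    print(f"Subgroup sensitivity gap of {gap:.2f} warrants review")
```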

And with regard to real-world performance, or RWP, the agency says that “gathering performance data on the real-world use of the SaMD may allow manufacturers to understand how their products are being used, identify opportunities for improvements, and respond proactively to safety or usability concerns. Real-world data collection and monitoring is an important mechanism that manufacturers can leverage to mitigate the risk involved with AI/ML-based SaMD modifications, in support of the benefit-risk profile in the assessment of a particular marketing submission.”
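
The plan likewise leaves the mechanics of real-world monitoring to manufacturers. One minimal pattern is to log each prediction against its eventual outcome and flag when rolling performance drifts below the level claimed at clearance. A sketch, with the window size and baseline as placeholder assumptions:

```python
# Illustrative only: track rolling accuracy over recent real-world cases
# and flag drift below a baseline. The 500-case window and 0.90 baseline
# are placeholder assumptions, not values from the action plan.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline=0.90, window=500):
        self.baseline = baseline             # performance claimed at clearance
        self.results = deque(maxlen=window)  # rolling record of hits/misses

    def record(self, prediction, ground_truth):
        self.results.append(prediction == ground_truth)

    def rolling_accuracy(self):
        return sum(self.results) / len(self.results) if self.results else None

    def drifted(self):
        accuracy = self.rolling_accuracy()
        return accuracy is not None and accuracy < self.baseline

monitor = PerformanceMonitor()
monitor.record(prediction=1, ground_truth=1)
monitor.record(prediction=0, ground_truth=1)
if monitor.drifted():
    print("Rolling accuracy below baseline; investigate before the next update")
```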

Read the entire action plan here. FDA says it wants more feedback, and will continue to work with various stakeholders and collaborate across the agency to craft a more coordinated approach.

THE LARGER TREND
The AI/ML-Based Software as a Medical Device Action Plan is a response to stakeholder feedback received from the April 2019 discussion paper, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device.

“This action plan outlines the FDA’s next steps towards furthering oversight for AI/ML-based SaMD,” said Bakul Patel, director of the CDRH Digital Health Center of Excellence.

He said it’s meant to describe “a holistic approach based on total product lifecycle oversight to further the enormous potential that these technologies have to improve patient care while delivering safe and effective software functionality that improves the quality of care that patients receive.

“To stay current and address patient safety and improve access to these promising technologies, we anticipate that this action plan will continue to evolve over time,” he added.

Speaking this past October at a meeting of FDA’s Patient Engagement Advisory Committee, Patel said AI is at a pivotal moment where “software can take inputs from many, many, many sources and generate those interventions for diagnosing, treating.

“As we start getting into the world of machine learning and using data to program software and program technology, we are seeing this advent, and the fluidity and the availability of the data, becoming a big driver,” he said. “And that comes with opportunities – and that comes with some challenges as well.”

And, as we’ve shown, it’s important for hospitals and health systems to work closely, where possible, with vendors and manufacturers to help ensure safe and effective machine learning algorithms.

ON THE RECORD
Bradley Merrill Thompson of the law firm Epstein Becker & Green – where he counsels on medical devices, FDA regulatory issues and more – offered his thoughts on the action plan to HITN’s sister publication, MobiHealthNews. He said the new report is “good and bad news.”

The good news? “Many people in industry support the general approach – with one exception I’ll describe,” he said.

The bad news? “This appears to be a report in lieu of progress,” said Thompson. “What’s depressing is that the concept paper was published in April 2019, and here we are coming up on two years, and for the most part, on the critical elements like a guidance on the Predetermined Change Control Plan, they’re merely talking about that guidance in the future tense with the goal of publishing it in 2021.”

Thompson notes that FDA doesn’t even specify “fiscal year 2021” – which he suspects means the wait might be more toward the end of the calendar year. “We were really hoping for quicker action as we think that guidance is critically important to the further development of artificial intelligence in healthcare.”

As for industry reaction? Many stakeholders support the first four steps outlined in the action plan, he said.

“Developing a guidance document to implement the Predetermined Change Control Plan is extremely important, although I know that many AI developers are currently informally trying to work with FDA on a case-by-case basis to develop such plans,” he explained.

“The GMLP is likewise a very affirmative step, and I do appreciate the fact that they want to work in a consensus fashion with lots of standard-setting bodies,” said Thompson. “Having transparency on the required transparency is also extremely important. A workshop would be a very constructive next step, as the agency proposes. Transparency is a technically complex topic, but also a practically challenging idea given the possible audiences for the information.”

Likewise, “the regulatory science initiative is very important, as we need better and more specialized tools to identify bias and performance in AI used in healthcare. We also need to be able to identify the appropriate standards, such as how much bias is acceptable. There will always be some bias.”

Thompson said the biggest substantive disagreement from developers and device makers would be over the plan’s discussion of real-world performance.

“On the one hand, I think many in industry support the idea that companies need to develop systems to monitor the performance of their algorithms in the marketplace,” he said. “Performance changes, by the very nature of artificial intelligence, and companies must develop robust systems to monitor those changes and ensure that their products remain safe and effective.

“The point of departure is that we sense that FDA wants to be in the middle of that, getting frequent updates of data so that the agency can on a more real-time basis monitor that performance,” he added.

For most on the industry side, that’s “completely unacceptable,” said Thompson. “And the reason FDA proposes to proceed on a voluntary basis is that they have no statutory authority to require this.” That’s why he expects significant “disagreement with FDA over what data need to be shared and when during the post-market phase of AI-based product lifecycles.”

MobiHealthNews Associate Editor Dave Muoio contributed to this story.

Twitter: @MikeMiliardHITN
Email the writer: [email protected]

Healthcare IT News is a HIMSS publication.

Posted on January 14, 2021