NIST's AI Risk Management Framework: Its Relationship with the Pharmaceuticals and Medical Devices Industry

Introduction

The integration of Artificial Intelligence (AI) and Machine Learning (ML) in pharmaceuticals and medical devices holds immense potential for transforming healthcare. NIST and the FDA have developed frameworks and guidelines to manage the associated risks and maximize benefits. Recognizing the convergence and complementary nature of these frameworks would go a long way toward a harmonized standard; this article explores how the two fit together.

NIST’s AI Risk Management Framework (AI RMF)

Overview

NIST’s AI RMF provides a structured and flexible approach to managing AI risks. Designed to adapt across various industries, including pharmaceuticals and medical devices, the framework emphasizes governance, measurement, and continuous risk management.

Key Components of the AI RMF

  1. Govern: Establishes foundational roles, responsibilities, and oversight mechanisms to ensure accountability and transparency. This includes policies and procedures within organizations for ethical AI deployment and compliance with relevant regulations.
  2. Map: Identifies the specific context in which the AI system operates, the stakeholders it may affect, and the scope of the application. This step is crucial for anticipating how AI might affect different aspects of healthcare delivery.
  3. Measure: Evaluates the AI system's performance metrics, such as accuracy, reliability, and bias, through continuous monitoring and validation to ensure the system operates as intended and negative impacts are mitigated.
  4. Manage: Implements mitigation strategies for identified risks and continuously updates practices based on new data and insights, including maintaining robust documentation and audit trails for regulatory compliance. A minimal sketch of how these four functions might be tracked in practice appears after this list.
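
To make the four functions concrete, here is a minimal Python sketch of an internal AI risk register that tags each entry with the RMF function it supports. The class names, fields, and sample entries are illustrative assumptions, not structures defined by NIST.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RMFFunction(Enum):
    """The four core functions of the NIST AI RMF."""
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"


@dataclass
class RiskEntry:
    """One entry in an AI risk register, tagged with the RMF function it supports."""
    function: RMFFunction
    description: str
    owner: str            # accountable role, per the Govern function
    mitigation: str = ""  # planned control, per the Manage function
    status: str = "open"
    last_reviewed: date = field(default_factory=date.today)


# Example: a register spanning all four functions for a hypothetical AI system.
register = [
    RiskEntry(RMFFunction.GOVERN, "No designated approver for model releases",
              owner="Governance board", mitigation="Assign release sign-off role"),
    RiskEntry(RMFFunction.MAP, "Intended-use context not documented for clinicians",
              owner="Product lead", mitigation="Publish intended-use statement"),
    RiskEntry(RMFFunction.MEASURE, "No bias metrics tracked across patient subgroups",
              owner="Data science lead", mitigation="Add subgroup performance checks"),
    RiskEntry(RMFFunction.MANAGE, "No retraining trigger defined for model drift",
              owner="MLOps lead", mitigation="Define drift threshold and review cadence"),
]

for entry in register:
    print(f"[{entry.function.value}] {entry.description} -> owner: {entry.owner}")
```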

Let us understand this with an example

Consider a pharmaceutical company developing an AI-driven platform, "Drug_AI," to expedite drug discovery. The company establishes a governance board comprising data scientists, ethicists, regulatory experts, and legal advisors to oversee the AI model's development. This board is responsible for ensuring Drug_AI aligns with ethical standards and complies with regulatory requirements set by the FDA. They implement policies to maintain transparency, accountability, and ethical AI deployment throughout the project lifecycle.

The governance board begins by mapping out the Drug_AI project. They identify the specific context in which Drug_AI will operate, including its integration into the drug discovery pipeline. The mapping process involves understanding the stakeholders affected by Drug_AI, such as researchers, clinicians, and patients. The board assesses potential impacts on these stakeholders, such as how Drug_AI could accelerate drug discovery, reduce costs, and improve patient outcomes. They also outline the scope of the AI application, detailing the data sources, algorithms, and expected outputs.

Once the mapping is complete, the pharmaceutical company focuses on measuring Drug_AI's performance. The team establishes metrics to evaluate the AI system’s accuracy, reliability, and potential biases. They set up a continuous monitoring system to track these metrics, ensuring Drug_AI operates as intended. For example, they regularly compare Drug_AI's predictions with actual outcomes from laboratory experiments to validate its accuracy. They also conduct bias audits to identify and mitigate any biases in the AI model, ensuring fair and equitable results.
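
As a sketch of the validation step just described, the snippet below compares hypothetical Drug_AI activity scores against confirmed laboratory outcomes and computes basic performance metrics. The scores, labels, and the 0.5 decision threshold are all invented for illustration.

```python
# Compare Drug_AI's (hypothetical) activity scores against lab-confirmed outcomes.
predictions = [0.91, 0.15, 0.78, 0.05, 0.66, 0.83, 0.22, 0.49]  # model scores
lab_results = [1, 0, 1, 0, 0, 1, 0, 1]                          # confirmed outcomes

THRESHOLD = 0.5  # score above which a compound is flagged as active

predicted_labels = [1 if p >= THRESHOLD else 0 for p in predictions]

true_pos = sum(1 for p, y in zip(predicted_labels, lab_results) if p == 1 and y == 1)
false_pos = sum(1 for p, y in zip(predicted_labels, lab_results) if p == 1 and y == 0)
false_neg = sum(1 for p, y in zip(predicted_labels, lab_results) if p == 0 and y == 1)

accuracy = sum(1 for p, y in zip(predicted_labels, lab_results) if p == y) / len(lab_results)
precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")

# In a continuous-monitoring setup, these metrics would be logged on every
# validation batch and compared against predefined acceptance criteria.
```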

As Drug_AI is deployed, the pharmaceutical company implements robust risk management strategies. They develop a comprehensive risk management plan to address any identified risks, such as data security, model drift, and regulatory compliance. The plan includes protocols for updating Drug_AI based on new data and insights. For instance, if new research data becomes available, the governance board ensures that Drug_AI's algorithms are retrained and validated before being integrated into the production environment. They also maintain detailed documentation and audit trails to support regulatory inspections and ensure compliance with FDA guidelines.
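
One way such a plan could detect model drift is a distribution-shift check on key input features. The sketch below uses the Population Stability Index (PSI), a common drift statistic; the bins, data, and the 0.2 alert threshold are illustrative assumptions rather than values from any FDA guidance.

```python
import math

def psi(expected_fracs, observed_fracs, eps=1e-6):
    """Population Stability Index between two binned distributions."""
    total = 0.0
    for e, o in zip(expected_fracs, observed_fracs):
        e, o = max(e, eps), max(o, eps)  # avoid log(0)
        total += (o - e) * math.log(o / e)
    return total

# Fraction of samples per bin at training time vs. in current production data.
training_dist = [0.25, 0.25, 0.25, 0.25]
production_dist = [0.10, 0.20, 0.30, 0.40]

score = psi(training_dist, production_dist)
print(f"PSI = {score:.3f}")

# Common rule of thumb: PSI > 0.2 indicates a significant shift.
if score > 0.2:
    print("Drift detected: trigger revalidation and retraining review.")
```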

Through this structured approach, the pharmaceutical company successfully develops and deploys Drug_AI. The governance board's oversight ensures that ethical standards and regulatory requirements are met. The mapping process provides a clear understanding of the project's context and impacts, while continuous measurement and risk management maintain Drug_AI's accuracy, reliability, and compliance. This integrated use of NIST's AI RMF components—Govern, Map, Measure, and Manage—demonstrates how a comprehensive risk management framework can effectively support the development and deployment of AI technologies in the pharmaceuticals industry.

FDA’s Guidelines on AI and ML

The FDA has developed several guidelines and frameworks to regulate AI/ML technologies in medical devices, ensuring these innovations improve patient care while maintaining safety and effectiveness.

  1. AI/ML-Based Software as a Medical Device (SaMD) Action Plan: This action plan outlines a total product lifecycle (TPLC) approach to regulate AI/ML-based SaMD, focusing on continuous monitoring and improvement of AI technologies.
  2. Good Machine Learning Practice (GMLP) Guiding Principles: These principles, developed through international collaboration, provide a foundation for developing safe and effective AI/ML medical devices. They emphasize data management, algorithm transparency, and the inclusion of diverse clinical datasets to mitigate biases. Example - A company developing an AI model for predicting drug interactions might adhere to GMLP principles by ensuring its training datasets are representative of diverse patient populations, which helps prevent biases and keeps the model's predictions reliable across demographic groups; a simple representativeness check of this kind is sketched after this list.
  3. Predetermined Change Control Plans for AI/ML-Enabled Medical Devices: This draft guidance proposes a framework for managing modifications in AI/ML-based medical devices, ensuring updates and changes maintain device safety and effectiveness. Example - A developer of an AI-based cardiac monitoring device might create a predetermined change control plan outlining how the device’s algorithm will be updated in response to new data. This ensures that each update undergoes rigorous testing and validation before being deployed.
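
As referenced in the GMLP item above, here is a minimal sketch of a dataset-representativeness check: it compares the subgroup composition of a hypothetical training set against target proportions for the intended patient population. The subgroup labels, reference proportions, and tolerance are assumptions for illustration.

```python
from collections import Counter

# One age-bracket label per training record (hypothetical composition).
training_records = ["18-40"] * 120 + ["41-65"] * 320 + ["65+"] * 60

# Target proportions for the intended patient population (assumed values).
reference_proportions = {"18-40": 0.30, "41-65": 0.45, "65+": 0.25}
TOLERANCE = 0.10  # flag subgroups off by more than 10 percentage points

counts = Counter(training_records)
n = len(training_records)

for group, target in reference_proportions.items():
    observed = counts.get(group, 0) / n
    status = "OUT OF TOLERANCE" if abs(observed - target) > TOLERANCE else "ok"
    print(f"{group}: observed={observed:.2f} target={target:.2f} [{status}]")
```

The same pattern extends to any subgroup attribute (sex, ethnicity, comorbidity) for which the intended population's composition is known.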

Complementing NIST and FDA Frameworks

The NIST AI RMF and the FDA's guidelines share common goals of promoting safe and effective AI technologies in life sciences and healthcare. Their complementary roles can be seen in several areas:

  1. Governance: NIST’s emphasis on defining roles and responsibilities complements the FDA’s focus on regulatory oversight and accountability, ensuring that AI initiatives are ethically and effectively managed.
  2. Measurement and Validation: Both frameworks stress the importance of robust data management, continuous performance evaluation, and bias mitigation. This ensures that AI systems are reliable and trustworthy.
  3. Risk Management: The adaptive risk management strategies in NIST’s framework align with the FDA’s TPLC approach, facilitating the safe evolution of AI systems over time.

Some food for thought - a chance to synergize

Harmonization of Standards

Greater international harmonization of AI/ML standards across regulatory bodies would facilitate smoother adoption and compliance while ensuring consistent safety and efficacy expectations across markets.

Example - In the European Union, the General Data Protection Regulation (GDPR) impacts AI applications by enforcing strict data privacy standards. Ensuring that AI systems comply with both GDPR and FDA requirements can be challenging for global companies. Harmonizing standards would streamline compliance processes and reduce the regulatory burden on multinational corporations. Collaborative efforts between NIST, the FDA, and international regulatory bodies could develop unified standards and guidelines for AI/ML technologies in life sciences and healthcare; for instance, a global consortium to align the AI RMF with European Medicines Agency (EMA) guidelines could be beneficial.

Real-World Performance Monitoring

Enhancing mechanisms for real-world performance monitoring can provide more accurate data for the continuous improvement of AI systems. This involves collecting and analyzing real-world evidence to assess AI performance in practical settings.

The FDA’s Sentinel Initiative collects real-world data to monitor the safety of FDA-regulated products. By leveraging similar real-world evidence systems for AI/ML medical devices, manufacturers can continuously monitor AI performance and make necessary adjustments to improve accuracy and effectiveness. For example, a medical technology company has built a system that uses real-time glucose monitoring data to improve diabetes management, demonstrating the value of real-world performance data.

Implementing advanced real-world data collection and analysis techniques to monitor AI performance will ensure timely updates and adaptations. Establishing a dedicated platform for AI/ML real-world performance monitoring could facilitate this process, providing a centralized repository for data and insights.
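
A minimal sketch of what such a monitoring platform's core loop might look like: a rolling window of prediction/outcome pairs with an alert when accuracy falls below a validated acceptance criterion. The window size, threshold, and simulated data are assumptions, not regulatory values.

```python
import random
from collections import deque


class RollingMonitor:
    """Tracks rolling accuracy over real-world prediction/outcome pairs."""

    def __init__(self, window: int = 200, min_accuracy: float = 0.90):
        self.results = deque(maxlen=window)  # True where prediction matched outcome
        self.min_accuracy = min_accuracy
        self.alerted = False

    def record(self, prediction: int, confirmed_outcome: int) -> None:
        """Log one real-world case; alert once if rolling accuracy degrades."""
        self.results.append(prediction == confirmed_outcome)
        if len(self.results) == self.results.maxlen and not self.alerted:
            accuracy = sum(self.results) / len(self.results)
            if accuracy < self.min_accuracy:
                self.alerted = True  # in practice: open a quality event, not a print
                print(f"ALERT: rolling accuracy {accuracy:.2%} below "
                      f"{self.min_accuracy:.0%} acceptance criterion")


# Simulated feed from a model that is roughly 88% accurate in the field.
random.seed(0)
monitor = RollingMonitor()
for _ in range(500):
    truth = random.randint(0, 1)
    prediction = truth if random.random() < 0.88 else 1 - truth
    monitor.record(prediction, truth)
```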

Transparency and Communication

Improving transparency in AI algorithm decision-making processes and clearer communication with end-users can increase trust and adoption rates. Users should understand how AI systems make decisions and the potential limitations of these technologies.

A software major's AI ethics board, established to oversee its AI developments, faced challenges due to a lack of transparency and clear communication with stakeholders, leading to public scrutiny and its eventual disbandment. In contrast, many other large companies provide detailed documentation and explanations of their AI models, fostering greater trust and understanding among users.

The key here is developing standardized reporting and communication protocols that explain AI system functionalities, limitations, and updates to healthcare professionals and patients. Creating user-friendly interfaces and educational materials can help demystify AI technologies, ensuring users are well informed about the AI's capabilities and limitations. One possible form of such a report is sketched below.
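
One concrete form such standardized reporting could take is a "model card" that travels with each release. The sketch below shows a minimal version; every field value is a hypothetical placeholder, and the field set itself is an assumption, not a prescribed format.

```python
import json

model_card = {
    "model_name": "ExampleTriageModel",  # hypothetical system
    "version": "2.3.1",
    "intended_use": "Prioritize radiology worklists; not for diagnosis.",
    "training_data": "De-identified imaging studies, 2018-2023, 4 sites.",
    "performance": {"sensitivity": 0.94, "specificity": 0.89},
    "known_limitations": [
        "Not validated for pediatric patients.",
        "Performance degrades on portable X-ray images.",
    ],
    "last_updated": "2024-05-01",
}

# Rendered as JSON here; the same content could feed a patient-facing summary.
print(json.dumps(model_card, indent=2))
```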

Bias Mitigation in AI Algorithms

Mitigating bias in AI algorithms is crucial for ensuring fair and equitable healthcare outcomes. AI systems trained on biased data can perpetuate existing disparities in life sciences and healthcare.

A 2019 study found that an algorithm used to predict which patients would benefit from extra medical care was significantly less likely to recommend Black patients than equally sick patients of other groups, because it had been trained on healthcare spending data that reflected existing disparities in access to care. Addressing such biases is essential for equitable healthcare.

Developing comprehensive guidelines for bias detection and mitigation in AI systems is a must. This includes ensuring diverse and representative datasets, implementing fairness audits, and continuously monitoring AI systems for potential biases.
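
As a sketch of such a fairness audit, the snippet below compares the rate at which a hypothetical care-recommendation model flags patients in two groups and applies the "four-fifths" disparity ratio as an illustrative screening rule. The data and the 0.8 threshold are assumptions, not regulatory requirements.

```python
recommendations = {
    # group label -> list of model outputs (1 = extra care recommended)
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
}

# Recommendation rate per group.
rates = {g: sum(v) / len(v) for g, v in recommendations.items()}
favored = max(rates, key=rates.get)
disadvantaged = min(rates, key=rates.get)
ratio = rates[disadvantaged] / rates[favored]

print(f"recommendation rates: {rates}")
print(f"disparity ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: investigate training data and features.")
```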

In conclusion

NIST’s AI Risk Management Framework and the FDA’s guidelines collectively provide a robust foundation for managing the complexities associated with AI/ML technologies in the pharmaceuticals and medical devices industry. By aligning their efforts and addressing identified gaps, these frameworks can ensure that AI innovations are both safe and beneficial, ultimately enhancing patient care and advancing healthcare outcomes.

References

  1. National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0)
  2. U.S. Food and Drug Administration. (2021). AI/ML-Based Software as a Medical Device Action Plan
  3. U.S. Food and Drug Administration. (2021). Good Machine Learning Practice for Medical Device Development: Guiding Principles
  4. U.S. Food and Drug Administration. (2023). Draft Guidance on Predetermined Change Control Plans for AI/ML-Enabled Medical Devices
  5. U.S. Food and Drug Administration. (2024). Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together


Disclaimer: This article is the author's point of view on the subject, based on his understanding and interpretation of the regulations and their application. Do note that AI was leveraged for the article's first draft to build an initial story covering the points provided by the author. The author has since reviewed, updated, and appended to it to ensure accuracy and completeness to the best of his ability. Please review it for your intended purpose before use. It is free for anyone to use, provided the author is credited for the work.
