Current Regulatory Thinking on AI in GxP-Regulated Ecosystems
Ankur Mitra
Quality, Regulations, Technology - Connecting the Dots - And a Lot of Questions
While Artificial Intelligence (AI) has the potential to revolutionise the Life Sciences and Healthcare (LSHC) industry, its adoption has been slower than expected, primarily because of a lack of clarity on how to handle AI in GxP-regulated ecosystems. Understanding the global regulatory expectations that drive this adoption has therefore become crucial. Moreover, compliance with these regulations and guidelines helps assure patient safety, data quality, and data integrity. In this article, I summarise the regulatory frameworks that apply to AI systems used in GxP-regulated industries, drawing on the EU AI Act and guidance from the FDA, EMA, MHRA, and NIST. I have also included some points of discussion on challenges and gaps, along with suggestions for controlling them.
EU AI Act
The EU AI Act, politically agreed at the end of 2023 and formally adopted in 2024, is the first comprehensive legislative framework to regulate artificial intelligence technologies. It uses a risk-based classification system, categorising AI applications into unacceptable-risk, high-risk, limited-risk, and minimal-risk categories based on their potential to impact human safety, rights, and dignity. The goal is to ensure AI systems operate safely, transparently, and ethically.
The Act emphasises compliance with data governance, transparency, human oversight, and accountability principles, ensuring AI technologies respect human dignity, fundamental rights, and safety while maintaining trust. LSHC applications (such as medical diagnostics, robotic surgery, and clinical decision support systems) are typically classified as high-risk and require stringent regulatory oversight. Key expectations from the EU AI Act include the following:
Data integrity: AI systems must use high-quality, unbiased data and maintain clear data lineage to ensure safety in decision-making.
Transparency: These systems must be explainable to humans and regulators.
Continuous monitoring: High-risk systems should be validated and monitored for safety on an ongoing basis, with an increased emphasis on the user and stakeholder feedback loop.
FDA: Good Machine Learning Practice and AI/ML Action Plan
The US FDA has established several guidelines for AI in GxP-regulated environments - Good Machine Learning Practice and the AI/ML Software as a Medical Device (SaMD) Action Plan are two of the key ones. These guidelines cover:
Good Machine Learning Practice (GMLP): GMLP emphasises transparency, safety, and robust data management.
Predetermined change control: Anticipated modifications to an AI system should be specified in advance, along with how the system will be revalidated whenever those updates or changes are made (a simplified sketch of such a change-control check appears at the end of this section).
Real-world evidence: Continuous monitoring using real-world data is required to assess an AI model's ongoing performance and safety.
AI Lifecycle Management Framework: The framework covers the different phases of the development lifecycle, such as design, data collection, validation, and real-world monitoring.
The FDA also stresses transparency, particularly in terms of how AI systems reach their decisions, and the need for human oversight, ensuring that humans can override any AI decision when necessary.
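To make the predetermined change control idea more concrete, below is a minimal, hypothetical sketch in Python. It assumes a pre-approved change control plan that whitelists certain modification types and requires an updated model to meet acceptance criteria locked on a fixed test set before release; the change types, thresholds, and field names are my own illustrative assumptions, not values prescribed by the FDA.

```python
# Hypothetical sketch of a predetermined change control check.
# The change types, thresholds, and acceptance criteria below are
# illustrative assumptions, not FDA-mandated values.

from dataclasses import dataclass, field

# Modification types pre-approved in the (assumed) change control plan.
APPROVED_CHANGE_TYPES = {"retrain_on_new_data", "decision_threshold_tuning"}

# Acceptance criteria locked when the plan was approved, evaluated on a fixed test set.
LOCKED_ACCEPTANCE_CRITERIA = {"min_sensitivity": 0.90, "min_specificity": 0.85}


@dataclass
class ProposedChange:
    change_type: str                          # e.g. "retrain_on_new_data"
    description: str
    metrics_on_locked_test_set: dict = field(default_factory=dict)


def evaluate_change(change: ProposedChange) -> str:
    """Decide whether a proposed model update stays within the
    pre-approved change control plan or needs a fresh regulatory review."""
    if change.change_type not in APPROVED_CHANGE_TYPES:
        return "OUT OF SCOPE: new submission / regulatory review required"

    m = change.metrics_on_locked_test_set
    meets_criteria = (
        m.get("sensitivity", 0.0) >= LOCKED_ACCEPTANCE_CRITERIA["min_sensitivity"]
        and m.get("specificity", 0.0) >= LOCKED_ACCEPTANCE_CRITERIA["min_specificity"]
    )
    if meets_criteria:
        return "APPROVED UNDER PLAN: release with documented revalidation evidence"
    return "REJECTED: update fails the locked acceptance criteria"


if __name__ == "__main__":
    change = ProposedChange(
        change_type="retrain_on_new_data",
        description="Quarterly retraining on newly curated site data",
        metrics_on_locked_test_set={"sensitivity": 0.93, "specificity": 0.88},
    )
    print(evaluate_change(change))
```

The point of the sketch is that the acceptable changes and the evidence required to release them are fixed in advance, so every update is revalidated against the same pre-agreed bar.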
EMA: Real-World Data and Post-Market Surveillance
The European Medicines Agency (EMA) provides a framework for AI in GxP environments, particularly in drug safety and post-market surveillance. The EMA emphasises the importance of real-world data in evaluating the long-term safety and effectiveness of AI systems. AI systems must undergo:
Continuous validation: AI models should be tested and revalidated as they interact with new data.
Bias mitigation: AI systems should actively avoid biased decision-making by ensuring the training data is representative of the real-world population.
Real-world performance evaluation and cybersecurity: These are critical in ensuring that AI systems remain reliable post-deployment.
MHRA: AI as a Medical Device (AIaMD)
The Medicines and Healthcare products Regulatory Agency (MHRA) has developed a comprehensive framework for AI as a Medical Device (AIaMD). This strategy is designed to:
Ensure a balance between risk-based regulation and innovation.
Ensure explainability and human oversight in all AI-driven medical decisions.
Ensure post-market surveillance obligations focus on continuous risk assessment and updates based on real-world data.
Ensure AI systems that support post-market surveillance are subject to continuous monitoring and revalidation.
NIST AI Risk Management Framework (AI RMF)
The National Institute of Standards and Technology (NIST) publishes the AI Risk Management Framework (AI RMF), designed to help organisations manage AI-associated risks. Although voluntary, it is widely used and should be clearly understood. The AI RMF focuses on:
Transparency and explainability: Ensure AI systems produce outputs that humans can understand.
Bias detection: AI models must be protected against biases that could impact their decision-making capabilities.
Risk-based approach: Continuous monitoring is required for AI models to ensure compliance and reliability.
Cybersecurity: Adversarial robustness and resilience against cyberattacks are required across the AI lifecycle, especially for models handling healthcare data.
Convergence of Regulatory Expectations
Despite varying regulatory environments, the frameworks mirror each other in more than one way. The common expectations across jurisdictions include:
Data integrity and governance: AI models must process and output high-quality, validated data that adhere to GxP requirements. All agencies emphasise the importance of data provenance, bias mitigation, and audit trails (a minimal audit-trail sketch follows this list).
Transparency and explainability: AI systems must be interpretable by humans and provide clear explanations for decisions, especially in high-risk medical applications.
Human oversight and accountability: Human intervention should be possible for AI systems, ensuring safety and ethical use. This is also called a human-in-the-loop approach to decision-making.
Continuous monitoring and revalidation: AI systems must be regularly validated to ensure compliance, especially after updates or changes. Real-world data plays a crucial role in ongoing validation efforts.
Cross-border data flow regulations: Security and privacy laws mandate strict controls over transferring, storing, and processing personal data across jurisdictions, and these controls should be factored into data governance and AI model deployment.
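As an illustration of the audit trail and data provenance expectation above, here is a minimal, hypothetical sketch of a tamper-evident record chain for dataset and model lifecycle events. The event fields and the hash-chaining scheme are my own assumptions used for illustration, not a format required by any of the agencies discussed here.

```python
# Hypothetical sketch of a tamper-evident audit trail for AI data and
# model lifecycle events. Field names and the hashing scheme are
# illustrative assumptions, not a regulatory format.

import hashlib
import json
from datetime import datetime, timezone

audit_trail = []  # in practice this would live in durable, access-controlled storage


def record_event(actor: str, action: str, details: dict) -> dict:
    """Append an event whose hash chains to the previous entry,
    so later alteration or deletion of records becomes detectable."""
    prev_hash = audit_trail[-1]["entry_hash"] if audit_trail else "GENESIS"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "details": details,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_trail.append(entry)
    return entry


def verify_trail() -> bool:
    """Recompute the hash chain to confirm no entry was modified or removed."""
    prev_hash = "GENESIS"
    for entry in audit_trail:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev_hash or recomputed != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True


record_event("data_engineer_01", "dataset_ingested", {"dataset_id": "DS-0042", "rows": 120000})
record_event("ml_engineer_02", "model_trained", {"model_id": "M-7", "dataset_id": "DS-0042"})
print("Audit trail intact:", verify_trail())
```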
Challenges and Considerations
Bias and Data Quality: Poor-quality or biased training datasets can result in faulty AI outcomes, affecting everything from drug discovery to patient diagnostics. Both the FDA and EMA emphasise the importance of bias mitigation and data governance to ensure AI systems operate reliably and safely. Regulatory bodies expect organisations to adopt rigorous data quality standards and bias detection mechanisms.
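As a simple illustration of what a bias detection mechanism can look like in practice, the sketch below compares a hypothetical model's sensitivity across demographic subgroups and flags any group that falls more than a chosen margin below the overall figure. The column names, the margin, and the toy data are assumptions made purely for illustration.

```python
# Hypothetical sketch of a simple subgroup bias check on evaluation results.
# Column names, the fairness margin, and the toy data are illustrative
# assumptions, not values required by any regulator.

import pandas as pd

# Toy evaluation results: one row per case, with the true label,
# the model's prediction, and a demographic attribute of interest.
results = pd.DataFrame({
    "sex":        ["F", "F", "F", "M", "M", "M", "M", "F"],
    "true_label": [1,   1,   0,   1,   1,   1,   0,   1],
    "predicted":  [1,   0,   0,   1,   1,   1,   0,   1],
})

MAX_SENSITIVITY_GAP = 0.10  # assumed acceptable gap versus overall sensitivity


def sensitivity(df: pd.DataFrame) -> float:
    """True positive rate: share of actual positives the model identified."""
    positives = df[df["true_label"] == 1]
    return float((positives["predicted"] == 1).mean()) if len(positives) else float("nan")


overall = sensitivity(results)
print(f"Overall sensitivity: {overall:.2f}")

for group, group_df in results.groupby("sex"):
    s = sensitivity(group_df)
    flag = "REVIEW" if (overall - s) > MAX_SENSITIVITY_GAP else "ok"
    print(f"  {group}: sensitivity = {s:.2f} [{flag}]")
```

Checks like this do not prove a model is fair, but they give quality teams a documented, repeatable trigger for deeper investigation.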
Explainability and Transparency: One of AI's most significant challenges in healthcare and life sciences is the "black-box" nature of many machine learning models. FDA, EMA, and MHRA require AI systems to be explainable, especially in high-risk applications like clinical decision support systems. This transparency can enable healthcare providers and regulators to trust AI decisions.
Model Drift and Continuous Validation: AI models tend to degrade in performance over time, a phenomenon known as model drift. Continuous monitoring, validation, and retraining of AI models are necessary to maintain compliance, especially in GxP environments. This is a core requirement across multiple guidelines, including those from the FDA and EMA; both stress the need for ongoing validation, especially when AI systems interact with new real-world data.
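One common way to operationalise this is to compare the distribution of incoming production data against the data the model was validated on, and to trigger review or revalidation when the two diverge. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy on a single numeric feature; the simulated data, significance threshold, and trigger action are illustrative assumptions, not a mandated method.

```python
# Hypothetical sketch of input-drift monitoring for one numeric feature.
# The simulated data, significance level, and "trigger revalidation" action
# are illustrative assumptions, not a regulator-mandated procedure.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Reference distribution: the feature as observed in the validation dataset.
reference = rng.normal(loc=0.0, scale=1.0, size=5000)

# Recent production data: simulated here with a shifted mean to mimic drift.
production = rng.normal(loc=0.4, scale=1.0, size=1000)

ALPHA = 0.01  # assumed significance level for declaring drift

statistic, p_value = ks_2samp(reference, production)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.4f}")

if p_value < ALPHA:
    # In a real system this would raise a deviation and schedule
    # revalidation or retraining under the change control process.
    print("Drift detected: trigger review and revalidation workflow")
else:
    print("No significant drift detected for this feature")
```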
Double Certification and Inconsistent Definitions: Different jurisdictions sometimes define AI-related terms (such as "high-risk systems") differently. This can lead to global deployment challenges. Additionally, the need for dual certification, such as under both the Medical Device Regulation (MDR) and the EU AI Act, can increase complexity and costs for companies.
Challenges Due to Lack of Clarity from Regulators, and Recommendations for Organisations
Although there is increasing alignment between regulatory frameworks, some challenges remain, leading to a lack of clarity. These include:
Inconsistent definitions: Different jurisdictions define key terms (like high-risk AI systems or explainability) differently, often leading to confusion.
Double certification: AI systems that fall under medical device regulations may need dual certification, complicating compliance efforts and delaying market entry.
Undefined update procedures: While the FDA offers some clarity on predetermined change control, such is not the case for other regulators, who provide limited guidance on how the revalidation of AI systems should be controlled.
Organisations can take a few steps to address these challenges:
Adopt a harmonised framework: Align AI systems with globally recognised standards such as GMLP and NIST AI RMF to ensure compliance across multiple jurisdictions.
Engage with regulators early: Regular consultations with regulatory agencies can help clarify ambiguities and streamline the overall process.
Establish robust governance: Implement comprehensive governance structures to manage risk, bias, and data quality throughout the AI lifecycle.
Utilise regulatory sandboxes: Leverage regulatory sandboxes, such as those provided for under the EU AI Act, to test AI models in controlled environments, ensuring a balance between compliance and innovation.
Watch adaptive AI systems: Once their reliability improves, these can mitigate the burden of constant revalidation.
Consider algorithmic auditing: This is emerging as a key tool for handling update procedures across the regulatory landscape.
In Conclusion
Regulatory compliance for AI in GxP-regulated environments is evolving, with converging expectations around data integrity, transparency, human oversight, and continuous monitoring. Organisations can ensure their AI systems are safe, effective, and compliant by adhering to guidelines from regulatory agencies. Understanding these global guidelines and addressing challenges through proactive engagement and a risk-based approach will allow organisations to harness the transformative potential of AI in LSHC. Future regulatory trends also point to increasing collaboration between agencies; initiatives such as the FDA-EMA joint working group on AI will pave the way for further innovation as technological benefits evolve.
Disclaimer: This article is the author's point of view on the subject, based on his understanding and interpretation of the regulations and their application. Do note that AI has been leveraged for the article's first draft to build an initial story covering the points provided by the author. After that, the author reviewed, updated, and appended to it to ensure accuracy and completeness to the best of his ability. Please review it before using it for your intended purpose. It is free for anyone to use as long as the author is credited for the piece of work.