Artificial Intelligence (AI) and Quality Risk Management - Maintaining the State of Validation
Ankur Mitra
Quality, Regulations, Technology - Connecting the Dots - And a Lot of Questions
Introduction
Artificial Intelligence (AI) and Generative AI (GenAI) can overhaul the entire life sciences and healthcare industry, driving innovation and efficiencies previously unheard of and leading to more informed decision-making. Yet, industry leaders often view these advancements with an air of suspicion. After over a dozen discussions with industry leaders, the common skepticism is about how we will implement these technologies in a regulated ecosystem. Every discussion that I have had begins with these two questions:
How do we maintain the validated state of AI models that keep learning and evolving?
How much human intervention is the right amount, and where should it be placed?
These are critical questions. If AI systems lose their validated state, or if we over-rely on human intervention, we may fail to achieve the true potential of AI. This has led to the emergence of a "has-anyone-done-it-before" syndrome, where industry leaders are waiting for someone else to take the first plunge. My suggestion to them has been to take a balanced approach, one that achieves the expected efficiencies while ensuring that patient safety, product quality, and data integrity are never compromised. Quality Risk Management (QRM) provides the structured approach we need to maintain AI systems' validated state and define the right level of human intervention. Moreover, moving beyond every individual IT transaction, we need to view AI systems through the lens of the overall business process. Here is how I see QRM as the foundation for deploying AI systems that are compliant, reliable, and effective. Read along, and I may be able to answer some of the questions popping up in your mind right now.
Maintaining the Validated State of AI Models
Risk-Based Validation Is Key
For AI in a GxP environment, I cannot stress enough the importance of a risk-based approach to validation. Some will ask what is new here, given Computer Software Assurance (CSA) and everything around it, arguing that the industry already knows and understands CSA and risk very well (though even there, acceptance is still in its early stages). We need to understand that in an AI landscape, this is not optional; it is a mandatory starting point.
Let us break this down to the central question of how we can achieve all this:
Preemptive Checks: We should ensure the model undergoes rigorous testing with predefined test cases before deployment.
Restricted Learning Zones: We should define which aspects of the model are allowed to learn, minimizing the risk of unintentional changes that could invalidate the model. Build a robust architectural design and set up governance policies that define which parts of the model are eligible for updates (see the sketch after this list).
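To make restricted learning zones concrete, here is a minimal sketch in Python, assuming a PyTorch model; the component names and the policy mapping are illustrative assumptions, not a reference implementation. It freezes every parameter of a component designated as locked so that retraining cannot alter the validated parts:

```python
import torch.nn as nn

# Hypothetical governance policy: which named components of the model
# may be updated during retraining. Anything not explicitly marked
# "trainable" stays frozen to preserve the validated state.
UPDATE_POLICY = {
    "feature_extractor": "trainable",  # lower risk: may adapt to new data
    "dose_classifier": "locked",       # high risk: re-validation required
    "report_generator": "locked",      # high risk: regulatory output
}

def apply_update_policy(model: nn.Module, policy: dict) -> None:
    """Freeze the parameters of every component marked 'locked'."""
    for name, module in model.named_children():
        eligible = policy.get(name, "locked") == "trainable"  # default: locked
        for param in module.parameters():
            param.requires_grad = eligible
```

Called before any retraining run, and with the optimizer built only from parameters where requires_grad is True, this gives an auditable, code-enforced boundary between what may learn and what may not.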
Balancing Learning and Adaptability
There is always a fine line here. If we over-restrict the model's ability to learn, we may prevent it from adapting to new data patterns, which could degrade its accuracy. On the other hand, uncontrolled learning can make the model unreliable. Balancing learning and adaptability in AI systems within a GxP-regulated environment requires a well-thought-out, strategic, and risk-based approach. The key lies in assessing and classifying the risks associated with different components of the model and tailoring actions accordingly.
High-risk areas, such as components handling regulatory compliance or critical data interpretation, should remain locked to preserve their validated state. These areas must undergo rigorous re-validation before any changes are permitted. Moderate-risk areas may allow controlled updates but require periodic reviews to ensure compliance and performance integrity. Low-risk areas, on the other hand, can adapt more freely, as long as they undergo regular monitoring to catch any potential issues early. In the end, we should adopt a balanced approach, periodically testing new data in a controlled sandbox environment to determine if the model can be updated without compromising its validated state.
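One way to encode this tiering is as an explicit, auditable policy object. The sketch below is a minimal illustration in Python; the component names, tier assignments, and review intervals are assumptions that would come from your own risk assessment:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"          # locked: full re-validation before any change
    MODERATE = "moderate"  # controlled updates with periodic review
    LOW = "low"            # free to adapt under routine monitoring

# Hypothetical review cadence per tier, in days; set from QRM procedures.
REVIEW_INTERVAL_DAYS = {RiskTier.MODERATE: 30, RiskTier.LOW: 90}

@dataclass(frozen=True)
class ComponentPolicy:
    component: str
    tier: RiskTier

    def updates_allowed(self) -> bool:
        """High-risk components stay locked in their validated state."""
        return self.tier is not RiskTier.HIGH

# Illustrative classification; the component names are invented examples.
policies = [
    ComponentPolicy("regulatory_text_generator", RiskTier.HIGH),
    ComponentPolicy("anomaly_scorer", RiskTier.MODERATE),
    ComponentPolicy("ui_suggestion_ranker", RiskTier.LOW),
]

for p in policies:
    print(p.component, p.tier.value, "updates allowed:", p.updates_allowed())
```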
Strategically Positioned Human Intervention
Think Business Process, Not IT Transactions
When discussing human intervention, I always emphasize the importance of viewing the system from a business process perspective. AI models can impact multiple interconnected IT transactions, but we should evaluate risk based on the overall business impact. Here are a few of my recommendations:
Human-in-the-Loop (HITL) Strategy
A Human-in-the-Loop strategy is essential but must be well-calibrated. As I have mentioned earlier, too much of it may defeat the purpose, and too little of it may impact reliability. So, I would design HITL approaches to ensure humans are involved where and when it matters most.
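To illustrate one common HITL pattern, here is a minimal sketch; the confidence threshold, the GxP-criticality flag, and the routing labels are assumptions, not a prescribed design. Low-confidence or GxP-critical outputs are routed to a human reviewer instead of being auto-approved:

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    decision: str
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    gxp_critical: bool  # does this output touch a GxP-critical record?

CONFIDENCE_THRESHOLD = 0.95  # hypothetical; derive it from validation data

def route(output: ModelOutput) -> str:
    """Decide whether an AI output may proceed or needs human review."""
    if output.gxp_critical:
        return "human_review"  # critical business impact: always a human gate
    if output.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"  # the model is unsure: escalate
    return "auto_approve"      # low impact and high confidence

# Example: a confident suggestion that still touches a GxP-critical record.
print(route(ModelOutput("release_batch", 0.97, gxp_critical=True)))  # human_review
```

The design point is that the gate is driven by business impact first and model confidence second, which keeps human attention focused where it matters most.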
Preemptive Quality Checks and Monitoring
Quality checks should never be static. To keep them effective, I recommend that they be reviewed and refreshed as the model and the data it sees evolve.
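As one example of a non-static check, the sketch below implements a simple input-drift monitor; the single-feature setup, sample sizes, and alert threshold are assumptions. It compares recent production inputs against the distribution the model was validated on and flags divergence for investigation:

```python
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical alert threshold

def input_has_drifted(validated_sample: np.ndarray,
                      recent_sample: np.ndarray) -> bool:
    """Return True if recent inputs diverge from the validated distribution.

    Uses a two-sample Kolmogorov-Smirnov test on one feature; real
    monitoring would cover every critical input and output feature.
    """
    _statistic, p_value = ks_2samp(validated_sample, recent_sample)
    return p_value < DRIFT_P_VALUE

# Example with synthetic data: a shifted distribution triggers the alert.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # validation-time data
recent = rng.normal(loc=0.5, scale=1.0, size=5_000)    # drifted production data
print(input_has_drifted(baseline, recent))  # True -> investigate, re-verify
```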
A Few Considerations for Accuracy, Reliability, and Consistency
To ensure our AI systems are not just compliant but also performant, I would not change the fundamentals - they remain constant irrespective of technological changes!
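Consistency, for instance, can still be demonstrated mechanically. The sketch below is a minimal illustration; the predict function, inputs, and tolerance are assumptions. It reruns the same validated inputs and confirms the outputs agree within an agreed tolerance:

```python
import numpy as np

TOLERANCE = 1e-6  # hypothetical; use zero for fully deterministic pipelines

def is_consistent(predict, inputs, runs: int = 5) -> bool:
    """Return True if repeated runs on identical inputs agree within tolerance."""
    reference = np.asarray([predict(x) for x in inputs])
    for _ in range(runs - 1):
        repeat = np.asarray([predict(x) for x in inputs])
        if not np.allclose(reference, repeat, atol=TOLERANCE):
            return False
    return True

# Example with a stand-in deterministic model.
model = lambda x: 2.0 * x + 1.0
print(is_consistent(model, [0.0, 1.5, 3.0]))  # True
```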
In Conclusion: Embracing AI Responsibly in GxP
Quality Risk Management is not just a nice-to-have; it is the linchpin for deploying AI in GxP-regulated environments. By adopting a "true" risk-based approach to validation and strategically positioning human intervention, we can harness AI's capabilities without compromising patient safety or quality compliance. The key is shifting our mindset from individual IT transactions to a comprehensive view of the business process. As AI technology evolves, our QRM frameworks must remain agile, incorporating the latest regulatory guidance and best practices. This balance of innovation and compliance will allow us to fully leverage AI's potential while ensuring the utmost integrity, reliability, and safety.
Disclaimer: This article is the author's point of view on the subject, based on his understanding and interpretation of the regulations and their application. Do note that AI has been leveraged for the article's first draft to build an initial story covering the points provided by the author. After that, the author reviewed, updated, and appended it to ensure accuracy and completeness to the best of his ability. Please review it for your intended purpose before use. It is free for anyone to use as long as the author is credited for the piece of work.