Artificial Intelligence (AI) and Quality Risk Management - Maintaining state of validation

Introduction

Artificial Intelligence (AI) and Generative AI (GenAI) have the potential to overhaul the life sciences and healthcare industry, driving innovation and efficiency at a scale not seen before and enabling more informed decision-making. Yet industry leaders often view these advancements with suspicion. Across more than a dozen discussions I have had with them, the common skepticism centers on how we will implement AI in a regulated ecosystem. Every one of those discussions begins with the same two questions:

  1. How do we keep the model validated if it keeps learning and changing?
  2. How much human intervention is necessary, and does it defeat the purpose of AI?

These are critical questions. If AI systems lose their validated state, or if we over-rely on human intervention, we may fail to realize AI's true potential. This has led to the emergence of a "has anyone done it before?" syndrome, where industry leaders wait for someone else to take the first plunge. My suggestion to them has been that a balanced approach can deliver the expected efficiencies while ensuring that patient safety, product quality, and data integrity are never compromised. Quality Risk Management (QRM) provides the structured approach we need to maintain the validated state of AI systems and to define the right level of human intervention. Moreover, rather than focusing on every individual IT transaction, we need to view AI systems through the lens of the overall business process. Here is how I see QRM as the foundation for deploying AI systems that are compliant, reliable, and effective. Read on, and I may be able to answer some of the questions forming in your mind right now.

Maintaining the Validated State of AI Models

Risk-Based Validation Is Key

For AI in a GxP environment, I cannot stress enough the importance of a risk-based approach to validation. Some will ask what is new there, given Computer Software Assurance (CSA) and everything else; the industry already knows and understands CSA and risk well (though even there, acceptance is still at an early stage). We need to understand that in an AI landscape, this is not optional; it is a mandatory starting point.

Let us break this down to the central question: how do we achieve all this?

  • Define the Critical Parameters: Start by identifying which model parameters most affect the AI's performance and quality. By establishing clear thresholds and constraints around them, we can control how much the model learns and ensure that it does not drift away from its validated state.
  • Controlled Learning Mechanisms: For me, governing the model's ability to learn and adapt is the most effective step. This involves "locking" the model after validation and restricting learning to controlled environments. For example:

    ◦ Preemptive Checks: Ensure the model undergoes rigorous testing with predefined test cases before deployment.

    ◦ Restricted Learning Zones: Define which aspects of the model are allowed to learn, minimizing the risk of unintentional changes that could invalidate the model. Build a robust architectural design and set up governance policies that specify which parts of the model are eligible for updates.

  • Continuous Monitoring and Drift Detection: Implementing robust monitoring tools is crucial. These tools should continuously track model performance, with alerts for any deviation that indicates potential model drift. This layered monitoring approach ensures early detection and correction, preserving the system's reliability (a minimal sketch follows this list).
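
To make the drift-detection idea concrete, here is a minimal Python sketch of a rolling performance monitor. The metric, window size, tolerance, and alerting hook are illustrative assumptions, not a prescribed implementation; in a GxP setting these values would come from the validation exercise and sit under change control.

```python
# Minimal sketch of a performance-drift monitor. Metric, window size, and
# tolerance are illustrative assumptions; real values would be derived from
# the validation exercise and documented formally.
from collections import deque
from statistics import mean


class DriftMonitor:
    """Tracks a rolling performance metric against a validated baseline."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window_size: int = 100):
        self.baseline = baseline_accuracy      # established during validation
        self.tolerance = tolerance             # allowed deviation before alert
        self.window = deque(maxlen=window_size)

    def record(self, prediction_correct: bool) -> None:
        self.window.append(1.0 if prediction_correct else 0.0)

    def check(self) -> bool:
        """Return True if performance has drifted outside the validated range."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough observations yet
        drifted = abs(mean(self.window) - self.baseline) > self.tolerance
        if drifted:
            # Hook for the alerting/escalation mechanism described above.
            print(f"ALERT: rolling accuracy {mean(self.window):.3f} outside "
                  f"validated range {self.baseline} +/- {self.tolerance}")
        return drifted
```

In practice, such a monitor would feed the deviation-management process rather than print to a console, but the layering is the same: record, compare against the validated baseline, alert on breach.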

Balancing Learning and Adaptability

There is always a fine line here. If we over-restrict the model's ability to learn, we may prevent it from adapting to new data patterns, which could reduce its accuracy. On the other hand, uncontrolled learning can make the model unreliable. Balancing learning and adaptability in AI systems within a GxP-regulated environment requires a well-thought-out, strategic, and risk-based approach. The key lies in assessing and classifying the risks associated with different components of the model and tailoring controls accordingly.

High-risk areas, such as components handling regulatory compliance or critical data interpretation, should remain locked to preserve their validated state. These areas must undergo rigorous re-validation before any changes are permitted. Moderate-risk areas may allow controlled updates but require periodic reviews to ensure compliance and performance integrity. Low-risk areas, on the other hand, can adapt more freely, as long as they undergo regular monitoring to catch any potential issues early. In the end, we should adopt a balanced approach, periodically testing new data in a controlled sandbox environment to determine if the model can be updated without compromising its validated state.
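
As a simple illustration of that tiering, the sketch below gates proposed updates by risk class. The component names and their tiers are hypothetical; a real classification would come from a documented risk assessment.

```python
# Illustrative sketch of risk-tiered change control for model components.
# Component names and tiers are hypothetical examples only.
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"          # locked; full re-validation before any change
    MODERATE = "moderate"  # controlled updates with periodic review
    LOW = "low"            # may adapt freely under routine monitoring


COMPONENT_RISK = {
    "regulatory_rules_engine": RiskTier.HIGH,
    "critical_data_interpreter": RiskTier.HIGH,
    "report_summarizer": RiskTier.MODERATE,
    "ui_text_ranker": RiskTier.LOW,
}


def update_allowed(component: str, revalidated: bool, reviewed: bool) -> bool:
    """Gate a proposed update according to the component's risk tier."""
    tier = COMPONENT_RISK[component]
    if tier is RiskTier.HIGH:
        return revalidated      # only after formal re-validation
    if tier is RiskTier.MODERATE:
        return reviewed         # only after a documented periodic review
    return True                 # low risk: allowed, monitored downstream
```

The point of encoding the tiers explicitly is that the sandbox testing described above can then promote a candidate update only through the gate its risk class demands.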

Strategically Positioned Human Intervention

Think Business Process, Not IT Transactions

When discussing human intervention, I always emphasize the importance of viewing the system from a business process perspective. AI models can impact multiple interconnected IT transactions, but we should evaluate risk based on the overall business impact. Here are a few of my recommendations:

  • Map Out Human Touchpoints: Analyze the business process and identify where human oversight is most critical. Human intervention is necessary at multiple stages for high-risk processes that could affect patient safety. For lower-risk processes, automated checks can suffice, supplemented by periodic reviews.
  • Frequency and Escalation Mechanisms: It is essential to establish how often humans should review AI outputs. Design and set up an escalation plan for when anomalies or exceptions occur (see the sketch after this list).
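
The sketch below shows one way such an oversight plan could be expressed in code. The process names, review frequencies, and escalation roles are assumptions for illustration only.

```python
# Hypothetical review-frequency and escalation configuration, keyed by
# business process rather than individual IT transaction.
from dataclasses import dataclass


@dataclass
class OversightRule:
    review_frequency: str   # how often humans sample AI outputs
    escalate_to: str        # who is notified when an anomaly occurs


OVERSIGHT_PLAN = {
    # High-risk: human checkpoints at multiple stages
    "batch_release_recommendation": OversightRule("every_output", "qa_lead"),
    # Moderate-risk: sampled review
    "deviation_classification": OversightRule("daily_sample", "qa_reviewer"),
    # Low-risk: automated checks plus periodic review
    "document_tagging": OversightRule("monthly_audit", "process_owner"),
}


def route_anomaly(process: str, detail: str) -> str:
    """Escalate an anomaly to the role defined for that business process."""
    rule = OVERSIGHT_PLAN[process]
    return f"Escalating '{detail}' in {process} to {rule.escalate_to}"


print(route_anomaly("deviation_classification", "unseen deviation category"))
```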

Human-in-the-Loop (HITL) Strategy

A Human-in-the-Loop strategy is essential but must be well-calibrated. As I have mentioned earlier, too much of it may defeat the purpose, and too little of it may impact reliability. So, I would design HITL approaches to ensure humans are involved where and when it matters most:

  • Review and Approve: Humans should have the authority to review and approve AI outputs for critical decisions before execution.
  • Exception Handling: We should leverage AI for routine tasks while reserving human intervention for anomalies that fall outside the defined risk parameters. This way, we maintain efficiency without sacrificing compliance (a routing sketch follows this list).
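
Here is a minimal sketch of such an exception-handling gate, assuming hypothetical confidence and out-of-distribution signals on each output; the threshold shown is a placeholder that a real deployment would derive from its risk assessment.

```python
# Minimal exception-handling gate: routine outputs flow through
# automatically, anything outside defined risk parameters is queued for a
# human. Field names and the confidence floor are illustrative assumptions.


def route_output(output: dict, confidence_floor: float = 0.90,
                 critical: bool = False) -> str:
    """Decide whether an AI output executes automatically or goes to a human."""
    if critical:
        return "HUMAN_REVIEW"        # critical decisions are always reviewed
    if output.get("confidence", 0.0) < confidence_floor:
        return "HUMAN_REVIEW"        # low confidence counts as an anomaly
    if output.get("out_of_distribution", False):
        return "HUMAN_REVIEW"        # outside the validated data patterns
    return "AUTO_EXECUTE"            # routine case: no intervention needed


# A routine, high-confidence output passes straight through; a weak one
# does not.
assert route_output({"confidence": 0.97}) == "AUTO_EXECUTE"
assert route_output({"confidence": 0.55}) == "HUMAN_REVIEW"
```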

Preemptive Quality Checks and Monitoring

Quality checks should never be static. Here are a few of my recommendations to ensure they remain effective:

  • Model Verification and Preemptive Testing: Before deployment, we should rigorously test models to ensure they consistently meet their intended purpose. This should include verifying performance under different scenarios using real-world data through the real-world business process, and stress-testing the model for reliability—consider edge cases, potential drift conditions, and extreme use cases.
  • Data Quality Controls: Data is the backbone of any AI system, and data quality is paramount in GxP. Designing and implementing real-time validation mechanisms and data provenance tracking will ensure alignment with regulations like 21 CFR Part 11 and EU Annex 11. Real-time validation will help catch data quality and integrity issues instantly, allowing for immediate corrective actions (as appropriate) and minimizing the risk of downstream errors. Regular updates to data validation criteria will help ensure our controls remain robust and adapt to evolving regulatory standards (a minimal validation sketch follows this list).
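
As an illustration of real-time validation with provenance tracking, consider the sketch below. The fields, rules, and source-system label are hypothetical; actual rules would mirror the approved data specification and sit under change control.

```python
# Illustrative real-time data validation with provenance metadata. Field
# names and rules are assumptions, not a prescribed specification.
from datetime import datetime, timezone

VALIDATION_RULES = {
    "batch_id": lambda v: isinstance(v, str) and len(v) > 0,
    "ph_value": lambda v: isinstance(v, (int, float)) and 0.0 <= v <= 14.0,
    "timestamp": lambda v: isinstance(v, datetime),
}


def validate_record(record: dict) -> list[str]:
    """Run field-level checks; return a list of failures (empty means pass)."""
    failures = []
    for field, rule in VALIDATION_RULES.items():
        if field not in record:
            failures.append(f"missing field: {field}")
        elif not rule(record[field]):
            failures.append(f"failed check: {field}={record[field]!r}")
    return failures


def provenance_entry(record: dict, source: str) -> dict:
    """Attach who/where/when metadata to support data provenance tracking."""
    return {
        "record": record,
        "source_system": source,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
```

Failures returned by a check like this are what trigger the immediate corrective actions mentioned above, and the provenance entry is what makes those actions traceable afterwards.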

A Few Considerations for Accuracy, Reliability, and Consistency

To ensure our AI systems are not just compliant but also performant, I would not change the fundamentals - they remain constant irrespective of technological changes!

  • Version Control and Traceability: Maintaining accurate and complete version control is crucial. Every change to the model and data pipeline should be documented and traceable. If something goes wrong, we need to know the exact source of the issue and how it was introduced. I often say this in my sessions: from the documented details, we should be able to reverse-engineer the exact steps that led to the outcome.
  • Data Quality Assurance: Ensuring data quality isn't just a one-time task; it requires an ongoing, proactive approach. I recommend implementing continuous data validation processes, which involve automated checks that monitor data integrity, consistency, and accuracy in real-time. This includes deploying anomaly detection algorithms that flag deviations immediately, enabling corrective measures before any significant impact occurs. Additionally, real-time data profiling and validation rules should be integrated into data pipelines to monitor for data drift—where the statistical properties of data change over time—so adjustments can be made promptly (a drift-check sketch follows this list). Regular audits and periodic reviews of validation criteria are also essential to ensure the system remains compliant and robust as new data is introduced.
  • Regulatory Compliance Audits: Periodic audits go beyond checking whether the model works; they verify that the entire system, from data input to model output, adheres to regulatory standards. In the context of AI, look at data first. With traditional technologies, data is often treated as just one part of the system; here, data should be evaluated independently across its lifecycle, and then the combination of data and system should be evaluated together. I recommend scheduling regular audits to catch and address potential compliance issues early.
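
To illustrate the data-drift monitoring mentioned above, here is a small sketch that compares live data for one feature against a validated reference sample using a two-sample Kolmogorov-Smirnov test. The feature values, sample sizes, and significance level are illustrative assumptions.

```python
# Sketch of a data-drift check: compare live data for one feature against a
# validated reference sample with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp


def feature_has_drifted(reference: np.ndarray, current: np.ndarray,
                        alpha: float = 0.01) -> bool:
    """Flag drift when the samples are unlikely to share one distribution."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha


# Synthetic demonstration: a shifted pH-like distribution is flagged.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=7.0, scale=0.2, size=1000)  # validated reference
live = rng.normal(loc=7.5, scale=0.2, size=1000)      # drifted live data
print(feature_has_drifted(baseline, live))            # expected: True
```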

In Conclusion: Embracing AI Responsibly in GxP

Quality Risk Management is not just a nice-to-have—it’s the linchpin for deploying AI in GxP-regulated environments. By adopting a "true" risk-based approach to validation and strategically positioning human intervention, we can harness AI’s capabilities without compromising patient safety or quality compliance. The key is shifting our mindset from focusing on individual IT transactions to a comprehensive view of the business process. As AI technology evolves, our QRM frameworks must remain agile, incorporating the latest regulatory guidance and best practices. This balance of innovation and compliance will allow us to fully leverage AI’s potential while ensuring the utmost integrity, reliability, and safety.


Disclaimer: This article is the author's point of view on the subject, based on his understanding and interpretation of the regulations and their application. Do note that AI was leveraged for the article's first draft to build an initial story covering the points provided by the author. The author has since reviewed, updated, and appended it to ensure accuracy and completeness to the best of his ability. Please review it for your intended purpose before use. It is free for use by anyone, as long as the author is credited for the work.
