What the FDA and EMA Discussion Papers Reveal About AI in Clinical Research
The use of artificial intelligence (AI) with the right human oversight can significantly improve clinical processes. For example, AI-assisted solutions can help study teams develop their studies faster by reducing repetitive and time-consuming tasks. However, AI still lacks the standardization that would address real and perceived risks by making the technology safe, compliant, and predictable.
The discussion over standardizing AI for safe use in clinical research is ongoing. With the recent publication of discussion papers on AI in drug development by the US Food and Drug Administration (FDA) and European Medicines Agency (EMA), regulatory bodies are now adding their voices to the conversation.
The FDA and EMA papers are not efforts to legislate AI technology. Instead, the papers are meant to kick-start conversations in the research community on optimizing the use of AI in drug development without unintentionally harming patients. In a larger sense, these papers symbolize the growing acceptance of AI in clinical trials.
Although the EMA and FDA differ in their approach to the ethics of medical applications for AI technology, they share common concerns: balancing innovation with patient safety, finding transparency within relatively opaque systems, and creating unbiased, trustworthy AI models. Understanding how regulators approach the ethics of AI will benefit researchers reviewing the uses of AI technology in their trials.
EMA: Regulatory scrutiny for AI depends on risk
The EMA reflection paper recommends a risk-based approach to monitoring AI in drug development. Some uses of AI are considered low risk, such as early drug discovery research that doesn’t directly support the overall body of evidence for the safety and efficacy of a drug. AI models used for tasks like dosing or treatment assignment are considered high-risk because they could directly impact participant health.
Where there is more risk, there needs to be more involvement from regulatory bodies. According to the EMA, “If an AI/ML system is used in the context of medicinal product development, evaluation, or monitoring, and is expected to impact, even potentially, on the benefit-risk of a medicinal product, early regulatory interaction […] is advised.”
FDA: Emphasis on monitoring and human oversight
While the EMA takes a risk-based approach to AI in research, the FDA discussion paper focuses on how to monitor AI systems. Reviewing input data for error and bias and then using tools to trace the model’s decision-making gives researchers more control over the model’s output. Reliable AI systems mitigate bias and provide “explainable” results that regulators can trace back to data.
In addition to ensuring systems are reliable, the FDA recommends human oversight throughout the lifecycle of AI models. Involving experts in each stage of the process, not only during early development, will help catch problems before they impact results.
“Human-led AI/ML governance can help ensure adherence to legal and ethical values, where accountability and transparency are essential for the development of trustworthy AI. Such governance and clear accountability may extend across the spectrum of planning, development, use, modification, and discontinuation (as applicable) of AI/ML in the drug development process.” -US Food and Drug Administration
3 considerations for using AI in your clinical research
Despite their different approaches, the FDA and EMA share similar suggestions for safely incorporating AI in trials.
Develop an AI risk management plan
Creating a plan to assess risks is essential to keeping them in check. Consider comprehensive methods for documentation, procedures for ensuring traceability, and the rationale for deviations from the plan. AI-assisted technology can reduce the time researchers spend on repetitive tasks like documentation, helping them manage risks without significantly increasing their administrative workload.
Addressing risks throughout the lifecycle of the AI helps ensure ethical use. The EMA explains risks “need to be mitigated both during model development and deployment to ensure the safety of patients and integrity of clinical study results.”
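To make this concrete, the documentation and traceability elements of such a plan can be captured as structured records rather than free text. The sketch below is purely illustrative; the class and field names (`AIRiskEntry`, `lifecycle_stage`, `deviation_rationale`) are hypothetical and not drawn from either agency's paper.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskEntry:
    """One entry in a hypothetical AI risk register for a clinical trial."""
    risk: str                      # description of the identified risk
    lifecycle_stage: str           # e.g. "development" or "deployment"
    mitigation: str                # planned mitigation measure
    deviation_rationale: str = ""  # documented rationale if the plan is deviated from
    logged_on: date = field(default_factory=date.today)

# Example: logging a deployment-stage risk identified during model monitoring
register = [
    AIRiskEntry(
        risk="Model drift as site enrollment demographics shift",
        lifecycle_stage="deployment",
        mitigation="Quarterly re-validation against held-out site data",
    )
]

# Traceability: every entry carries its lifecycle stage and its mitigation
for entry in register:
    print(f"[{entry.lifecycle_stage}] {entry.risk} -> {entry.mitigation}")
```

Keeping each risk tied to a lifecycle stage mirrors the EMA's point that mitigation must cover both development and deployment.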
Feed AI quality data
Quality input data helps ensure AI systems provide reliable output. The FDA recommends reviewing input data for bias, integrity, provenance, security, relevance, replicability, reproducibility, and representativeness. AI tools can simplify the complex task of gathering input data and feeding it into algorithms.
The EMA recommends that researchers document how they acquired and handled data and do an exploratory analysis to ensure the data is fair, representative, and relevant. Since AI models are only as good as their data, “[a]ll efforts should be made to acquire a balanced training dataset, considering the potential need to over-sample rare populations, and taking all relevant bases of discrimination […] into account.” -European Medicines Agency
Prioritize transparency
Transparency, the ability to understand an AI model’s inner workings or explain the decisions made by its algorithm, is a key concern for regulatory bodies. The EMA allows “black box” models when transparent models are unavailable. However, researchers using these proprietary models must build the rationale for their decision and have training metrics, validation and test results, and risk management plans in place.
In addition to transparent models, transparent communication about the use of AI encourages acceptance from regulatory bodies. According to the FDA, “transparency and documentation across the entire product life cycle can help build trust in the use of AI/ML. In this regard, it may be important to consider pre-specification and documentation of the purpose or question of interest, context of use, risk, and development of AI/ML.”
Where transparency is a key concern for regulatory bodies, usability is key for participants and study teams. Choosing AI-assisted technology that is easy to navigate and integrate into existing systems makes a huge difference for those involved in trials.?
AI in research: A very important conversation
From streamlining the design for decentralized clinical trials to refining recruitment and participant selection to supporting long-term follow-up, both the EMA and FDA acknowledged the potential of AI technology in improving the current clinical processes.?
The FDA and EMA discussion papers begin regulatory involvement in using AI in clinical trials. If anything, these statements tell us that regulatory agencies are taking AI seriously and consider the technology beneficial for developing and testing medications to improve patients’ lives.
Considering how AI might fit in your next trial? Castor is leading the way in finding strategic applications of AI to improve research. Talk to an expert about how our AI-enabled technology can power your clinical trial.
Bal Harbor artist, designer, inventor, & entrepreneur
12 个月NEED AI EXPERT FOR SPEEDY CLINICAL TRIALS