EMA Cites Risk and Impact in the Role of AI
Last week the European Medicines Agency (EMA) released a reflection paper on the future of AI and ML (the process by which models are trained from data without explicit programming) across the entire product lifecycle of medicinal products. The EMA paper offers a few guardrails around how we think about using AI as we discover and use medicinal products. Specifically, it addresses the risks of incorporating AI, the impact of its use, and the need to be willing to look under the hood at how AI is developed and employed.
AI risk and its impact on the medicine lifecycle
The EMA reflection document walks through the significant steps in the lifecycle of a medicine. Each phase has potential risks and a range of impacts for using AI, from discovery to clinical trials to manufacturing to the post-authorization phase. While the paper focuses on a drug’s lifecycle, the caveats and procedures mentioned look like good practices for wherever we use AI.
The data generated from many sectors will feed into AI models over time, and it is good to be prepared. The goal is to recognize when “new risks are introduced that need to be mitigated to ensure the safety of patients and integrity of clinical study results.” For instance:
The EMA encourages developers and sponsors to interact with regulators on the risks and impact of their use of AI—especially where “clearly applicable written guidance” is available. The timing of those regulatory conversations may be guided by the level of impact of using AI. For high-impact cases, discussions at the planning stage may be necessary.
Maintain good data practices
The EMA is willing to dive into the details of AI and ML. Acquiring and augmenting datasets must follow good data practices, with documentation of data processing steps such as cleaning, transformation, imputation, and annotation. Model development is worth paying attention to because it can affect how generalizable the results are. Training and model validation should be assessed in high-risk, high-impact settings, and newly acquired data should be prospectively tested.
Ethical concerns about the use of AI
The EMA reflection paper is careful to describe good ethics around AI, as presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI).
AI and ML show great promise for “enhancing all phases of the medicinal product lifecycle.” That’s why it is important to develop standard operating procedures and best practices around the uses of the tools. Given the data-driven nature of the tools, users must be proactive about removing bias in AI/ML applications. Adhering to legal requirements is expected and essential, along with following ethical guidelines.
These EMA reflections present a cautious step toward embracing this new set of tools. And collecting your data electronically is a good beginning for using AI. Learn how Castor helps streamline data for your studies here.