EMA Cites Risk and Impact on the role of AI
Artificial Intelligence (AI) will be a crucial partner in moving toward a future of curing disease. Establishing a few caveats and boundaries today will help us make the best use of this profound tool set.

Last week the European Medicines Agency (EMA) released a reflection paper on the future of AI and ML (the process by which models are trained from data without explicit programming) across the entire product lifecycle of medicinal products. The EMA paper offers a few guardrails around how we think about using AI as we discover and use medicinal products. Specifically, it asks us to weigh the risks of incorporating AI, consider the impact of its use, and be willing to look under the hood at how AI is developed and employed.

AI risk and its impact on the medicine lifecycle

The EMA reflection document walks through the significant steps in the lifecycle of a medicine. Each phase has potential risks and a range of impacts for using AI, from discovery to clinical trials to manufacturing to the post-authorization phase. While the paper focuses on a drug’s lifecycle, the caveats and procedures mentioned look like good practices for wherever we use AI.

The data generated from many sectors will feed into AI models over time, and it is good to be prepared. The goal is to recognize when “new risks are introduced that need to be mitigated to ensure the safety of patients and integrity of clinical study results.” For instance:

  • In the drug discovery phase, AI may be low-risk from a regulatory perspective because lack of performance affects the sponsor (versus the regulatory body). But if these results contribute to the total body of evidence, any non-clinical development of the models used for AI must be reviewed for adherence to ethical issues, risks of bias, and other sources of discrimination. Keeping that in mind from the beginning is essential.
  • For clinical trials, expect the complete model architecture to be available for comprehensive assessment during market authorization or clinical trial application. When AI systems are used for the clinical management of an individual patient, they may be considered medical devices and subject to specific guidance that may affect AI/ML.
  • In the post-authorization phase (what the FDA calls the “postmarketing surveillance” phase), the EMA expects AI to play a significant role. Ongoing studies may use AI to help gauge efficacy and safety and to support pharmacovigilance activities like adverse event reporting. The paper’s authors recognize that incremental learning will occur in AI modeling but that it is still critical for the marketing authorization applicant to “validate, monitor, and document model performance.”
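The expectation to “validate, monitor, and document model performance” can be illustrated with a minimal sketch. This is not an EMA-endorsed procedure; the function name, the batch format, and the 0.8 accuracy threshold are all assumptions made for this example.

```python
# Illustrative sketch: log per-batch model accuracy over time and flag
# batches where performance degrades, as one way to monitor and document
# a deployed model. Names and thresholds are assumptions, not EMA guidance.

def monitor_performance(batches, threshold=0.8):
    """Return a per-batch accuracy log and a list of flagged batch indices.

    `batches` is an iterable of (predictions, labels) pairs.
    """
    log, flagged = [], []
    for i, (preds, labels) in enumerate(batches):
        correct = sum(p == y for p, y in zip(preds, labels))
        accuracy = correct / len(labels)
        log.append({"batch": i, "accuracy": accuracy})
        if accuracy < threshold:
            flagged.append(i)  # candidate for review and re-validation
    return log, flagged

# Example: the second batch falls below the illustrative threshold.
log, flagged = monitor_performance([
    (["a", "b", "a"], ["a", "b", "a"]),  # all correct
    (["a", "a", "a"], ["a", "b", "b"]),  # only one correct
])
```

The point of the audit log is that each flagged batch leaves a documented trail a reviewer can inspect, rather than silently retraining the model.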

The EMA encourages interaction with regulators on the risks and impact of their use of AI—especially where no “clearly applicable written guidance” is available. The timing of those regulatory conversations may be guided by the level of impact of using AI. For high-impact cases, discussions at the planning stage may be necessary.

Maintain good data practices

The EMA is okay with diving into the details of AI and ML. Acquiring and augmenting datasets must follow good data practices of documenting data processing, like data cleaning, transformation, imputation, and annotation. Model development is worth paying attention to because it can affect how generalizable the results are. Training and model validation should be assessed for high-risk, high-impact settings, and newly acquired data should be prospectively tested.
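A documented processing step might look like the following sketch: a mean-imputation routine that records what was changed and why. This is purely illustrative, not a regulatory template; the function name and audit-record fields are assumptions made for this example.

```python
# Illustrative sketch: mean imputation with an audit record, so the data
# processing is documented (method, what was filled, where). The names
# here are assumptions, not part of the EMA paper.

def impute_with_audit(values):
    """Replace None entries with the mean of observed values.

    Returns the cleaned list plus an audit record of the imputation.
    """
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    audit = {
        "method": "mean imputation",
        "n_imputed": values.count(None),
        "imputed_indices": [i for i, v in enumerate(values) if v is None],
        "fill_value": mean,
    }
    cleaned = [mean if v is None else v for v in values]
    return cleaned, audit

cleaned, audit = impute_with_audit([1.0, None, 3.0])
# cleaned -> [1.0, 2.0, 3.0]; the audit records one imputed value at index 1
```

Keeping the audit record alongside the cleaned dataset is what makes the transformation reviewable later, which is the spirit of the good data practices the paper describes.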

Ethical concerns about the use of AI

The EMA reflection paper is careful to describe good ethics around AI, as presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI). Those guidelines include:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Accountability
  • Societal and environmental well-being
  • Diversity, non-discrimination, and fairness

AI and ML show great promise for “enhancing all phases of the medicinal product lifecycle.” That’s why it is important to develop standard operating procedures and best practices around the uses of the tools. Given the data-driven nature of the tools, users must be proactive about removing bias in AI/ML applications. Adhering to legal requirements is expected and essential, along with following ethical guidelines.


These EMA reflections present a cautious step toward embracing this new set of tools. And collecting your data electronically is a good beginning for using AI - learn how Castor helps streamline data for your studies here.

