AI in Pharma: Key Regulatory Developments
Chandramouli R
Global Technical Enablement Engineer at JMP | Driving Innovation in Pharma, Healthcare, and Life Sciences through Advanced Data Solutions
The rapid integration of Artificial Intelligence (AI) in the pharmaceutical industry has necessitated a parallel evolution in regulatory frameworks across global jurisdictions. Between 2020 and 2025, regulatory bodies such as the U.S. FDA, the European Medicines Agency (EMA), and China’s National Medical Products Administration (NMPA) have taken significant steps to establish AI governance tailored to drug development, clinical trials, manufacturing, and pharmacovigilance. While AI promises to enhance efficiency and innovation, regulators have focused on ensuring transparency, reliability, and accountability. The FDA’s 2025 draft guidance emphasizes a risk-based credibility assessment framework, while the EU’s AI Act has set stringent compliance measures for high-risk AI applications in healthcare. Countries like the UK, Canada, Japan, and India have also made strides, aligning with international best practices or formulating country-specific ethical guidelines. Despite efforts toward harmonization, regulatory variations exist at national and sub-national levels, particularly in the U.S., where state-level AI laws are emerging alongside federal oversight.
The FDA has increasingly engaged with AI as its use in drug R&D has expanded. The Center for Drug Evaluation and Research (CDER) reported a sharp rise in drug submissions incorporating AI across nonclinical, clinical, manufacturing, and post-market phases. In response, FDA released a landmark draft guidance in January 2025 titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” This draft guidance outlines a risk-based “credibility assessment framework” for AI models used in regulatory submissions. Under this framework, sponsors are expected to demonstrate that an AI model is “fit for purpose” for its intended context of use through rigorous validation, addressing risks around bias, reliability, and transparency. This FDA initiative was informed by a 2022 expert workshop and a 2023 discussion paper that gathered extensive public feedback on AI in drug development. Notably, FDA emphasizes that AI-driven evidence (e.g., computational analyses supporting efficacy or safety) must meet the same standard of evidence as traditional approaches, with appropriate validation and documentation.
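To make the “fit for purpose” idea concrete, here is a minimal sketch of what a pre-specified validation step might look like in code. Everything in it is an assumption for illustration: the hypothetical context of use, the synthetic data, and the 0.80 acceptance threshold are not drawn from the FDA guidance.

```python
# Illustrative sketch only: a pre-specified "fit for purpose" check for a
# hypothetical binary classifier, in the spirit of a risk-based credibility
# assessment. Data, metrics, and the 0.80 threshold are all assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in synthetic data; a real submission would use the study dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_dev, X_ind, y_dev, y_ind = train_test_split(X, y, test_size=0.3, random_state=1)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_dev, y_dev)

# Evaluate on data held out from all model development (a real credibility
# assessment would require genuinely independent data).
proba = model.predict_proba(X_ind)[:, 1]
report = {
    "context_of_use": "hypothetical: triage of safety signals for human review",
    "auc": round(roc_auc_score(y_ind, proba), 3),
    "brier": round(brier_score_loss(y_ind, proba), 3),
    "n_independent": len(y_ind),
}
report["fit_for_purpose"] = report["auc"] >= 0.80  # illustrative acceptance criterion
print(report)
```

The point of the sketch is the shape of the exercise: the acceptance criterion is declared before evaluation, and the report documents the intended context of use alongside the evidence.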
Beyond drug development, FDA has also pursued AI oversight in related areas: for example, it published principles of Good Machine Learning Practice (GMLP) for medical AI in collaboration with Health Canada and the UK MHRA in 2021. These GMLP principles – ten high-level guidelines – encourage best practices such as using representative datasets, ensuring model robustness, monitoring performance post-deployment, and facilitating human-AI team effectiveness. While initially framed for medical devices, GMLP has influenced AI quality expectations in pharma as well. By 2024, FDA had launched internal programs to apply AI in pharmacovigilance (drug safety surveillance), even as it develops external guidance. The 2020–2025 timeline saw the FDA shifting from observation to active guidance: recognizing industry’s growing AI adoption and setting out preliminary rules-of-the-road for AI in drug discovery, clinical trial conduct, and regulatory submissions.
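One GMLP principle – monitoring performance after deployment – translates directly into code. Below is a minimal, illustrative drift check using the population stability index (PSI) on a single model input; the 0.2 alert threshold is a common industry rule of thumb, not a regulatory requirement.

```python
import numpy as np

def psi(baseline, live, bins=10):
    """Population Stability Index between training-time and production data.
    Heuristic reading: PSI > 0.2 suggests meaningful drift (rule of thumb,
    not a regulatory threshold)."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    live = np.clip(live, edges[0], edges[-1])      # out-of-range values -> outer bins
    b = np.histogram(baseline, edges)[0] / len(baseline)
    l = np.histogram(live, edges)[0] / len(live)
    b, l = np.clip(b, 1e-6, None), np.clip(l, 1e-6, None)  # avoid log(0)
    return float(np.sum((l - b) * np.log(l / b)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # feature distribution at training time
live = rng.normal(0.4, 1.1, 1000)       # same feature observed in production
if psi(baseline, live) > 0.2:
    print("drift detected: trigger model review per monitoring plan")
```

A production monitoring plan would track many features and the model’s outputs over time, but the principle is the same: a pre-agreed statistic, a pre-agreed threshold, and a documented action when it trips.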
European Union (EU – EMA and AI Act): Europe has moved toward comprehensive AI regulation that extends into pharmaceuticals. The EU’s proposed Artificial Intelligence Act (AIA), first unveiled in April 2021, reached political agreement by late 2023 and began entering into force in 2024. This horizontal legislation establishes a risk-tiered regulatory framework for AI across all industries. In the context of pharma and healthcare, any AI system used in clinical care or drug development is likely deemed “high-risk,” requiring strict compliance. The AI Act mandates measures such as rigorous risk assessments, high-quality training data free of bias, transparency to users, human oversight of AI outputs, and continual monitoring for high-risk AI systems. For example, an AI algorithm used as part of a drug’s safety monitoring or as a clinical decision support tool must have a documented risk mitigation plan, logging of its operations, and a human in the loop for critical decisions.
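To make the logging and human-oversight requirements concrete, here is a hedged sketch of a decision-support wrapper: every model call is written to an audit log, and low-confidence outputs are escalated to a human reviewer. The confidence threshold and routing logic are illustrative assumptions, not a compliance recipe for the AI Act.

```python
import json, logging, time

logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

CONFIDENCE_FLOOR = 0.85  # illustrative threshold for automatic handling

def decide(case_id: str, predict) -> str:
    """Wrap a model call with audit logging and human escalation."""
    label, confidence = predict(case_id)   # model returns (label, confidence)
    record = {"ts": time.time(), "case": case_id,
              "label": label, "confidence": confidence}
    if confidence < CONFIDENCE_FLOOR:
        record["route"] = "human_review"       # human in the loop for uncertain calls
    else:
        record["route"] = "auto_with_oversight"
    logging.info(json.dumps(record))           # append-only audit trail of operations
    return record["route"]

# Usage with a stub model standing in for the real predictor:
route = decide("case-001", lambda cid: ("signal", 0.72))
print(route)  # -> human_review
```

The design choice worth noting is that logging and escalation live outside the model itself, so the oversight behavior can be reviewed and versioned independently of the algorithm.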
The Act also prohibits “unacceptable risk” AI (such as social scoring applications) outright. While the AI Act is broad, the European Medicines Agency (EMA) has been simultaneously honing AI guidance specific to medicinal products. In 2023, the EMA published a draft Reflection Paper on the use of AI in the medicinal product life cycle, which was finalized in September 2024. This reflection paper outlines principles for integrating AI into drug discovery, clinical trials, manufacturing, and pharmacovigilance. It adopts a risk-based approach similar to FDA’s, insisting that marketing authorization applicants ensure any AI tools are “fit for purpose” and comply with existing GxP (good practice) quality standards. EMA recommends early dialogue with regulators for higher-risk AI applications and encourages “explainable AI” whenever feasible. Notably, the EMA paper allows that “black box” ML models may be acceptable only with robust scientific justification and thorough prospective testing (e.g., validating the model on independent data for late-stage clinical use). The EMA also stresses that AI does not absolve the manufacturer of responsibility – firms must maintain oversight of their AI’s performance throughout the product lifecycle, treating the AI like any critical process that falls under regulatory scrutiny.
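Even for black-box models, model-agnostic tools can supply some of the explainability the EMA encourages. The sketch below uses scikit-learn’s permutation importance, measured on held-out data in keeping with the paper’s emphasis on independent testing; it is a simple illustration, not an EMA-endorsed method.

```python
# Illustrative sketch: model-agnostic explanation of a black-box classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Importance is measured on held-out data, echoing the emphasis on
# prospective testing with independent data rather than training-set fit.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=2)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```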
In addition to the AI Act and EMA’s guidance, the EU has leveraged existing frameworks such as GDPR (for data privacy in AI training data) and published ethics guidelines for “Trustworthy AI” – these have indirectly shaped pharmaceutical AI projects by emphasizing transparency, accountability, and bias prevention. By 2025, the EU’s dual approach – a sweeping AI Act and sector-specific guidances – positions Europe as a frontrunner in codifying AI requirements in pharma, from drug development through post-market surveillance.
United Kingdom (MHRA): Post-Brexit, the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) has aligned closely with the FDA and EMA on AI principles. The MHRA co-authored the 2021 GMLP guiding principles mentioned above and has signaled that UK drug regulators will incorporate these principles into review processes. The UK government in 2023 also released an AI regulation white paper advocating a pro-innovation, light-touch approach (in contrast to the EU’s statutory Act), relying on existing regulators like MHRA to issue context-specific guidance. Thus, MHRA’s stance has been to update drug and device guidelines to address AI as needed rather than create a new AI law. In practice, MHRA has been investing in internal AI expertise and participating in international working groups to harmonize standards.
Canada: Similarly, Health Canada has partnered with FDA/MHRA on AI principles and is exploring requirements for AI in clinical trials and drug submissions. Although Canada and the UK have not issued standalone pharma-AI regulations as of 2025, both are contributing to global frameworks (like the International Council for Harmonisation) and piloting oversight of AI in areas such as algorithm-driven diagnostic tests, which in turn informs drug development use cases.
Japan (PMDA): The Pharmaceuticals and Medical Devices Agency (PMDA) of Japan has approached AI in pharma through guidance and collaboration. PMDA officials have actively engaged in international discussions (e.g. the International Coalition of Medicines Regulatory Authorities (ICMRA) and ICH) about AI’s impact on regulatory science. While PMDA has not yet issued a formal AI guidance for drug development, it has released points to consider for AI/ML in medical devices and has been studying how AI can support regulatory reviews. In late 2023, PMDA echoed the EMA’s AI workplan (2023–2028) and acknowledged the need for alignment in principles. Domestically, Japan’s Ministry of Health has emphasized ensuring AI does not compromise the established standards of quality (akin to Japan’s tradition of device approval emphasizing safety and efficacy). PMDA is expected to incorporate AI validation expectations into existing frameworks such as software validation standards and GCP for trials. In sum, Japan’s regulatory trend is cautious optimism – encouraging AI-driven innovation (as part of the government’s Society 5.0 agenda) but within the boundaries of proven reliability and international norms.
China (NMPA): China’s National Medical Products Administration (NMPA) has rapidly expanded regulation of AI in healthcare, especially for medical devices and diagnostics. By 2023, China had approved over 50 AI-driven medical devices (e.g. AI imaging diagnostic software) under special pathways. For pharmaceuticals, the NMPA in 2021–2025 focused on modernizing its regulatory framework to accommodate digital tools. In 2022, China issued draft guidelines for AI-based medical software, classifying them by risk and outlining documentation needed for approvals.
Although these guidelines target devices, an AI algorithm used in a drug clinical trial would likely fall under similar scrutiny. Importantly, China introduced overarching regulations that impact pharma AI: the Data Security Law (2021) and Personal Information Protection Law (2021) impose strict controls on health data used for AI model training, and new rules (2022) require registration of algorithms that provide “public opinion or personalized recommendations.” In 2023, Chinese authorities (the Cyberspace Administration of China, CAC) even released interim measures on generative AI. All of this means that pharma companies in China must navigate both NMPA’s product-specific requirements and broader AI governance rules. The NMPA is also encouraging AI in drug R&D through initiatives like an “AI Medical Innovation development platform,” which summarized key AI guidelines and promoted standardization. By 2025, China’s regulatory landscape for AI in pharma can be characterized as government-steered: pushing the frontier of AI usage (with state-funded AI drug discovery projects and fast-track approvals of AI tools) while simultaneously enforcing state control via algorithm regulations and data laws.
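In practice, these data laws push sponsors to de-identify records before they enter an AI training pipeline. The sketch below shows one common mitigation – replacing a direct identifier with a salted one-way hash; it is illustrative only and would not by itself satisfy PIPL or Data Security Law requirements.

```python
# Illustrative sketch: pseudonymizing a direct identifier before model training.
# A real PIPL-compliant pipeline involves far more (consent, minimization,
# localization); this shows only the identifier-replacement step.
import hashlib, os

SALT = os.urandom(16)  # must be stored separately from the data, under access control

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:16]

record = {"patient_id": "CN-000123", "age": 54, "adverse_event": "rash"}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)  # identifier is no longer directly traceable without the salt
```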
Comparative Summary
United States (FDA): Risk-based credibility assessment framework (January 2025 draft guidance) and GMLP principles; federal guidance coexists with emerging state-level AI laws.
European Union (EMA/AI Act): Statutory, risk-tiered AI Act plus a finalized EMA reflection paper; “fit for purpose” validation and GxP compliance across the product life cycle.
United Kingdom (MHRA): Pro-innovation, light-touch approach; GMLP co-author; updates existing guidelines rather than enacting a new AI law.
Canada (Health Canada): GMLP partner; exploring requirements for AI in clinical trials and drug submissions; no standalone pharma-AI regulation as of 2025.
Japan (PMDA): Cautious optimism; aligns with international discussions (ICMRA, ICH); folds AI validation into existing software validation and GCP frameworks.
China (NMPA): Government-steered; fast-track approvals of AI tools alongside strict data security, privacy, and algorithm registration rules.
Conclusion
The regulatory landscape for AI in pharma is still evolving, but the foundations laid between 2020 and 2025 indicate a shift from passive observation to proactive governance. With initiatives like Good Machine Learning Practice (GMLP) and AI validation frameworks, agencies aim to strike a balance between fostering AI-driven innovation and safeguarding patient safety. While the EU adopts a centralized, statutory approach, the U.S. navigates a complex interplay of federal and state regulations. China, Japan, and India, meanwhile, integrate AI oversight within existing frameworks, emphasizing ethical considerations and data security. These regulatory developments underscore the need for pharmaceutical companies to adopt a flexible, jurisdiction-specific compliance strategy to navigate the global AI landscape effectively.
Disclaimer: The views expressed in this article are personal and do not represent the official stance of any governmental or industrial stakeholders, the author’s employer, or any other official organization. Regulatory frameworks for AI in pharmaceuticals are constantly evolving, and readers are strongly encouraged to refer to the latest regulations, guidance documents, and amendments at the time of reading. This article is for informational purposes only and does not constitute official advice or regulatory guidance. The content has been compiled from publicly available materials, and while every effort has been made to ensure accuracy, the author is not liable for any factual errors, inconsistencies, or omissions.