AI in Pharma: Key Regulatory Developments

The rapid integration of Artificial Intelligence (AI) in the pharmaceutical industry has necessitated a parallel evolution in regulatory frameworks across global jurisdictions. Between 2020 and 2025, regulatory bodies such as the U.S. FDA, the European Medicines Agency (EMA), and China’s National Medical Products Administration (NMPA) have taken significant steps to establish AI governance tailored to drug development, clinical trials, manufacturing, and pharmacovigilance. While AI promises to enhance efficiency and innovation, regulators have focused on ensuring transparency, reliability, and accountability. The FDA’s 2025 draft guidance emphasizes a risk-based credibility assessment framework, while the EU’s AI Act has set stringent compliance measures for high-risk AI applications in healthcare. Countries like the UK, Canada, Japan, and India have also made strides, aligning with international best practices or formulating country-specific ethical guidelines. Despite efforts toward harmonization, regulatory variations exist at national and sub-national levels, particularly in the U.S., where state-level AI laws are emerging alongside federal oversight.

The FDA has increasingly engaged with AI as its use in drug R&D has expanded. The Center for Drug Evaluation and Research (CDER) reported a sharp rise in drug submissions incorporating AI across nonclinical, clinical, manufacturing, and post-market phases. In response, FDA released a landmark draft guidance in January 2025 titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” The draft guidance outlines a risk-based “credibility assessment framework” for AI models used in regulatory submissions: sponsors are expected to demonstrate that an AI model is “fit for purpose” for its intended context of use through rigorous validation, addressing risks around bias, reliability, and transparency. This FDA initiative was informed by a 2022 expert workshop and a 2023 discussion paper that gathered extensive public feedback on AI in drug development. Notably, FDA emphasizes that AI-driven evidence (e.g. computational analyses supporting efficacy or safety) must meet the same evidentiary standard as traditional approaches, with appropriate validation and documentation.
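The draft guidance frames an AI model’s risk in terms of how much its output influences the regulatory decision and how consequential an error in that decision would be. As a purely illustrative sketch of how a sponsor might triage its own models along those two axes – the enum scales, the 3×3 scoring rule, and the tier labels below are assumptions for this example, not terms defined in the guidance:

```python
from enum import IntEnum
from dataclasses import dataclass

class ModelInfluence(IntEnum):
    """How much the AI output contributes to the decision (hypothetical scale)."""
    SUPPORTIVE = 1      # one of several independent lines of evidence
    SUBSTANTIAL = 2     # primary evidence, partially corroborated
    DETERMINATIVE = 3   # decision rests largely on the model output

class DecisionConsequence(IntEnum):
    """Impact if the AI-informed decision is wrong (hypothetical scale)."""
    MINOR = 1           # e.g. internal prioritization only
    MODERATE = 2        # e.g. affects trial design choices
    SEVERE = 3          # e.g. directly affects patient safety or dosing

@dataclass
class ContextOfUse:
    description: str
    influence: ModelInfluence
    consequence: DecisionConsequence

def model_risk_tier(cou: ContextOfUse) -> str:
    """Map influence x consequence to a risk tier driving validation depth.

    The two axes mirror the draft guidance's framing of model risk; the
    scoring rule and tier labels are illustrative assumptions.
    """
    score = cou.influence * cou.consequence
    if score >= 6:
        return "high"    # expect prospective validation, full transparency docs
    if score >= 3:
        return "medium"  # expect independent test data, bias analysis
    return "low"         # expect documented verification, basic monitoring

# Example: a model whose output largely drives a patient-facing dosing decision
cou = ContextOfUse(
    description="ML model predicting optimal dose for a pivotal trial",
    influence=ModelInfluence.DETERMINATIVE,
    consequence=DecisionConsequence.SEVERE,
)
print(model_risk_tier(cou))  # -> "high"
```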

Beyond drug development, FDA has also pursued AI oversight in related areas: for example, it published principles of Good Machine Learning Practice (GMLP) for medical AI in collaboration with Health Canada and the UK MHRA in 2021. These GMLP principles – ten high-level guidelines – encourage best practices like using representative datasets, ensuring model robustness, monitoring performance post-deployment, and facilitating human-AI team effectiveness. While initially framed for medical devices, GMLP has influenced AI quality expectations in pharma as well. By 2024, FDA’s drug safety office (the Office of Surveillance and Epidemiology within CDER) had launched programs to use AI internally for pharmacovigilance (drug safety surveillance), even as the agency develops external guidance. The 2020–2025 timeline saw the FDA shift from observation to active guidance: recognizing industry’s growing AI adoption and setting out preliminary rules-of-the-road for AI in drug discovery, clinical trial conduct, and regulatory submissions.
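The monitoring principle in particular translates directly into engineering practice. Below is a minimal sketch of a rolling-window performance check a deployment team might run; GMLP names the goal (monitor deployed models), while the window size, accuracy floor, and class design here are arbitrary assumptions:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window accuracy monitor for a deployed model (illustrative).

    GMLP encourages post-deployment monitoring; the window size and
    alert threshold below are assumptions made for this sketch.
    """
    def __init__(self, window: int = 500, min_accuracy: float = 0.85):
        self.outcomes = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction: int, ground_truth: int) -> None:
        self.outcomes.append(prediction == ground_truth)

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def degraded(self) -> bool:
        """True once a full window shows accuracy below the acceptance floor."""
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.accuracy() < self.min_accuracy)

monitor = PerformanceMonitor(window=3, min_accuracy=0.67)
for pred, truth in [(1, 1), (0, 1), (1, 0)]:  # two misses in a window of 3
    monitor.record(pred, truth)
if monitor.degraded():
    print("Alert: performance below acceptance criteria; trigger model review")
```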

European Union (EU – EMA and AI Act): Europe has moved toward comprehensive AI regulation that extends into pharmaceuticals. The EU’s Artificial Intelligence Act (AIA), first proposed in April 2021, reached political agreement in late 2023 and entered into force in August 2024. This horizontal legislation establishes a risk-tiered regulatory framework for AI across all industries. In the context of pharma and healthcare, any AI system used in clinical care or drug development is likely deemed “high-risk,” requiring strict compliance. The AI Act mandates measures such as rigorous risk assessments, high-quality training data free of bias, transparency to users, human oversight of AI outputs, and continual monitoring for high-risk AI systems. For example, an AI algorithm used as part of a drug’s safety monitoring or as a clinical decision support tool must have a documented risk mitigation plan, logging of its operations, and a human in the loop for critical decisions.
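To make those obligations concrete, here is a minimal sketch of an inference wrapper that logs every model call and flags high-stakes outputs for mandatory human review. The `with_oversight` helper, the 0.8 threshold, and the toy pharmacovigilance classifier are all hypothetical; the Act prescribes outcomes (record-keeping, human oversight), not any particular code pattern:

```python
import json
import logging
from datetime import datetime, timezone
from typing import Callable

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def with_oversight(model: Callable[[dict], float],
                   critical_threshold: float = 0.8):
    """Wrap a model so each call is logged and high-stakes outputs
    are routed to a human reviewer (hypothetical governance policy)."""
    def governed_predict(case: dict) -> dict:
        score = model(case)
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input_id": case.get("id"),
            "model_output": score,
            "needs_human_review": score >= critical_threshold,
        }
        # Record-keeping in the spirit of the Act: persist every operation.
        audit_log.info(json.dumps(record))
        return record
    return governed_predict

# Toy stand-in for a pharmacovigilance signal-detection model.
def toy_signal_model(case: dict) -> float:
    return 0.9 if "hepatotoxicity" in case.get("narrative", "") else 0.2

predict = with_oversight(toy_signal_model)
result = predict({"id": "AE-001", "narrative": "suspected hepatotoxicity"})
assert result["needs_human_review"]  # critical output held for human sign-off
```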

The Act also prohibits “unacceptable risk” AI (such as social scoring applications) outright. While the AI Act is broad, the European Medicines Agency (EMA) has simultaneously been honing AI guidance specific to medicinal products. In 2023, the EMA published a draft Reflection Paper on the use of AI in the medicinal product lifecycle, finalized in September 2024. The reflection paper outlines principles for integrating AI into drug discovery, clinical trials, manufacturing, and pharmacovigilance. It adopts a risk-based approach similar to FDA’s, insisting that marketing authorization applicants ensure any AI tools are “fit for purpose” and comply with existing GxP (good practice) quality standards. EMA recommends early dialogue with regulators for higher-risk AI applications and encourages “explainable AI” whenever feasible. Notably, the paper allows that “black box” ML models may be acceptable only with robust scientific justification and thorough prospective testing (e.g. validating the model on independent data for late-stage clinical use). The EMA also stresses that AI does not absolve the manufacturer of responsibility: firms must maintain oversight of their AI’s performance throughout the product lifecycle, treating the AI like any other critical process subject to regulatory scrutiny.
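That expectation of prospective testing can be read as a conventional locked-model validation: set aside truly independent data before development, predefine an acceptance criterion, then score the frozen model once. A minimal sketch using scikit-learn on synthetic data – the 0.80 AUC threshold and the dataset itself are assumptions for illustration, not EMA requirements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for development data; in practice these would be
# curated clinical datasets with documented provenance.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out an "independent" set before any model development happens.
X_dev, X_indep, y_dev, y_indep = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LogisticRegression().fit(X_dev, y_dev)  # model is now "locked"

# Predefined acceptance criterion (hypothetical): AUC >= 0.80 on independent data.
auc = roc_auc_score(y_indep, model.predict_proba(X_indep)[:, 1])
print(f"Independent-data AUC: {auc:.3f}")
print("PASS" if auc >= 0.80 else "FAIL: not fit for purpose as validated")
```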

In addition to the AI Act and EMA’s guidance, the EU has leveraged existing frameworks such as GDPR (for data privacy in AI training data) and published ethics guidelines for “Trustworthy AI” – these have indirectly shaped pharmaceutical AI projects by emphasizing transparency, accountability, and bias prevention. By 2025, the EU’s dual approach – a sweeping AI Act plus sector-specific guidance – positions Europe as a frontrunner in codifying AI requirements in pharma, from drug development through post-market surveillance.

United Kingdom (MHRA): Post-Brexit, the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) has aligned closely with the FDA and EMA on AI principles. The MHRA co-authored the 2021 GMLP guiding principles mentioned above and has signaled that UK drug regulators will incorporate them into review processes. In 2023 the UK government also released an AI regulation white paper advocating a pro-innovation, light-touch approach (in contrast to the EU’s statutory Act), relying on existing regulators like MHRA to issue context-specific guidance. Thus, MHRA’s stance has been to update drug and device guidelines to address AI as needed rather than to create a new AI law. In practice, MHRA has been investing in internal AI expertise and participating in international working groups to harmonize standards.

Canada: Similarly, Health Canada has partnered with FDA and MHRA on AI principles and is exploring requirements for AI in clinical trials and drug submissions. Although Canada and the UK have not issued standalone pharma-AI regulations as of 2025, both are contributing to global frameworks (such as the International Council for Harmonisation) and piloting oversight of AI in areas such as algorithm-driven diagnostic tests, which in turn informs drug development use cases.

Japan (PMDA): The Pharmaceuticals and Medical Devices Agency (PMDA) of Japan has approached AI in pharma through guidance and collaboration. PMDA officials have actively engaged in international discussions (e.g. the International Coalition of Medicines Regulatory Authorities (ICMRA) and ICH) about AI’s impact on regulatory science. While PMDA has not yet issued a formal AI guidance for drug development, it has released points to consider for AI/ML in medical devices and has been studying how AI can support regulatory reviews. In late 2023, PMDA acknowledged the EMA–HMA AI workplan (2023–2028) and the need for alignment on principles. Domestically, Japan’s Ministry of Health has emphasized that AI must not compromise established quality standards, in keeping with Japan’s tradition of approvals emphasizing safety and efficacy. PMDA is expected to fold AI validation expectations into existing frameworks such as software validation standards and GCP for trials. In sum, Japan’s regulatory posture is one of cautious optimism: encouraging AI-driven innovation (as part of the government’s Society 5.0 agenda) within the boundaries of proven reliability and international norms.

China (NMPA): China’s National Medical Products Administration (NMPA) has rapidly expanded regulation of AI in healthcare, especially for medical devices and diagnostics. By 2023, China had approved over 50 AI-driven medical devices (e.g. AI imaging diagnostic software) under special pathways. For pharmaceuticals, the NMPA in 2021–2025 focused on modernizing its regulatory framework to accommodate digital tools. In 2022, China issued draft guidelines for AI-based medical software, classifying them by risk and outlining documentation needed for approvals.

Although these guidelines target devices, an AI algorithm used in a drug clinical trial would likely fall under similar scrutiny. Importantly, China has introduced overarching regulations that affect pharma AI: the Data Security Law (2021) and the Personal Information Protection Law (2021) impose strict controls on health data used for AI model training, and 2022 rules require registration of algorithms that provide “public opinion or personalized recommendations.” In 2023, the Cyberspace Administration of China (CAC) released interim measures on generative AI. Together, these mean that pharma companies in China must navigate both NMPA’s product-specific requirements and broader AI governance rules. The NMPA is also encouraging AI in drug R&D through initiatives such as an “AI Medical Innovation development platform,” which has summarized key AI guidelines and promoted standardization. By 2025, China’s regulatory landscape for AI in pharma can be characterized as government-steered: pushing the frontier of AI usage (with state-funded AI drug discovery projects and fast-track approvals of AI tools) while enforcing state control via algorithm regulations and data laws.

Comparative Summary

Across jurisdictions, the instruments differ but the direction is consistent. The U.S. FDA is proceeding through guidance (the 2025 draft credibility framework and the GMLP principles) layered on existing drug law, with emerging state statutes adding a second tier of obligations. The EU pairs a binding, horizontal AI Act with EMA’s product-specific reflection paper. The UK and Canada favor regulator-led, principles-based oversight aligned with the FDA through GMLP. Japan’s PMDA works within existing validation and GCP frameworks while tracking international workplans, and China couples fast-track approval of AI tools with strict data and algorithm controls. The common thread is a risk-based, fit-for-purpose expectation: the greater an AI system’s influence on patient-facing decisions, the heavier the validation, transparency, and human-oversight requirements.

Conclusion

The regulatory landscape for AI in pharma is still evolving, but the foundations laid between 2020 and 2025 indicate a shift from passive observation to proactive governance. With initiatives like Good Machine Learning Practice (GMLP) and AI validation frameworks, agencies aim to strike a balance between fostering AI-driven innovation and safeguarding patient safety. While the EU adopts a centralized, statutory approach, the U.S. navigates a complex interplay of federal and state regulations. China, Japan, and India, meanwhile, integrate AI oversight within existing frameworks, emphasizing ethical considerations and data security. These regulatory developments underscore the need for pharmaceutical companies to adopt a flexible, jurisdiction-specific compliance strategy to navigate the global AI landscape effectively.


Key References and Reading Notes

1. FDA (2025). Draft Guidance: Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. U.S. Food and Drug Administration, Jan 2025. Provides a risk-based framework for establishing AI model credibility in drug/biologics submissions.
2. FDA (2023). CDER Discussion Paper on AI/ML in Drug Development. U.S. FDA, May 2023 (revised Feb 2025). Outlines FDA’s observations on AI use cases across 500+ drug lifecycle submissions and sought public feedback for guidance development.
3. FDA/Health Canada/MHRA (2021). Ten Guiding Principles for Good Machine Learning Practice (GMLP). Joint statement by FDA, Health Canada, and UK MHRA, Oct 2021. Emphasizes data quality, multidisciplinary design, transparency, and monitoring for AI in medical devices; principles now influencing pharma AI practices.
4. European Commission (2024). EU Artificial Intelligence Act Enters into Force. European Commission News, 1 Aug 2024. Describes the AI Act’s risk-tier system and obligations for high-risk AI (e.g. medical software must have risk mitigation, human oversight, etc.).
5. EMA (2023). Draft Reflection Paper on the Use of AI in the Medicinal Product Life Cycle. European Medicines Agency, July 2023 (finalized Sept 2024). Advocates a risk-based approach for AI in drug development with requirements to meet GxP, ensure explainability where possible, and validate high-risk “black box” models on independent data.
6. MHRA (2021). GMLP Guiding Principles Press Release. UK Medicines and Healthcare products Regulatory Agency, Oct 2021. Announced collaboration with FDA and Health Canada on GMLP for AI/ML medical devices, signaling the UK’s commitment to aligned AI oversight.
7. PMDA Updates (2023). PMDA Communication on EMA AI Workplan. Pharmaceuticals and Medical Devices Agency (Japan) News, Dec 2023. Notes that EMA and HMA published an AI workplan to 2028, indicating PMDA’s awareness and likely adoption of similar principles.
8. NMPA/China (2022). Regulatory Frameworks for AI-Enabled Medical Devices. Comprehensive review in NPJ Digital Medicine (Chen et al., 2023). Details China’s approvals of AI medical devices and outlines guidelines for classification, registration, and evaluation of AI software, which inform pharma AI tools.
9. ICMR (2023). Ethical Guidelines for Application of AI in Biomedical Research and Healthcare. Indian Council of Medical Research, Jan 2023. Provides 10 ethical principles (safety, risk minimization, accountability, data privacy, etc.) for AI in health – a de facto benchmark in India pending formal regulations.
10. NCSL (2024). Artificial Intelligence and Health Care: State Policy Primer. National Conference of State Legislatures, Nov 2024 update. Summarizes U.S. state legislation on health AI, including themes of bias prevention, patient notification, oversight, and examples like California AB 3030 and Oklahoma HB 3577.
11. Sidley Austin (2023). “Pharmacovigilance Must Ready Itself for AI.” Eva von Mühlenen, Sidley Insights, Jan 30, 2023. Discusses how pharma companies are using AI in drug safety, the regulatory expectations (validation, explainability, integration into PV quality systems), and open questions (data bias, compliance with good PV practice).
12. Pharmaceutical Executive (2024). “Regulatory Concerns for AI: 2024 Trends.” Mike Hollan, PharmExec, Jan 17, 2024. Features interviews with industry experts about the anticipated regulatory landscape: expectation of FDA guidance, need for limits on AI in clinical judgment, privacy/cybersecurity considerations, and continued investment in AI by big pharma.
13. GSK (2024). “Our Position on Responsible AI” (Public Policy Position). GlaxoSmithKline, Feb 2024. Details GSK’s internal AI governance framework, including a cross-functional AI council and five AI principles (ethical innovation, privacy, robustness, fairness, transparency), aligning with evolving regulatory standards.
14. WHO (2021). “Ethics and Governance of Artificial Intelligence for Health.” World Health Organization Guidance, Oct 2021. Identifies ethical challenges of health AI and sets out six core principles (e.g. protect autonomy, ensure transparency, promote equity) to guide governments and stakeholders in AI deployment.
15. Petrie-Flom Center (2023). “How AI is Revolutionizing Drug Discovery.” Matthew Chun, Petrie-Flom (Harvard Law) Blog, Mar 20, 2023. Chronicles key milestones in AI-driven drug R&D: Exscientia’s AI-designed molecule (2020), DeepMind’s AlphaFold protein breakthroughs (2021), and Insilico’s AI-discovered drug entering Phase I (2022) and receiving FDA Orphan Drug Designation (2023). Highlights regulatory acceptance of these AI-enabled achievements.
16. EU Commission & PharmaManufacturing (2024). Overview of the EU AI Act for Pharma. Ellie Gabel, PharmaManufacturing.com, Mar 11, 2024. Explains how the EU AI Act applies to pharma (four-tier risk framework) and notes that high-risk AI users (including pharma applications) must implement risk mitigations, human oversight, and data logging, with a two-year compliance window once the Act is effective.
17. RAPS – Regulatory Focus (2024). “FDA Modernizing Pharmacovigilance Oversight with AI Tools.” Joanne Eglovitch, Regulatory Focus News, Feb 2024. Reports that FDA’s drug safety unit (OSE) is piloting AI to analyze adverse event reports and planning standardization of AI in drug safety monitoring, demonstrating regulators’ own use of AI to enhance post-market surveillance.
18. Covington Digital Health (2021). “10 Guiding Principles for GMLP for Medical Devices.” Sarah Cowlishaw et al., Oct 29, 2021. Describes the joint release of GMLP principles by FDA, MHRA, and Health Canada, covering the AI/ML product lifecycle from design to post-market and signaling priorities such as representativeness of data, human-AI team performance, and continuous monitoring – principles now echoing in pharma AI guidelines.
19. NCSL (2023). “State Legislation on Artificial Intelligence (2023).” NCSL AI Legislative Tracker, 2023. Provides examples of state-level approaches such as requiring AI impact studies, mandating patient notification for AI use in health (citing California’s proposal to disclose use of generative AI in patient communications), and proposals for bias audits in insurance algorithms (Oklahoma).
20. Insight: Expert Podcast (2023). “Ethical AI in Pharma: How Novartis Leads…” Hogan Lovells Podcast, 2023. Features Novartis leadership discussing their ethical AI framework and compliance-by-design strategy, noting proactive collaboration with regulators (sharing AI validation data) and the challenges of implementing AI in regulated clinical trial environments.


Disclaimer: The views expressed in this article are personal and do not represent the official stance of any governmental or industrial stakeholders, the author’s employer, or any other official organization. Regulatory frameworks for AI in pharmaceuticals are constantly evolving, and readers are strongly encouraged to refer to the latest regulations, guidance documents, and amendments at the time of reading. This article is for informational purposes only and does not constitute official advice or regulatory guidance. The content has been compiled from publicly available materials, and while every effort has been made to ensure accuracy, the author is not liable for any factual errors, inconsistencies, or omissions.


