Generative AI and how we can harness its power in clinical development
By Thomas Pietsch, Global Head of Scientific Data Technology and AI
Between 2016 and 2020, the FDA’s Center for Drug Evaluation and Research (CDER) received only a handful of submissions that included artificial intelligence (AI) or machine learning (ML) — just one submission in 2016, and fourteen in 2020. But in 2021, the number of AI-inclusive submissions swelled to 132.
Of course, as the FDA itself reports, this almost certainly underestimates the true extent of AI use in the life sciences sector, since applications extend far beyond NDA and BLA submissions. And this remarkable leap happened about a year before the emergence of ChatGPT, which has further accelerated the era of generative AI.
This is truly a watershed moment for everyone with an interest in data and technology, many of whom believe that AI has the potential to change the accepted paradigm for drug discovery and development.
Increasing awareness of AI
AI is not a new technology. Its origins lie in the 1950s, and interest in AI innovation has grown steadily ever since. Within life sciences, we’ve seen it used extensively in target discovery, lead optimization and the development of companion diagnostics.
The emergence of generative AI as an open, foundational technology has lowered the barrier to development of new applications that leverage readily accessible large language models (LLMs) as well as audio- and image-based models. This has caused a reset in thinking about how AI can be applied. It has also unquestionably raised the profile of AI among legislators, regulators and the general public.
What is AI?
AI includes several different but related elements. As the figure (below) shows, machine learning is a type of AI. Natural language processing (NLP), which is the ability of a computer to interpret text, may rely upon ML. Generative AI, as the name suggests, can process (i.e., interpret) text; but it can also create new text, allowing applications like ChatGPT to reply to user queries with human-like responses. Large language models are generative AI technologies that have been trained on massive amounts of text information, such as the entire publicly visible internet.
AI in clinical development
There’s still much to learn about how to get the best from LLMs, but it’s already evident that they can be especially useful to drug developers in three broad areas.
Search-and-retrieve. By leveraging proprietary information and data, AI can help businesses quickly provide their staff with the contextualized information they need to perform their roles. For instance, a suitably engineered app can answer questions about business processes by retrieving the details of a standard operating procedure (SOP) document or process map. The ability of the technology to “chat” with users also allows them to ask follow-up questions, which the technology will recognize as part of a continuing dialogue.
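To make this concrete, here is a minimal sketch of how a search-and-retrieve app of this kind might work, written in Python against the OpenAI client. The model names, SOP passages and questions are illustrative assumptions for the example, not a description of any production system:

```python
# Minimal retrieval-augmented Q&A sketch: embed SOP passages, find the
# most relevant one for a question, and let an LLM answer from it while
# keeping chat history so follow-up questions stay in context.
import numpy as np
from openai import OpenAI  # pip install openai numpy

client = OpenAI()

# Placeholder SOP passages; a real system would index whole documents.
sop_passages = [
    "SOP-012: Protocol deviations must be logged within 24 hours ...",
    "SOP-034: Site monitoring visits are scheduled every six weeks ...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

passage_vecs = embed(sop_passages)
history = [{"role": "system",
            "content": "Answer using only the SOP excerpt provided."}]

def ask(question):
    # Cosine similarity between the question and each SOP passage.
    q = embed([question])[0]
    sims = passage_vecs @ q / (
        np.linalg.norm(passage_vecs, axis=1) * np.linalg.norm(q))
    context = sop_passages[int(np.argmax(sims))]
    history.append({"role": "user",
                    "content": f"SOP excerpt:\n{context}\n\nQuestion: {question}"})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # enables follow-ups
    return answer

print(ask("How quickly must a protocol deviation be logged?"))
print(ask("And where is that deviation recorded?"))  # follow-up reuses history
```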
Content generation. LLMs are perhaps best known for this capability and can provide users with good first drafts of original content by transforming existing text. For instance, medical writers may ask an LLM to change the tense of a section from a clinical study protocol, making it more suitable for inclusion in a clinical study report. While this computer-generated text will need review and editing, the opportunities for timesaving and avoidance of cut-and-paste errors are clear. LLMs can also offer style transformation, recreating a document in a new tone or authorial voice to make it more conversational, for example.
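A minimal sketch of this kind of tense transformation follows, again assuming the OpenAI Python client and an illustrative model name; the protocol sentence is invented for the example:

```python
# Sketch: ask an LLM to convert future-tense protocol text into the
# past tense expected in a clinical study report (CSR). The output is
# a first draft only; a medical writer still reviews and edits it.
from openai import OpenAI

client = OpenAI()

protocol_text = (
    "Subjects will be randomized 1:1 to treatment or placebo and "
    "will attend study visits every four weeks."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Rewrite clinical protocol text in the past tense for a "
                    "clinical study report. Preserve the meaning exactly; "
                    "change nothing else."},
        {"role": "user", "content": protocol_text},
    ],
)
print(resp.choices[0].message.content)
# e.g. "Subjects were randomized 1:1 to treatment or placebo and
#       attended study visits every four weeks."
```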
Workflow automation. By embedding LLM technology in workflows, users have the potential to significantly speed up important, high-volume tasks. For example, while it isn’t new to use AI tools for safety and pharmacovigilance (PV) case processing, LLMs can further enhance this process. Likewise, AI can support PV literature reviews and expectedness processing, helping reviewers to better categorize adverse events. With access to the right data, LLMs can also support study feasibility assessment by shortlisting sites and investigators based on experience, capabilities and performance history.
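As an illustration of embedding an LLM in such a workflow, the sketch below labels adverse event narratives against a list of expected events. The event list, prompt and model name are assumptions made for the example, and every label would still be confirmed by a human reviewer:

```python
# Sketch: an LLM call embedded in a PV triage workflow. The model
# pre-labels each adverse event narrative as EXPECTED or UNEXPECTED
# relative to a reference list; a human reviewer confirms every label.
from openai import OpenAI

client = OpenAI()

expected_events = ["headache", "nausea", "injection-site reaction"]  # illustrative
cases = [
    "Patient reported severe headache lasting two days after dosing.",
    "Patient experienced transient visual disturbance on day 3.",
]

def triage(narrative):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You pre-screen adverse event narratives. Reply with "
                        "exactly one word, EXPECTED or UNEXPECTED, given this "
                        f"list of expected events: {', '.join(expected_events)}."},
            {"role": "user", "content": narrative},
        ],
    )
    return resp.choices[0].message.content.strip()

for case in cases:
    print(triage(case), "->", case)  # queued for human review, not auto-filed
```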
AI at Parexel
For the last few years, Parexel has been investing in the development of AI solutions for clinical development. This includes the 2020 acquisition of a health care software startup specializing in NLP-based AI. Today, that group is the heart of a large and still growing team that aims to create efficiency gains, drive productivity and enable staff to focus their attention on high-value activities. With the emergence of LLMs, we have doubled down on our investment in AI, pursuing multiple opportunities with the potential to unlock significant value.
As we do so, we’re also building an ecosystem of partners and collaborators, enabling access to a state-of-the-art data science platform and associated toolkit, as well as additional capacity and complementary capabilities through our alliance with data and AI specialist Partex. We’re also continuing to grow and optimize our base infrastructure.
Most recently, we released a company-wide chat-based AI technology, ParexelGPT™. In its first six weeks, this secure app processed more than 100,000 user requests, with uptake still accelerating.
Building on ParexelGPT as our foundational chat-based technology, we have a roadmap that aims to deliver one or two targeted solutions each quarter. These solutions will help enhance efficiency and maintain or enhance quality for routine tasks and problems, boosting our team’s capacity to reliably deliver quality results.
Additionally, we have a fast-growing funnel of more sophisticated solutions that leverage the same underlying technology to target more complex problems, such as study design optimization. With a significant number of proofs of concept underway and even more apps in the planning stages, we’re investigating ways that AI can support programming, medical writing, site monitoring and audit trail review.
Responsible AI
All Parexel AI solutions are, and will continue to be, designed to support human decision-making. We create trustworthy tools that help our teams work more efficiently using the right data. For the foreseeable future, we will focus exclusively on human-in-the-loop models that rely on human judgement to mitigate the risk of computer error.
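One way to picture the human-in-the-loop principle is as a review gate: nothing a model produces enters the record until a person approves it. The sketch below is purely illustrative; the class and function names are invented and do not describe Parexel’s systems:

```python
# Sketch of a human-in-the-loop gate: every AI-generated draft must be
# explicitly approved by a person before it is committed anywhere.
from dataclasses import dataclass

@dataclass
class Draft:
    source: str       # which AI app produced the draft
    content: str      # the generated text
    approved: bool = False

def human_review(draft: Draft) -> Draft:
    # A real system would use a review UI; here we prompt on stdin.
    print(f"[{draft.source}] proposes:\n{draft.content}")
    draft.approved = input("Approve? (y/n): ").strip().lower() == "y"
    return draft

def commit(draft: Draft) -> None:
    if not draft.approved:
        raise PermissionError("Unreviewed AI output cannot be committed.")
    print("Committed to record:", draft.content)

commit(human_review(Draft("demo-app", "Generated CSR section draft ...")))
```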
Beyond this, we are designing a framework to ensure our AI solutions are conceived, developed, deployed and maintained responsibly. Today we have an AI Working Group that helps feed in requests from the business, a Steering Committee to provide governance, and a Center of Excellence to foster best practices. We have established guidelines that detail expectations for responsible use of AI and compliance with applicable laws. We believe that our focus on AI security and compliance will also build credibility with collaborators, sponsors, regulators and patients. And we will evolve and strengthen this framework to ensure we keep pace with emerging best practices.
Above all, every AI app must help us streamline and strengthen the practice of clinical research. We’ll never implement a technology solution simply because it is generating buzz. Even the most promising technology must prove its value within drug development, so we’ll continue to pursue solutions that contribute to our ultimate mission: getting medicines that matter to the patients who need them.
Curious how AI applications can benefit your clinical trials? Our experts are always available for a conversation. Contact us at the link below: