Keeping Your Eye On The Ball
Arijit Chakravarty
CEO @ Fractal Therapeutics | Model-Based Drug Discovery & Development
How To Get Your Molecule To The Clinic When Money Is Tight
In the biotech/pharma industry, drug development projects hold the power to make or break companies and careers. It’s hard enough as it is to get a molecule to the finish line, but a tight market for funding can make that so much more difficult!
How do we ensure that our candidate molecules put their best foot forward when money is tight? Are there things that we can do to ensure that a limited research budget doesn’t lead to reduced chances of success? In this post, I will share insights from my time as a Discovery Project Lead and as the head of a line function (and later a small company) focused on helping teams design and implement model-based drug development approaches.
Before we dig into the questions, let’s first review the structure of research spending. This article will focus on how to get a molecule smoothly to an Investigational New Drug (IND) application that allows entry into clinical trials. (A subsequent blog post will cover the same topic with a focus on getting to clinical proof-of-concept.) For the sake of convenience, we will stick to a single therapeutic area (cancer) and focus on conventional drugs (small molecules, biologics, antibody-drug conjugates, or peptides, as opposed to cell or gene therapies).
So, let’s talk numbers then. Ten years and a billion dollars out of pocket to bring a drug to market. About half of that spend is preclinical, meaning that a pharma/biotech should have about half a billion dollars to play with in the preclinical stage to get a candidate drug to the Investigational New Drug application stage. Most molecules fail in preclinical development, though, so the per-molecule numbers look quite different. A typical pharma or biotech spends about $6 to $8 million to get a molecule to the clinic, a number that’s surprisingly similar across a range of modalities and therapeutic areas.
Well, so what, you might say. A budget of $6 to $8 million still sounds like a tidy chunk of change, right? Here’s the thing- there’s a lot that needs to be done to take a molecule over the finish line. About half of the total cost ($3.8M) goes to figuring out the right formulation and manufacturing the drug product for the clinical trial. Of the remainder, half again ($1.8M) goes to toxicology studies, which are crucial for setting the starting dose and for building an understanding of what clinicians can expect in terms of adverse events in the clinic.
Still, close to $4M for the pharmacology and toxicology work, and close to $2M for the pharmacology alone, that should be enough, right? Trouble is, when you look at the “laundry list” of what needs to go into an IND package, it can be very daunting. There are exploratory studies, and there are GLP (Good Laboratory Practice) studies, often with the same goals. And some of these studies can get really expensive! Crucially, GLP toxicology studies will each cost hundreds of thousands of dollars. All of the budgeting assumes that things work right the first time too- there’s very little margin for error. Suddenly, that budget doesn’t look that big anymore!
A laundry list like this one can create the impression that preparing a drug for IND involves running a set of standard studies, writing up the study reports and performing QC (quality control) on the data. One can come away with the sense that knowing which studies are necessary, and executing them on time and on budget, is what is required for success.
Nothing could be further from the truth.
That IND won’t just write itself
I’ve worked on over sixty drug discovery & development projects in my two decades as a scientist and manager, both in Big Pharma and as an entrepreneur (running Fractal Therapeutics, a model-based drug development firm focused on helping clients design and implement their research strategy). The only consistent thing I’ve seen about the process is that no two projects are the same. Even when working on the same drug target, therapeutic area, and modality, every project has its unique set of challenges.
So, a cookie-cutter approach to IND preparation is almost certain to lead to failure. Not failure to execute on or deliver an IND. That’s the easy part. The failure that arises from an IND constructed on autopilot is more insidious than that. A tick-the-boxes IND can be submitted just fine, and it may even pass muster with the FDA (or it might not, more on that later).
The problem is this: absent a clear and precise understanding of the molecule’s dose-response relationships for efficacy, toxicity, and pharmacodynamics (PD, the drug’s effect on cells), the Phase I/II trial framework forms a poor basis for advancing the molecule beyond the point where the MTD (maximum tolerated dose) has been established. Many molecules fail at the end of Phase I/II because the clinical trial did not yield enough information to derisk any further investments. (For more on this topic, check out my earlier blog post, “Pitfalls in Drug Development”.)
It's a commonly held belief that “you don’t need to be smart, you just need to be lucky”, meaning that if you’re lucky and have a large number of responses in Phase I, then it’s all good. While that statement is trivially true, luck is not a strategy.
A thoughtful and scientifically rigorous IND enables your molecule to put its best foot forward during the clinical trial process. And, as an added benefit, it will lead to a far smoother review process with the FDA. Contrary to some perceptions, the FDA can sometimes take issue with the quality of the science in the IND. In particular, if the IND makes it look like the clinical trial is unlikely to succeed, that could be a problem. From the FDA CDER (Center for Drug Evaluation and Research) website, italics mine: “CDER supports public health goals by protecting subjects from participating in trials that are unlikely to support approval and by facilitating development so that beneficial drugs are available as soon as possible. Inefficient development may expose clinical trial subjects to unnecessary risk. Traditionally, CDER has used a proactive approach to respond to information provided in meeting packages.” Read between the lines!
Make sure you have a well-designed clinical trial
So, the best way to ensure a smooth path through IND review (and ultimately, a successful or marketable clinical asset) is to make sure that every design choice in the clinical trial is backed up with data. (The consequences of missing the bus on this are explained in detail in my companion blog post “Pitfalls in drug development”). These design choices form the crux of the Investigator’s Brochure (IB) that serves both as a regulatory document (part of the IND) and as a means of attracting prospective clinical investigators.
While every drug development program has its own unique set of challenges, there are also a set of challenges that are common to all programs. Here (in no particular order) is a list of questions that will make it into the IB and are best answered rigorously (think ‘letters’ format scientific paper for each one):
What starting dose should be used in the clinic? There are standard ways of calculating a starting dose based on (usually rat and dog) toxicology studies. For certain mechanisms (such as immune modulation), simply starting from a dose that is an arbitrary number of times lower than one found safe in animal studies is not enough. There was a high-profile clinical failure due to this in 2006, when a clinical trial of a CD28-modulating antibody landed all six volunteers in the ICU after the first dose. There are better ways to select a starting dose (for example, a Minimal Anticipated Biological Effect Level (MABEL) dose, at which only minimal target engagement is expected). Choosing a safe and scientifically rational starting dose is crucial to program success; examining the mechanism of action closely and choosing the appropriate strategy is the first step. Target engagement-based approaches need a translational PD assay in place at the time of the IND, so you will have to plan ahead! (It’s important to keep in mind that such failures can also occur from off-target drug effects for otherwise “safe” or “conventional” mechanisms of action, as was observed in a 2016 trial that again led to multiple hospitalizations and one death, so a MABEL approach is not a panacea.)
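For the conventional route, here is a minimal Python sketch of a NOAEL-based starting dose calculation, in the spirit of the FDA’s starting-dose guidance (body-surface-area scaling followed by a safety factor). The NOAEL values and the default ten-fold safety factor below are illustrative assumptions, not program data; a MABEL-based start would instead be anchored to the exposure/target-engagement relationship.

```python
# Illustrative sketch: NOAEL-based maximum recommended starting dose (MRSD)
# using body-surface-area (Km) scaling, per the general approach in the FDA
# starting-dose guidance. All inputs below are examples, not program data.

KM = {"mouse": 3, "rat": 6, "rabbit": 12, "dog": 20, "monkey": 12, "human": 37}

def human_equivalent_dose(animal_noael_mg_per_kg: float, species: str) -> float:
    """Convert an animal NOAEL (mg/kg) to a human-equivalent dose (mg/kg)."""
    return animal_noael_mg_per_kg * KM[species] / KM["human"]

def mrsd(animal_noael_mg_per_kg: float, species: str, safety_factor: float = 10.0) -> float:
    """Maximum recommended starting dose (mg/kg) after applying a safety factor."""
    return human_equivalent_dose(animal_noael_mg_per_kg, species) / safety_factor

if __name__ == "__main__":
    # Hypothetical NOAELs from the rat and dog toxicology studies
    candidates = {"rat": mrsd(50.0, "rat"), "dog": mrsd(15.0, "dog")}
    print(candidates)
    # The most conservative species typically drives the recommended start
    print("MRSD:", round(min(candidates.values()), 2), "mg/kg")
```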
How will the dose be escalated? The traditional approach to dose escalation (3+3) is inefficient: it can be slow, it doesn’t come with a defined and controllable risk of overdosing, and ironically it can lead to a selected maximum tolerated dose (MTD) that is lower than the true MTD. Bayesian approaches, where the dose escalation is dynamically determined based on the dose/PD or dose/toxicity relationships, are far more powerful but require a precise understanding of these relationships going in. (We have a white paper on this topic and will expand on it in a future blog post.)
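For intuition only, here is a toy sketch of the kind of Bayesian updating that drives model-based escalation designs, using a one-parameter, CRM-style model evaluated on a grid. The skeleton probabilities, prior width, and accrued data are all hypothetical; a real trial would use a validated design (CRM, BLRM, or similar) with formal overdose control and single-step escalation rules.

```python
# Toy, grid-based sketch of a one-parameter CRM-style Bayesian dose-escalation
# update. Skeleton, prior, and accrual data are hypothetical illustrations.
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])  # prior guesses of DLT probability per dose
target_tox = 0.25                                     # target DLT rate

def posterior_tox(n_treated, n_dlt, prior_sd=1.34, grid=np.linspace(-4, 4, 2001)):
    """Posterior mean toxicity per dose under p_i = skeleton_i ** exp(theta)."""
    p = skeleton[None, :] ** np.exp(grid)[:, None]            # tox prob for each (theta, dose)
    loglik = (n_dlt * np.log(p) + (n_treated - n_dlt) * np.log(1 - p)).sum(axis=1)
    logprior = -0.5 * (grid / prior_sd) ** 2
    w = np.exp(loglik + logprior - (loglik + logprior).max())
    w /= w.sum()
    return (w[:, None] * p).sum(axis=0)                       # posterior mean tox per dose

# Hypothetical accrual so far: 3 patients at dose 1 (0 DLTs), 3 at dose 2 (1 DLT)
n_treated = np.array([3, 3, 0, 0, 0])
n_dlt     = np.array([0, 1, 0, 0, 0])
post = posterior_tox(n_treated, n_dlt)
# Real designs would also cap escalation to one level and apply overdose control
next_dose = int(np.argmin(np.abs(post - target_tox)))
print("posterior mean tox:", np.round(post, 3), "-> recommend dose level", next_dose + 1)
```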
How will you anticipate what happens at the next dose level? This is a really crucial, and often overlooked, aspect of running an effective clinical trial. During dose escalation, understanding the likelihood of observing toxicity at the next escalation point helps the clinical investigators plan appropriately. In the CD28 example above, if the starting dose had been selected based on MABEL, the same PK/PD relationship could have been leveraged in the dose escalation by updating the preclinical projections with clinical data in a Bayesian framework (again, we have a white paper on this topic if you’d like to learn more). Once the MTD is reached, having a clear understanding of where the achieved dose lies on the PK/PD and projected PK/efficacy relationships can be a game-changer for building enthusiasm for further development of the molecule.
What dose route and schedule will be used? Dose route and schedule can profoundly affect the likelihood of success for a program, but ironically, they are often selected by teams at the very inception of a program. (“A once-daily oral drug will have the commercial advantage of being more convenient than the i.v.-dosed standard of care.”) All other factors being equal, convenience is a major driver of commercial success. All other factors are usually not equal, though: for serious diseases, a drug that performs better in disease control will often have a better commercial outcome even if it is less convenient. Sometimes, the more convenient formulation will simply not be able to deliver a viable drug at all. Sacrificing probability of success for convenience is a poor tradeoff (the market share of a failed drug is always 0%). One way around this is to finalize dose route and schedule selection after the in vivo/translational pharmacology work has been completed, and to structure that work explicitly around making a projected feasibility assessment of the selected dose route and schedule in the clinical setting.
What is the biomarker strategy? Biomarker strategy is one place where big pharma and small biotech programs look the most different. Small biotechs will sometimes have biomarker panels that are parsimonious to the point of being uninterpretable, while a biomarker panel from a big pharma program can sometimes be a clown car of unrelated measures whose very abundance makes it possible to point to success by some measure (any port in a storm!). The irony is that both kinds of biomarker panels represent a missed opportunity. The quality of the biomarker strategy can often determine whether or not a molecule makes it to later clinical development. Crucially, second chances are much easier to come by if your biomarker panel can be deployed to make the case for one. The key to a biomarker panel is to know what you are measuring and why. Each biomarker in the panel should be defensible on a mechanistic basis. If you cannot draw a straight line connecting a candidate biomarker to target engagement on the one hand and either efficacy or toxicity on the other, let it go. Even if it doesn’t cost an extra penny, biomarker data that has no interpretability can create trouble for you during clinical development (remember the FDA statement about proactively responding to information provided?). At a very minimum, the biomarker strategy should provide interpretability along the axis of drug action, so that dose --> PK --> selected biomarker --> efficacy (or toxicity) can be understood. When a program reaches the MTD and the observed efficacy is "meh", demonstrating strong target engagement at the MTD can be pivotal in supporting an investment in other indications, for example. (The devil's in the details for this, though, and careful translational PK/PD modeling can be incredibly valuable to ensure that the timing of the PK and PD samples maximizes your chances of success.) Coupling a biomarker panel with PK/PD relationships that are dynamically updated using a Bayesian framework can provide a practically useful dashboard for clinicians to base internal decision-making on during trial execution. This also greatly facilitates rational go/no-go decisions. (Check out our website for white papers that dive into this topic more deeply.)
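As an illustration of the dose --> PK --> biomarker link, here is a small Python sketch that fits a sigmoid Emax model to made-up exposure/target-engagement data and reads off the exposure needed for a chosen level of engagement. Every number in it is hypothetical; the point is simply that a biomarker earns its keep when it can be placed on a quantitative curve like this one.

```python
# Minimal sketch: tying a biomarker to exposure with a sigmoid Emax model,
# so that dose -> PK -> biomarker can be read off a fitted curve.
# Concentrations and responses below are made-up illustration data.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_emax(conc, e0, emax, ec50, hill):
    return e0 + emax * conc**hill / (ec50**hill + conc**hill)

conc  = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # plasma concentration (uM), hypothetical
inhib = np.array([2.0, 8.0, 22.0, 55.0, 78.0, 88.0]) # % pathway inhibition, hypothetical

params, _ = curve_fit(sigmoid_emax, conc, inhib, p0=[0.0, 90.0, 2.0, 1.0], maxfev=10000)
e0, emax, ec50, hill = params
print(f"EC50 ~ {ec50:.2f} uM, Emax ~ {emax:.0f}%")

# Exposure needed for, say, 70% of maximal target engagement
frac = 0.7
c_needed = ec50 * (frac / (1 - frac)) ** (1 / hill)
print(f"Concentration for {frac:.0%} of Emax: ~{c_needed:.2f} uM")
```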
How will the indication (patient population) be selected? False certainty is one of the single biggest drivers of translational failure in clinical trials. Program leads often face pressure to provide a tidy story, with all the loose ends tied up, for the IB. Often, this pressure can lead to overpromising. For cancer drugs, for example, this overpromising is most common in indication selection. Programs will justify their choice of indication based on in vitro data, using biochemical or genetic logic (“my drug targets Oncogene X, which is overexpressed in head and neck cancer (HNC). Patient population is Oncogene X-overexpressing HNC.”). The problem is that such in vitro data almost never pans out. As treatment failure is often driven by the expansion of low-frequency drug-resistant clones in a patient’s cancer, the expression status of the majority of cells in a tumor is poorly predictive of outcomes. A better way to approach the problem is to start with an all-comers Phase I/II trial and defer the identification of the sensitive population until after the trial is run. This can be done using techniques like tumor kinetic modeling, which are vastly more efficient at leveraging the clinical data to identify tumor types that are likely to respond. Retrospective analysis of Phase I/II data to formulate testable clinical hypotheses about sensitive patient subpopulations within the selected disease represents an alternative (and less risky) path for indication selection. This approach can be coupled with powerful (but expensive) patient-derived xenograft models in a Bayesian framework, to increase the confidence in the selected indication. (We have a couple of publications on this topic as well as white papers, so ping me if you would like to learn more!) Outside the cancer arena, similar disease-progress modeling, coupled with genetics, can be used to formulate mechanistic hypotheses for novel treatments.
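To give a flavor of what tumor kinetic modeling means in practice, here is a small simulation sketch of one commonly used model family (a Claret-type tumor growth inhibition model) with made-up parameters and a crude exposure proxy. It is for illustration only; real analyses fit such models to longitudinal tumor-size data from the trial.

```python
# Simulation sketch of a Claret-type tumor growth inhibition (TGI) model, one
# common family used in tumor kinetic modeling. Parameters and the exposure
# proxy are hypothetical, for illustration only.
import numpy as np

def tumor_size(t_weeks, y0=6.0, kg=0.02, kd=0.05, lam=0.03, exposure=1.0):
    """dy/dt = kg*y - kd*exposure*exp(-lam*t)*y, integrated with a simple Euler step."""
    dt = 0.1
    y, ts, ys = y0, [0.0], [y0]
    for t in np.arange(dt, t_weeks + dt, dt):
        dydt = kg * y - kd * exposure * np.exp(-lam * t) * y  # growth minus decaying drug kill
        y += dydt * dt
        ts.append(t); ys.append(y)
    return np.array(ts), np.array(ys)

if __name__ == "__main__":
    for expo in (0.0, 1.0, 2.0):   # e.g., relative exposure at different dose levels
        t, y = tumor_size(26, exposure=expo)
        change = 100 * (y[-1] - y[0]) / y[0]
        print(f"relative exposure {expo}: week-26 tumor size change {change:+.0f}%")
```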
Thus, focusing on the critical questions is vital for conserving resources on the path to the IND. Any of these critical questions can, of course, be “answered” by throwing a dart at a board; putting together rigorous answers, however, will greatly improve your molecule’s chances of making it through the clinical trial process.
Get your in vivo experiments right the first time
At this point, dear reader, you may be justified in saying “but where are the savings going to come from? You said we were going to discuss how to stretch my dollar, and instead you gave me a bunch of complicated suggestions that will lead to more work for me!” Interestingly, the suggestions in the previous section actually cut down the workload. A sizable chunk of the program’s focus shifts to building a precise quantitative understanding of dose-response relationships for (a focused set of) biomarkers, efficacy, and toxicity. This understanding is then leveraged repeatedly and updated with clinical data as development progresses.
But there’s another big way to cut costs, and that’s by not having in vivo experiments fail. “Great”, you might be forgiven for thinking, “that’s about as useful as buy-low-sell-high”. Well, actually, there are several practical ways to greatly reduce the odds of failure with in vivo experiments (we will use xenograft studies for cancer as the example, but these tips work just as well in other therapeutic areas):
Begin with the end in mind: Have a clearly defined objective for each study that you run. So, for example, if you’re focused on establishing the dose-response for efficacy, the study should only focus on a single schedule. Unless the question is “what is the best schedule?”, in which case there’s a study design strategy for that. Avoid mixing objectives, whenever possible, in an in vivo study design. Studies rarely work perfectly the first time, so packing two different objectives into one study will lead to two different directions for follow-up.
Take it one step at a time: It’s helpful to think of the ‘killer experiment’ as the culmination of a series of investigations, rather than as one grand study. As an example, say you want the killer ‘efficacy’ slide, where the drug causes a regression in mouse tumors. You pick dose, schedule, and xenograft model (tumor type) out of a hat and run a study. Seeing regressions on that first study is exactly like winning the lottery- just as exciting and just as likely. The way to break it down is to run a series of studies, each focused on a separate sub-question: what is the MTD in this mouse strain (that’s two experiments, actually- one for single-dose and the other for repeat-dose)? What is the best xenograft model (this is a set of small pilot experiments, running the MTD dose in several different xenografts)? What is the best dose schedule (this is a trickier question and requires a prior understanding of the PK, but can be addressed in a single efficacy study once the PK is in place, and you’ve modeled it out)? Armed with that information, you can pull together the right dose ranges for the pivotal efficacy study (it’s still a good idea to run a small pilot study ahead of time to understand the right dose levels). Three small studies with fifteen mice each, followed by one larger study with thirty-five mice, or a single large study with eighty mice? The former option wins every time.
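To give a flavor of the “modeled it out” step mentioned above, here is a back-of-the-envelope Python sketch that uses a simple one-compartment PK model to compare two hypothetical schedules by time above an assumed efficacious concentration. All of the parameters (absorption and elimination rates, volume, doses, and the target concentration) are made up for illustration.

```python
# Back-of-the-envelope sketch: compare dosing schedules with a one-compartment
# oral PK model before committing mice to an efficacy study. All parameters
# below are hypothetical illustration values.
import numpy as np

def conc_profile(dose_mg_per_kg, interval_h, n_doses, ka=1.0, ke=0.2, v=1.0, dt=0.05):
    """Superpose first-order absorption/elimination doses; return (time, concentration)."""
    t = np.arange(0, interval_h * n_doses, dt)
    c = np.zeros_like(t)
    for i in range(n_doses):
        tau = t - i * interval_h
        mask = tau >= 0
        c[mask] += (dose_mg_per_kg / v) * ka / (ka - ke) * (np.exp(-ke * tau[mask]) - np.exp(-ka * tau[mask]))
    return t, c

target = 2.0   # hypothetical efficacious concentration (ug/mL)
schedules = {"50 mg/kg QD": (50, 24, 5), "25 mg/kg BID": (25, 12, 10)}
for label, (dose, interval, n) in schedules.items():
    t, c = conc_profile(dose, interval, n)
    frac_above = np.mean(c >= target)
    print(f"{label}: {100 * frac_above:.0f}% of time above target concentration")
```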
Keep your ambitions in check: What looks like a simple study design on paper can quickly turn into a logistical nightmare in the animal room. If the scientist running the study tells you that a study will be challenging to conduct, listen to them. They will, of course, do their best to make the study work, but from my own experience in the mouse room, a study that is a headache to run is a study where the dose groups are more likely to get mixed up. A good study size is about 15-40 mice, with no more than 5-6 dose groups. (Pro tip: most people use way too many mice in their study groups as well. Using disease progress modeling approaches for in vivo studies can help you cut your study designs in half without compromising data quality, as we showed in a paper several years ago.)
Use model-based approaches: Another big advantage of smaller studies is that you can use powerful model-based approaches (such as D-optimality and PK/efficacy or PK/PD modeling) to guesstimate the results of experiments before running them. This kind of advance modeling of the study results is incredibly useful in trimming the study design: the limited real estate of your experiment becomes heavily focused on the dose groups that can most clearly demonstrate what you’re trying to establish. Modeling ahead of time doesn’t substitute for running the experiment, but it massively derisks the experiment when it is run. An effort of a few days in data analysis can save weeks or months on the timeline as you avoid uninformative in vivo experiments or dose groups.
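To make the D-optimality idea concrete, here is a toy sketch that ranks candidate dose panels for a hypothetical Emax dose-response by the determinant of the Fisher information matrix evaluated at “best guess” parameters. The parameter values and dose panels are assumptions for illustration; real programs would use dedicated optimal-design tools and their own fitted models.

```python
# Toy illustration of D-optimal design thinking: rank candidate dose panels for
# a hypothetical Emax dose-response by the determinant of the Fisher information
# matrix at assumed "best guess" parameters.
import numpy as np

def emax_sensitivities(dose, emax=100.0, ed50=10.0):
    """Partial derivatives of E = emax*d/(ed50+d) with respect to (emax, ed50)."""
    dose = np.asarray(dose, dtype=float)
    d_emax = dose / (ed50 + dose)
    d_ed50 = -emax * dose / (ed50 + dose) ** 2
    return np.column_stack([d_emax, d_ed50])

def d_criterion(doses, n_per_group=8):
    J = emax_sensitivities(doses)
    fim = n_per_group * J.T @ J   # Fisher information up to a residual-variance constant
    return np.linalg.det(fim)

designs = {
    "evenly spaced":  [5, 10, 15, 20],
    "log spaced":     [1, 3, 10, 30],
    "clustered high": [15, 20, 25, 30],
}
for name, doses in designs.items():
    print(f"{name:>14}: det(FIM) = {d_criterion(doses):.1f}")
```

In this toy example, the log-spaced panel that straddles the assumed ED50 scores highest, matching the intuition that informative dose groups should bracket the steep part of the curve.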
Know the results of the GLP experiments before you run them: The strategies discussed here are just as applicable to GLP studies as they are to exploratory studies. GLP experiments are expensive! So, while it’s tempting to skip straight to the GLP studies or to cut corners to get there as fast as possible, resist the urge. A failed GLP toxicology experiment is a catastrophe: in a small biotech the punishment for that can be as severe as a down round of funding, and in big pharma it can mean a missed IND filing milestone KPI. Let’s discuss how to avoid that, using the GLP rat toxicology study as an example. The final toxicology study will often have just three dose groups, with fifty rats in each group (150 rats). The “low”, “medium” and “high” dose groups must demonstrate different toxicities, otherwise the study’s a bust. The temptation is to pick three widely spaced dose groups, but then you run into the problem that the clinical starting dose derived from the study may end up too low as a result. The solution to that conundrum is to run the pilot study with a larger number of dose groups (say eight dose groups with six rats each) and use a model-based analysis (dose-response curves coupled with a D-optimality analysis) to select GLP toxicology dose levels that meet both criteria with the smallest number of rats. (We have a white paper on this topic. When we have applied this in real-life scenarios, it’s cut our GLP tox study sizes by half or more, while vastly reducing the risk of study failure.)
A one-bite-at-a-time approach to the in vivo pharmacology and toxicology, using multiple smaller studies that build on each other, is a proven way to reduce cost and risk during the lead up to the IND. Model-based approaches let you simulate experiments before you run them and focus your money on those dose groups that are most likely to be informative.
It’s not all unicorns and rainbows, though. The devil’s in the details for a model-based strategy, and the sequence of experiments matters. It’s important to work with a modeling partner during your in vivo campaign who can turn modeling analyses around quickly, and who actively provides input into the study designs most likely to support model building.
A useful tip when teeing up an in vivo campaign with this approach is to always think two or three studies ahead: book time with the CRO (or animal room) and make sure that you have the number of dose groups and animals per group mapped out several studies in advance. It’s a bit of a different way to think about the process, but once you get the hang of it, it gets a lot easier!
Lose the fluff
Another major area of potential savings en route to an IND can be a bit unintuitive: experiments that cannot directly support the IND or IB should be triaged ruthlessly. What makes it tricky is that many of these kinds of experiments are precisely the kind that Ph.D. biologists would find exciting if they read them in a paper. Even though these experiments make for “good data” in that Nature or Science editors may find them exciting, the issue with them is that their translational relevance is unknown. In other words, a negative result cannot be used to kill a program, and that’s a problem because it makes the positive result a lot less exciting for folks who are seasoned in drug development (and not surprisingly, for the FDA as well). Here are some examples of these kinds of experiments:
Unvalidated in vitro or in vivo model systems: Validated model systems show concordance between preclinical and clinical outcomes. This concordance has been established, for example, for xenografts (in different ways for traditional and patient-derived xenografts). Some “exciting” model systems (such as GEM models for cancer) show very poor predictive value in retrospective studies. So run them if you have investors (or senior management) who insist on seeing the data, but the FDA views these models very skeptically, and you should too! The same logic holds for other types of models, such as orthotopic models (in vivo) or hanging-drop and 3D models (in vitro). Which is not to say all innovative model systems are worth avoiding, far from it, but if a model lacks a rigorous dataset showing concordance between preclinical and clinical outcomes, you won’t be able to leverage it in the IND package. (The examples here were focused on cancer, but this is a frequent problem in other therapeutic areas, such as GI and CNS, as well.)
Combination studies: It’s often tempting to broaden the data package for a molecule before it enters clinical trials. Combination studies can be an easy way to do that- the efficacy seen in preclinical models with your candidate drug plus another agent is always going to be better than the efficacy seen with your candidate alone (or at least, it won’t be any worse). The problem is that one-off combination efficacy data reveals almost nothing about the combination potential of two drugs in the clinic (which is usually limited by toxicity). There is a systematic way to build support for a drug combination that is entering the clinic (we have a white paper and other publications on that topic), but it’s not trivial. So, if the clinical trial doesn’t specifically have a combination arm, then combination data can be a distraction.
Systems biology: These kinds of studies, coupling high-throughput datasets with mathematical modeling, are much beloved by the one-syllable journals, but have minimal translational predictive value (and hence minimal utility) in setting up a first-in-human trial. They’re also expensive, and they can be a rabbit hole in their own right. While they have their place in academic science, a drug discovery and development program will rarely benefit from studies of this nature, unless it serves the purpose of building enthusiasm among investors or senior management. Given that there are ways to build excitement around a program that also align with an increased probability of success, this is one category of work to consider trimming when times are tough.
One category of studies that didn’t make this list? Mechanism of action (MoA) work. Careful mechanistic studies in a validated model are not fluff, especially if they can be used to tie the in vitro, in vivo, and clinical pharmacology together systematically. For example, a number of years ago, I was part of a team responsible for identifying the cell biology underpinning the MoA of a novel anticancer agent. Switching departments, I then led the design and deployment of novel pathway biomarkers that leveraged this unique MoA - we implemented in vivo assays to demonstrate the MoA in xenografts and applied the assay to predict human efficacious dose and demonstrate clinical Proof of Mechanism. The work also provided insights into the development strategy, suggesting combination therapies and dosing schedules that we then went on to examine in the clinic. Teasing out the MoA and demonstrating the biology from the petri dish to patients is translation at its most fundamental. That kind of work is expensive, though! The approaches we discussed in this article represent a cheaper and more reproducible way of taking a molecule to the finish line.
Model-based drug development is faster and cheaper
The take-home from this post is that focusing ruthlessly on the specific questions that need to be answered for first-in-human trial design is the secret to doing more with less in preclinical drug development. The questions on the FDA’s critical path for clinical trials are all tractable with modeling. The FDA understands this language well, routinely using modeling and simulation (M&S) in its own evaluations, and the agency regularly puts out publications and white papers on this topic. (Worth a read, if you’re looking for more information.)
Model-based approaches can save money and time in several different ways, as we’ve discussed in this post. Crucially, much of the investment in supporting modeling also leads to better and tighter translational pharmacology and toxicology. Even if you never show a single PK/PD model in your IND package, a model-based approach will still yield data slides that show dose-response relationships more clearly, leading to cleaner and more comprehensive answers to the questions surrounding first-in-human clinical trials.
I'm passionate about this topic, because I believe that model-based approaches hold tremendous potential for cost-effective and rigorous drug development that brings better treatments to patients faster. So, I’m happy to share information: our webpage (https://fractaltx.com) has a range of white papers and publications that you can dig into if you’re looking for more. Alternatively, just ping me on LinkedIn; I’m always open to “talking science” about how these approaches can be used to make better drugs faster!