The Curse of the Pequod: How to Sink Your Drug R&D Program
Arijit Chakravarty
CEO @ Fractal Therapeutics | Model-Based Drug Discovery & Development
You guessed right, this is a Moby Dick allusion. Most of us are pretty familiar, at some level, with the story. The hunt for the white whale is a tale of obsession and ruin. Captain Ahab, with his singular fixation on Moby Dick, pursued his target relentlessly across the seas, disregarding the warnings of others and the risks to his ship, the Pequod, and its crew. Ultimately, Ahab’s fixation with a bad idea cost him his life.
Drug discovery and development can feel like Ahab’s hunt: a long and perilous journey chasing success that lies just over the horizon. In pursuit of curing diseases and saving lives, scientists and companies often fixate on a single mechanistic hypothesis — a “white whale” they believe will unlock the mysteries of a condition and lead to transformative therapies. But, as with Ahab, such a fixation can sometimes sink the entire endeavor.
At a deeper level, the hunt for the whale can also be seen as a metaphor for an epistemological quest — "man’s search for meaning in a world of deceptive appearances and fatal delusions.” In this article, we will discuss how force-fitting a mechanistic narrative, or following the wrong mechanistic hypothesis, can and often does lead a drug discovery and development program to its (metaphorically) watery grave.
The Mechanism of Action Can Be Really Useful in Drug Development
Before we go any further, though, it’s important to get one thing out of the way. Knowing the Mechanism of Action (MoA) of your drug is a good thing. If you have the correct MoA in hand, that can be incredibly valuable for a drug development program.
To gain regulatory approval, a drug must show efficacy in treating the targeted disease or condition. The main goal of Phase II trials is to establish a clear and convincing link between the treatment and clinical improvement. Efficacy is shown by demonstrating that the observed benefits in the treatment group are not only statistically significant but also causally related to the intervention itself. This requires ruling out alternative explanations, such as chance, bias, or confounding factors, so that you can confidently attribute the observed disease modifying effects to the drug.
Now, in a perfect world, a well-designed trial — sufficiently large, randomized, and representative — should provide robust evidence of causation, making it clear that it’s the intervention that’s driving the outcomes. In the real world, given the constraints of clinical studies, things don’t always work out so cleanly. Here’s where a solid understanding of the hypothesized mechanism of action can add substantial credence to the clinical results. When the mechanism is well-established and biologically plausible, it lends credibility to the efficacy claim. To the extent that a project team has a good handle on the MoA, it can also be incredibly useful in terms of guiding thinking around hypotheses to be tested around combinations, dose schedule and indication selection. A valid MoA, leveraged thoughtfully in the decision-making process, can make the difference between success and failure for a program.
Currently, mechanistic evidence is often treated inconsistently during drug approval processes. Phase III trial data are meticulously scrutinized, while discussions of mechanisms tend to occur informally during approval meetings, relying heavily on expert opinions without a standardized framework for assessing their relevance. This ad hoc approach underestimates the importance of mechanisms in the overall evidence base.
All of that is a best-case scenario, which assumes that you have the right MoA in hand. But a lot can go wrong — and often does.
When Does the MoA Fail to Deliver Value?
Mechanism of Action (MoA) failures represent a subtle, but surprisingly common, failure mode in drug development. While mechanistic insights can guide development, they often falter in practice.
Ways in Which Mechanistic Data Can Sometimes be Wrong
The idea of mechanistic data being wrong opens up a rabbit hole, unfortunately, but it does happen. There are several ways in which this might come to pass, and it’s worth touching on them briefly.
On the whole, talk about a lack of reproducibility veers in one of two directions — either people tend not to think about it, or they take a position of epistemological nihilism (“you can’t trust any of the preclinical research, so let’s just take it into the clinic and see what happens”). Both approaches are flawed, as we will discuss in depth later. A better approach is to build a mechanistic understanding from papers spanning multiple labs and using a variety of different experimental techniques. Then, once you have a working hypothesis of your mechanism, focus in-house experimental efforts on confirming the aspects of the MoA that are critical for the development strategy. This approach is not a panacea — it won’t help you in situations where the field as a whole is working with a wrong mechanistic paradigm. But it will reduce your risk exposure when it comes to most of the other potential sources of wrong mechanistic information.
Before we go further, it’s worth pointing out that wrong turns with MoA can happen to anyone. The reasons behind a failure of MoA are complex and subtle, and some of it — like Moby Dick and Ahab — is deeply embedded in human nature.
At its heart, constructing a mechanistic narrative is an epistemological quest, driven by a desire to build a clear and rational picture from (often messy) biological and clinical data. This is a noble and often critical activity, but it can also be a trap in certain situations. Let's dig in a little more to see how that happens.
Belief And Knowledge Are Different
We often think of biology and medicine advancing in a steady, linear fashion, with each new discovery adding to a solid foundation of knowledge. But, in reality, the path of progress is rife with detours and dead ends. Many beliefs that were once celebrated as breakthroughs would now seem strange to us — the idea that neurons formed a continuous network throughout the body (1906 Nobel Prize for Medicine) or the idea that lobotomies are a “cure” for depression or anxiety (1949 Nobel Prize), for example.
In the moment, though, widely held beliefs are indistinguishable from knowledge. Often, it is these broadly accepted beliefs that merit the most scrutiny, much like the phrase “it is widely known that” in a paper (the one that usually ends without a reference to support it).
An example of a widely held, but misleading, belief in biology with direct implications for drug discovery and development is the idea that antimitotic drugs (for example, microtubule-targeting drugs such as taxanes, and those targeting spindle proteins, such as Aurora and Polo kinases) cause mitotic arrest followed by apoptosis. While this is often the case in hematological tumors, live-cell video microscopy studies in the early 2000s overturned this simplistic picture in solid tumor cells. Antimitotic drugs cause a transient mitotic arrest in many solid tumor types, followed by heterogeneous terminal outcomes (death or stable cell-cycle arrest). It was also shown that even transient mitotic arrest can lead to the induction of DNA double-strand breaks, which profoundly impair tumor cell viability. If you’re not aware that neither prolonged mitotic arrest nor apoptosis is necessary or sufficient for mitotic disruption to cause a loss of tumor cell viability, you could draw some profoundly wrongheaded conclusions about how to develop antimitotic agents (as can be seen in this review, which jumped off the deep end on precisely this point, failing on several counts to grasp the lack of mechanistic significance of mitotic arrest, and teeing up straw-man arguments such as: “studies have shown that inhibitors targeting mitosis cannot be expected to stabilize tumor growth by arresting cells in mitosis for prolonged periods of time”).
If you start from the wrong set of assumptions, you will eventually make your way to an incorrect inference. In the case of antimitotics, the belief that they exert their effect through interphase microtubule disruption has become fairly widely held in the community (despite the absence of toxicities in non-dividing tissues for mitotic kinase inhibitors).
Belief and knowledge are not the same thing: widely held beliefs can still be (and often are) wrong. This has significant implications for the utility of mechanistic work.
The Seductive Promise of Mechanistic Certainty
Most scientists in drug discovery and development come from a biology background, with Ph.D.s in reductionist fields (such as molecular biology, biochemistry, or genetics).
Things get really tricky when working on a new program, because the program is rarely in the exact field in which we did our Ph.D. (Speaking for myself, I’m a biochemistry Ph.D. who studied mitotic spindle assembly; can you tell?)
As such, we are trained to think in terms of a stepwise linear pathway of events.
From my own experience working as a project lead for multiple programs in preclinical and early development, there is a real danger of latching on to this search image too early. Our brains are wired to seek certainty, which is a quality that is in short supply for drug discovery programs based on emerging science.
So, you jump into a new subfield, feet first, and there’s a stack of papers in front of you (literally or metaphorically speaking). And then, in the middle of all that sensory overload (proteins you’ve only vaguely heard of before, and findings that seem to contradict each other in model systems that are all subtly different from each other), along comes a tidy pathway diagram.
And of course, suddenly everything falls into place. There’s a logical structure here: a cascade of oncogenes that phosphorylate each other in a neat linear sequence. It all makes sense now. Ras overactivation leads to Raf hyperphosphorylation, which leads to Mek hyperphosphorylation, which leads to Erk hyperphosphorylation.
Using this paradigm to read papers on the Ras/Raf/Mek pathway can make everything seem very straightforward. The MoA outlined here can guide patient selection, dose scheduling, combination therapies and so much more!
There’s just one small catch. It’s wrong.
Not wrong in the sense that every single thing about it is wrong. If you inhibit Raf signaling in a cell line that’s expressing pMek, pMek signaling will be inhibited as well, just like the diagram says.
But pretty much no cell line or tumor will ever show all four of these proteins constitutively hyperphosphorylated, because tumors in tissue culture and in the clinic are constantly evolving. This staggering heterogeneity is the signal, not the noise, in clinical cancers. Figures like this depict “frankencells,” cobbled together from a vast array of published papers that usually focus on one or two steps in the cascade in a handful of cell types. While the individual papers are sometimes (but not always) valid, the aggregate picture is misleading. If you try to reproduce that work in-house, with cell lines or in vivo, you will find that it doesn’t pan out. (These negative results, from my own experience running in vitro and in vivo pharmacology teams working on this pathway, are not publishable!) To make matters worse, the last step in that sequence (from Erk to “growth, proliferation and survival”) is vague. While Erk inhibition does kill and arrest cells, that outcome has been demonstrated to be associated with lesions that are known to be lethal to cells (such as DNA double-strand breaks and mitotic disruption). “Pro-survival signals” are as useful in a mechanism of action as they would be in day-to-day life. We don’t say “don’t damage your brain, it’s sending pro-survival signals.”
None of this is to argue that Raf lacks value as an anticancer target, or that Ras-Raf-Mek-Erk signaling is not a thing, and it certainly doesn’t argue against the use of MoA in drug development. But the “oncogene addiction” paradigm that this mechanistic diagram outlines has not borne out useful insights in clinical development. Just for this one pathway, hundreds of programs (and many billions of dollars) have been poured down the drain in the labs of Cambridge alone. While Raf inhibition has been successful, Erk inhibitors have never been approved, and Mek and Ras inhibitors struggled for decades, finding success mainly in combinations or in narrow, genomically defined niches rather than as the broad pathway-addiction therapies the paradigm promised. Patient selection based on the simplistic view of MoA that the oncogene addiction paradigm promises has not succeeded either, despite review papers promising us that it was just around the corner for decades now. (Incidentally, one can argue that the “frankencell” problem is particularly rampant in Nature Reviews papers, which seem to specialize in promoting a false sense of certainty around mechanism of action.)
In neurodegeneration, the beta-amyloid hypothesis of Alzheimer’s disease has its origins in the discovery of the disease over a hundred years ago, dating back to an era when scientists believed that the structure of the brain determined its functions. This theory posits that accumulation of beta-amyloid plaques is the primary driver of neurodegeneration. The hypothesis has been controversial for decades, and hundreds of clinical trials based on it have failed. Approved therapies based on the hypothesis have shown incremental benefit, at best, with significant toxicities. Many drug candidates have shown a reduction in beta-amyloid plaques without demonstrating an efficacy benefit, and the underlying mechanistic hypothesis has grown increasingly complex. Making matters worse, the field as a whole has been plagued by findings of fraud. Still, much of the Alzheimer’s research establishment has maintained a rigid belief in the validity of the mechanistic hypothesis, billions of dollars in NIH funding are awarded to research focused on it, and clinical trials based on the beta-amyloid MoA crop up every year. The mechanistic hypothesis is in trouble, and a dogmatic belief in it is likely holding back Alzheimer’s disease research. But you might not realize that if you don’t read the coverage with a critical eye!
?
Linguistic Epicycles: Hallmarks of a Paradigm in Crisis
For both the Oncogene Addiction and the Beta Amyloid hypotheses, new studies have tended to lead to a subtle reframing of the hypothesis, rather than confirming it. For example, when patient selection efforts that flowed directly from oncogenic pathways failed to show clinical benefit (e.g., Mek-overexpressing tumors don’t show increased clinical benefit with Raf inhibitors), the mechanistic narrative changed. Some would argue that the “oncogene addiction” was context dependent, others spoke of “oncogenic shock,” and still others spoke of “oncogene amnesia.” Over time, the crystal clarity of the original hypothesis has become muddied with caveats. As each new finding undermines a different tenet of the mechanistic hypothesis, we can expect to see the hypothesis reframed yet again.
Similarly, with the Beta Amyloid hypothesis, proponents continue to argue for the relevance of plaques, with the idea morphing from “plaques are causal in neurodegeneration” to “plaques are associated with neuroprotection.”
These mechanistic “epicycles” serve to keep the hypothesis alive but continue to add complexity to the original mechanistic hypothesis. Adding (metaphorical) epicycles is a sure-fire way to ensure that your theory doesn’t hold up to Occam’s razor.
In Thomas Kuhn’s seminal book, “The Structure of Scientific Revolutions,” he describes a "paradigm in crisis" as a scientific framework or model that no longer adequately explains observed phenomena or resolves critical problems in its domain. There are some tells that can clue you in to when a proposed MoA, no matter how widely accepted by practitioners in the field, is a paradigm in crisis:
1. Accumulation of anomalies: Anomalies pile up, and persistent issues crop up that the canonical paradigm cannot explain. These anomalies are not easily dismissed as experimental errors or minor deviations.
2. Loss of predictive power: As new data shows up, the paradigm itself is updated. In other words, predictions flowing from the paradigm are wrong, and the paradigm is changed to accommodate the new data.
3. Special pleading: The paradigm is ‘patched up’ with assumptions or ad hoc modifications that amount to case-by-case reasoning or special pleading. The simplicity of the original paradigm is lost in a thicket of exceptions.
4. Decline in confidence: There are rumblings of skepticism about the paradigm's validity. You usually have to go digging for these in second- or third-tier journals, as Nature and Science will remain committed to the orthodox view to the bitter end.
5. Emergence of alternatives: Alternative frameworks begin to appear, often driven by new ideas, methodologies, or technologies.
Eventually, the crisis resolves with the emergence of a new paradigm, but this process can take decades. (Kuhn’s book is a real page-turner, and well worth the time if you’re looking for a relatively quick read that makes you think about the way in which we learn about things in science.)
What does this mean for those of us engaged in or using mechanistic work in drug discovery and development? Academic scientists make their homesteads on a patch of intellectual ‘land’ and farm it their whole lives. Those of us in industry, on the other hand, are nomads. New projects involve new diseases, new pathways and a fresh stack of papers to master. We rely on the literature published almost entirely by academic scientists in order to find our way around unfamiliar lands. The paradigm is the implicit worldview that influences how data is interpreted in a field.
Learning to spot the tells of a paradigm in crisis is a valuable skill in this context. Coming back to our original example — say you’re learning about a new field, and the first paper you read (the Nature Reviews paper, of course!) makes it all look very straightforward. Then you read a set of reviews (in lower-ranked journals) and you notice that there are subtle tweaks to the original mechanism, and the experimental papers are even more contradictory. If you hear it said that microenvironment and context are important in understanding the MoA, your antennae should go up. You might be looking at a paradigm in crisis.
The Devil is in the Details When It Comes to MoA
Wrong data, paradigms in crisis, false assumptions. It all sounds a bit discouraging.
At this point you might find yourself thinking, “so then why not just put it in patients and see what happens?” Well, it turns out that this is a bad idea too! The big problem with an excessive focus on empiricism, especially when it’s conducted in an ad hoc way, is that you might not end up learning anything at all from the clinical trial if it fails. Epistemic nihilism is self-fulfilling.
During both preclinical and clinical drug development, there are many choices that need to be made (dose, route, schedule, indication). These choices are easy to make, but difficult to get right. We’ve discussed before that being able to rationally make those choices is a big part of what drives the difference between the success and failure of programs.
The success or failure of a drug ultimately hinges on the Therapeutic Index (TI) — the ratio between a drug’s toxic dose and its effective dose. The make-or-break question during development is: at the Maximum Tolerated Dose (MTD), is there enough drug at the site of action to inhibit the target sufficiently to cause disease modulation? Because the TI is determined by the choices made during development, it is critical to be able to deconstruct the impact of specific choices on the TI, and breaking down the action of the drug into steps makes this process easier. Setting up a pharmacological audit trail that links dose to pharmacokinetics (PK) to pharmacodynamics (PD) is a key piece of what drives rational decision-making during development. (See this article for more on that topic.)
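The dose-to-PK-to-PD chain can be sketched in a few lines of code. The sketch below is purely illustrative: the one-compartment exposure formula, the Hill-equation parameters, the MTD, and the 90% inhibition threshold are all hypothetical placeholders, not values from any real program.

```python
# Sketch of a pharmacological audit trail: dose -> PK -> PD -> decision.
# All parameters below are hypothetical, for illustration only.

def steady_state_exposure(dose_mg, clearance_l_per_h=5.0, interval_h=24.0):
    """Average steady-state concentration (mg/L) for repeat dosing,
    assuming complete bioavailability: Css = Dose / (CL * tau)."""
    return dose_mg / (clearance_l_per_h * interval_h)

def target_inhibition(conc_mg_per_l, ic50_mg_per_l=0.05, hill=1.0):
    """Fractional target inhibition from a simple Hill equation."""
    c = conc_mg_per_l ** hill
    return c / (c + ic50_mg_per_l ** hill)

def audit_trail(dose_mg, mtd_mg=400.0, required_inhibition=0.9):
    """Walk the dose -> PK -> PD chain and flag whether a dose is both
    tolerable and predicted to inhibit the target 'enough'."""
    css = steady_state_exposure(dose_mg)
    inhib = target_inhibition(css)
    return {
        "dose_mg": dose_mg,
        "tolerable": dose_mg <= mtd_mg,
        "Css_mg_per_L": round(css, 3),
        "inhibition": round(inhib, 3),
        "sufficient_PD": inhib >= required_inhibition,
    }

for dose in (50, 200, 400, 800):
    print(audit_trail(dose))
```

The point of the exercise is not the arithmetic; it is that each link in the chain is explicit, so when a program stumbles you can ask which step failed (exposure, target engagement, or the link from engagement to disease modulation).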
As long as a PD biomarker is on the causal chain between target inhibition and disease modulation, it retains its utility in the audit trail: a causal biomarker must be modulated (at least to some extent) for the downstream effect to kick in, whereas a non-causal biomarker need not be. We know this part intuitively. First-generation antihistamines such as diphenhydramine block histamine and relieve itching, but they also make you drowsy. Second-generation antihistamines such as loratadine (Claritin) revolutionized the treatment landscape upon release in 1993, as loratadine was non-sedating and safe while still being highly effective against allergies (wide therapeutic index). So, if you used sedation as a biomarker for antihistamine activity, you might have been able to make rational decisions for diphenhydramine. In particular, if you took Benadryl and you weren’t feeling even a bit drowsy, chances are you might not have taken enough. On the other hand, the sedation was not causal for the antihistamine effect, so the lack of sedation with Claritin could not be used to infer a lack of effect. The context matters, and non-causal biomarkers aren’t really all that useful.
Now here’s where things get interesting: a short while back, we discussed the Ras-Raf-Mek pathway, and we talked about how the pathway was essentially aspirational, as the oncogenic “addiction” that the mechanism promised never bore out in practice. Because tumors are not actually “addicted” to the pathway, the inhibition of downstream pathway biomarkers in the cascade (for example phospho-Mek (pMek) or phospho-Erk (pErk)) is not sufficient to guarantee efficacy in a tumor cell line, xenograft model or patient.
But Ras-Raf-Mek-Erk signaling is a real thing — Mek and Erk lie on the causal chain for Raf signaling. In other words, the inhibition of the downstream markers is necessary for efficacy. This is a subtle but critical point — necessary but not sufficient biomarkers still provide useful information. If you have a Raf inhibitor that fails to inhibit pMek or pErk at clinically relevant concentrations, that inhibitor is unlikely to show robust efficacy in the clinic. So, such biomarkers can still be used to set up go/no-go decisions in the clinic.
It may sound contradictory at first blush to say that the Oncogene Addiction hypothesis has failed to deliver on its promise, but biomarkers based on the Ras-Raf-Mek pathway can still provide utility in clinical development. The nuance is important, though. The key to understanding this discrepancy is that tumors are not addicted to Raf — they don’t require it for survival. That said, if you're trying to develop a selective Raf inhibitor and it fails to inhibit the pathway in patients at the MTD, you're not going to see efficacy.
A “biological” view of the pathway is based on a wrong mechanistic hypothesis (“Oncogene Addiction”): tumors that are sensitive to Raf inhibition (which may or may not be the ones overexpressing Raf) can quickly evolve to develop resistance to it, making Raf overexpression a poor basis for patient selection. A “pharmacological” view of the pathway, on the other hand, uses the downstream biomarkers to infer the extent of pathway inhibition. This can be incredibly useful during development, provided it is interpreted with care. This is a subtle point, but a crucial one. As long as a biomarker is on the causal chain, even if it’s necessary but not sufficient, it can provide useful information. Rigor and caution can draw actionable insights where false confidence might sink a program.
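The asymmetric inference that a necessary-but-not-sufficient biomarker supports can be captured in a deliberately small decision rule. The biomarker and the 80% inhibition threshold below are hypothetical, chosen only to make the logic concrete:

```python
# Hedged sketch: a necessary-but-not-sufficient PD biomarker
# (e.g. pErk inhibition for a hypothetical Raf inhibitor) framing a
# go/no-go call. The 0.8 threshold is an illustrative placeholder.

def go_no_go(perk_inhibition_at_mtd, threshold=0.8):
    """A biomarker on the causal chain supports asymmetric inference:
    failing to modulate it predicts failure, but modulating it does
    not guarantee efficacy."""
    if perk_inhibition_at_mtd < threshold:
        # Necessary condition not met: efficacy is very unlikely.
        return "no-go: pathway not adequately inhibited at the MTD"
    # Necessary condition met; efficacy remains to be demonstrated.
    return "go: pathway inhibited, proceed to test efficacy"

print(go_no_go(0.3))
print(go_no_go(0.9))
```

Note the asymmetry baked into the return strings: the "no-go" branch is a prediction of failure, while the "go" branch is only permission to keep testing, never a prediction of success.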
A Systems Model of a Wrong Mechanism is a Wrong Systems Model
At Fractal Therapeutics, we focus on model-based drug development, so at this point you’re probably expecting me to say, “luckily, mathematical modeling can fix your problems for you!”
I’m sorry to inform you that a pig with lipstick on it is (checks notes) still a pig. Getting the mechanism wrong, as we’ve seen earlier in this article, can tee up a set of faulty assumptions that then drive the wrong decisions. Even a subtle difference in the understanding of how a mechanism works (for example, viewing Raf as an essential gene whose inhibition can kill some tumors versus an oncogene providing essential “growth signals” to “addicted tumors”) can move a putative MoA from the asset to the liability column for a drug development project.
It goes without saying, then, that large systems models of mechanisms should be approached with extreme caution during drug discovery and development. Many such models are built with correlative data (usually big data), which lack the ability to assign causality to elements in the model and can sometimes be developed entirely from in vitro datasets. Such complex “mechanistic” models can often make great Nature or Science papers, but they are not suitable for decision-making in the real-world setting of drug development. Using such models, even as window dressing, can tee up inaccurate expectations for a program and drive teams to make mistakes in their development choices. Some models are built in a high-throughput fashion with “frankencells” cobbled together from published literature. In much the same way as Nature Reviews articles, these models cause more problems than they solve!
Done right though, model-building can be very useful as an exercise when working with a putative MoA, especially if the process of model building clearly identifies the assumptions underlying the MoA and surfaces them for discussion. Simply articulating each assumption explicitly and identifying the specific data that supports each assumption is a good way to understand the limitations of an MoA. Of course, you don’t need a mathematical model for that! That said, an MoA where the causal links have been clearly identified and supported by experimental data can benefit from systems modeling, as the behaviors of the system can be laid out more thoroughly and explored. This is tremendously useful for complex modalities such as ADCs or radiotherapeutics, where the individual elements of the model are not in doubt, and the system is capable of behaving in ways that are unintuitive. Models that focus on pharmacological mechanisms (target binding, internalization, payload release) are far more tractable to these kinds of systems approaches than those that focus on biological mechanisms (pathway signaling).
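As a toy illustration of what a tractable “pharmacological” systems model looks like, here is a deliberately minimal ODE sketch of an ADC’s causal chain (target binding, internalization, payload release). Every rate constant and concentration is a hypothetical placeholder; a real model would be calibrated against experimental PK/PD data and would include drug clearance, target turnover, bystander effects, and more.

```python
# Minimal systems sketch of an ADC's pharmacological mechanism:
# binding -> internalization -> payload release.
# All rates (1/h) and concentrations (nM) are hypothetical placeholders.
from scipy.integrate import solve_ivp

k_on = 0.1    # ADC-target binding rate (1/(nM*h))
k_off = 0.01  # unbinding rate (1/h)
k_int = 0.05  # internalization rate of the bound complex (1/h)
k_rel = 0.2   # intracellular payload release rate (1/h)

def adc_model(t, y):
    adc, target, complex_, internalized, payload = y
    bind = k_on * adc * target - k_off * complex_
    return [
        -bind,                                # free ADC
        -bind,                                # free surface target
        bind - k_int * complex_,              # surface ADC-target complex
        k_int * complex_ - k_rel * internalized,  # internalized complex
        k_rel * internalized,                 # released payload
    ]

y0 = [10.0, 5.0, 0.0, 0.0, 0.0]  # nM: ADC, target, complex, internal, payload
sol = solve_ivp(adc_model, (0.0, 100.0), y0)
payload_released = sol.y[4, -1]
print(f"payload released at t=100 h: {payload_released:.2f} nM")
```

Even this cartoon shows why such models earn their keep: the steps are individually uncontroversial, yet the system's behavior (e.g., how payload delivery saturates with target density) is not obvious from inspection, which is exactly where simulation beats intuition.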
At its heart, though, understanding and leveraging an MoA effectively is about rigorous pharmacology and attention to detail, and computational models can play – at best – a supporting role in the exercise.
An MoA Is A Hypothesis, And Clinical Trials Are Its Test
If there’s one thing to take away from this article, it is this — a mechanism of action is a hypothesis. It’s validated at the point at which mechanistic PD is connected to disease modification in the clinic, according to the predictions that were made from it. The role of preclinical work should be to frame the clinical trial in a way that it acts as a clean test of the hypothesis.
The epistemological aspect of deriving an accurate MoA from preclinical data and understanding exactly what is (and isn’t) known about the MoA is crucial, as false certainty can paint a program into a corner. In a subsequent article, we’ll put these ideas into practice, talking about the tactical aspects of moving a candidate drug forward under conditions of low certainty about its MoA.
Like Captain Ahab’s obsession with Moby Dick, fixating on the wrong MoA can lead to ruin. An inaccurate picture of the MoA can quickly sink a program, leading to poor choices for dose schedule, dose route, patient population, and indication. Having an accurate picture of the MoA of a candidate drug, on the other hand, serves as insurance for the program, adding weight to the clinical results if they fall short due to trial design limitations and facilitating rational troubleshooting.
Anchoring drug development in careful inference and epistemic rigor allows a project team to extract what they need from an MoA to move a program forward and helps them avoid the fate of the Pequod.
-Arijit Chakravarty and Madison Stoddard