The Curse of the Pequod: How to Sink Your Drug R&D Program

You guessed right, this is a Moby Dick allusion. Most of us are pretty familiar, at some level, with the story. The hunt for the white whale is a tale of obsession and ruin. Captain Ahab, with his singular fixation on Moby Dick, pursued his target relentlessly across the seas, disregarding the warnings of others and the risks to his ship, the Pequod, and its crew. Ultimately, Ahab's fixation on a bad idea cost him his life.

Drug discovery and development can feel like Ahab’s hunt: a long and perilous journey chasing success that lies just over the horizon. In pursuit of curing diseases and saving lives, scientists and companies often fixate on a single mechanistic hypothesis — a “white whale” they believe will unlock the mysteries of a condition and lead to transformative therapies. But, as with Ahab, such a fixation can sometimes sink the entire endeavor.

At a deeper level, the hunt for the whale can also be seen as a metaphor for an epistemological quest — "man's search for meaning in a world of deceptive appearances and fatal delusions." In this article, we will discuss how force-fitting a mechanistic narrative, or following the wrong mechanistic hypothesis, can and often does lead a drug discovery and development program to its (metaphorical) watery grave.


The Mechanism of Action Can Be Really Useful in Drug Development

Before we go any further, though, it’s important to get one thing out of the way. Knowing the Mechanism of Action (MoA) of your drug is a good thing. If you have the correct MoA in hand, that can be incredibly valuable for a drug development program.

To gain regulatory approval, a drug must show efficacy in treating the targeted disease or condition. The main goal of Phase II trials is to establish a clear and convincing link between the treatment and clinical improvement. Efficacy is shown by demonstrating that the observed benefits in the treatment group are not only statistically significant but also causally related to the intervention itself. This requires ruling out alternative explanations, such as chance, bias, or confounding factors, so that you can confidently attribute the observed disease modifying effects to the drug.

Now, in a perfect world, a well-designed trial — sufficiently large, randomized, and representative — should provide robust evidence of causation, making it clear that it’s the intervention that’s driving the outcomes. In the real world, given the constraints of clinical studies, things don’t always work out so cleanly. Here’s where a solid understanding of the hypothesized mechanism of action can add substantial credence to the clinical results. When the mechanism is well-established and biologically plausible, it lends credibility to the efficacy claim. To the extent that a project team has a good handle on the MoA, it can also be incredibly useful in guiding thinking around the hypotheses to be tested for combinations, dose schedule, and indication selection. A valid MoA, leveraged thoughtfully in the decision-making process, can make the difference between success and failure for a program.

Currently, mechanistic evidence is often treated inconsistently during drug approval processes. Phase III trial data are meticulously scrutinized, while discussions of mechanisms tend to occur informally during approval meetings, relying heavily on expert opinions without a standardized framework for assessing their relevance. This ad hoc approach underestimates the importance of mechanisms in the overall evidence base.

All of that is a best-case scenario, which assumes that you have the right MoA in hand. But a lot can go wrong — and often does.


When Does the MoA Fail to Deliver Value?

Mechanism of Action (MoA) failures represent a subtle, but surprisingly common, failure mode in drug development. While mechanistic insights can guide development, they often falter for one of the following reasons:

  1. The MoA is valid, but it doesn’t happen at the Maximum Tolerated Dose (MTD): A drug may achieve its desired MoA only at doses that are too toxic for clinical use. For example, an enzyme inhibitor being developed as an oncology drug may need to reach concentrations that cause unacceptable off-target effects before it can sufficiently inhibit its target. Often, the same enzyme has functions in both tumor and healthy tissue, so that the on-target toxicities kick in before the antitumor effects do in humans. This is the most likely explanation for the rash of failures of programs in the cancer metabolism space, where dose-limiting toxicities prevent the target pathway from being fully engaged at the MTD. (While tumors don’t do well when the Krebs Cycle is inhibited, neither do people. And unlike people, tumors can quickly evolve their way around things that kill them.)
  2. The MoA is valid, but it doesn’t happen at the clinically selected dose: Sometimes, clinical studies are not designed to rigorously establish pathway inhibition at the Recommended Phase II Dose (RP2D). As an example, the IGF-1 receptor (IGF-1R) was a highly sought-after target twenty years ago, with promising mouse efficacy data and low levels of toxicity in humans. Despite 183 clinical trials and billions in expenses, these therapies failed to translate to humans. In most cases, no MTD was identified during the trial, and patients showed no responses either. Was the MoA fully engaged at the clinically selected doses for the IGF-1R programs? It’s far from clear. This type of risk is entirely avoidable — asking the question “is the MoA fully engaged at the RP2D?” is a critical go/no-go stage gate (see the sketch after this list). Building in a translational pharmacology strategy that focuses on target/MoA engagement can keep you from going down a path that ends in an expensive Phase III failure and five wasted years of clinical studies (ask us how!).
  3. The MoA happens only in preclinical models, not in patients (or the other way around): Sometimes, a mechanism is specific to the animal models being used to study the candidate drug preclinically. This can happen with preclinical efficacy models, as in the case of the STING agonist DMXAA, which induces an increase in cGAS-STING signaling in mice. Unfortunately, this response was not observed in humans, leading to the drug’s failure. Sometimes the opposite can happen too, which can be especially problematic when it comes to toxicities. For example, in the case of the CD28-targeting monoclonal antibody TGN1412, primate toxicology models failed to predict the drug’s T cell superagonism in humans. Despite a conservative 500-fold safety factor applied to the no adverse effect level (NOAEL) derived from primates, all six participants in the first dosing cohort experienced catastrophic organ failure requiring intensive care. In this case, neglecting species-specific differences in the mechanism led to disaster for the program.
  4. The MoA is different from what you think it is: Mischaracterized mechanisms can paint a program into a corner. The compound unesbulin (PTC596) was originally packaged as causing downregulation of the B-cell–specific Moloney murine leukemia virus insertion site 1 protein (BMI1), leading to the eradication of “cancer stem cells” expressing this protein. As it happens, the downregulation of BMI1 protein levels and function is a secondary effect of the inhibition of tubulin polymerization by PTC596, which (not surprisingly) leads to mitotic arrest and cell death. While the cancer stem cell hypothesis rests on shaky ground at best (more about that later), the efficacy of microtubule-depolymerizing agents in cancer is well established. So, as the sponsor chugged along with a clinical development plan in leiomyosarcoma, based in part on an ongoing commitment to the BMI1-inhibition-kills-cancer-stem-cells narrative (and a tenuous connection between leiomyosarcomas and this mechanism), it missed the opportunity to explore the activity of its novel tubulin inhibitor (with a potentially superior toxicity profile) in a range of other cancers. As of now, unesbulin is no longer listed on the sponsor’s pipeline page, and there are no active clinical trials. The sponsor’s early commitment to the BMI1-inhibition-kills-cancer-stem-cells hypothesis complicated a pivot based on an alternative (and plausible) MoA. Sometimes, this situation can arise in a slightly different way, which brings us to the next point.
  5. The MoA was window dressing all along: Situations can arise where a company knows internally that its mechanism of action is not quite right. For example, a drug might act through a certain MoA which is not particularly “trendy.” In the example above, it’s way cooler (in some circles) to say, “this drug kills cancer stem cells” than to say, “yeah, it’s a tubulin depolymerizing agent.” So, let’s pretend (as a thought exercise) that in the situation above, the sponsor knew from the get-go that the BMI1-inhibition-kills-cancer-stem-cells hypothesis was wrong. The sponsor might still power ahead with the wrong mechanism, on the assumption that investors and clinical investigators might find it more compelling. What’s the harm in a little window dressing, after all? Misdirection about the MoA can drive wrong choices at every step along the way (as it did for unesbulin above). It seems bizarre to think that a company would intentionally run with the wrong MoA, but this happens more often than you’d think! We’ve seen it happen fairly often in our own careers, even if it’s almost always something that is only visible to people who are immersed in the science of a program. It usually ends badly.
  6. The data was just wrong: Not uncommonly, the basic science on MoA can be misleading for statistical or technical reasons. There have been a slew of reports in recent years showing that much of the published literature is not reproducible. While the ‘replication crisis’ was originally thought of as a social sciences issue, studies investigating reproducibility in the life sciences have revealed similar problems. In the next section, we'll discuss this in more detail.
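To make the stage gate in point 2 concrete, here is a minimal sketch of the kind of back-of-the-envelope target engagement check a translational pharmacology strategy starts from, using the standard 1:1 equilibrium binding model. Every number in it (the Kd, the occupancy threshold, the trough concentration) is a hypothetical placeholder, not data from any real program:

```python
# Minimal sketch: is the MoA plausibly engaged at the selected dose?
# All parameter values below are hypothetical placeholders.

def fraction_target_bound(free_conc_nM: float, kd_nM: float) -> float:
    """Equilibrium receptor occupancy for a simple 1:1 binding model."""
    return free_conc_nM / (free_conc_nM + kd_nM)

kd = 5.0                   # drug-target Kd (nM), hypothetical
required_occupancy = 0.90  # occupancy needed for efficacy, hypothetical
trough_free_conc = 20.0    # minimum free drug conc. at the RP2D (nM), hypothetical

occ = fraction_target_bound(trough_free_conc, kd)
print(f"Occupancy at trough: {occ:.1%}")  # 80.0%
if occ < required_occupancy:
    print("MoA may not be fully engaged at the RP2D -> re-examine dose before Phase III")
else:
    print("Occupancy target met at trough -> pharmacology supports proceeding")
```

The real-world version of this check is much harder (free drug at the site of action is not the same as plasma concentration, and the occupancy threshold has to be earned from preclinical data), but even this crude arithmetic, done early, can flag a program whose mechanism cannot plausibly be engaged at tolerated doses.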


Ways in Which Mechanistic Data Can Sometimes Be Wrong

The idea of mechanistic data being wrong opens up a rabbit hole, unfortunately, but it does happen. There are several ways in which this can come to pass, and it’s worth touching on them briefly.

  1. p-hacking and other questionable research practices: This happens when a large number of different hypotheses are tested with the same dataset, and the only one that’s published is the positive one. This is a misuse of the scientific method, and there are many different tactics used for it, sometimes even unconsciously by investigators. Some of these tactics (such as cherry-picking and fishing expeditions) are better known than others (HARKing), but they all serve the same purpose: turning negative results into positive ones. (It’s worth clicking on the two preceding links, by the way. The tactics used for p-hacking can be really insidious and are well worth being aware of!)
  2. False Discovery Rates: A related, but slightly more subtle, problem arises from an underlying property of p-values. Let’s say you are using a hypothesis test with a significance cutoff of 0.05 (which is really a false positive rate) to do all of your science. If true effects are rare (in other words, if most hypothesis tests should yield a negative result), then the occasional true positives will be swamped by false positives, which occur at a low rate but on a much larger pool of true negatives. This is why clinical screening tests for rare conditions aren’t that useful: most of what they find ends up being false positives. The same problem applies to published research and clinical trials (see the sketch after this list).
  3. Publication bias: Closely related to this, there is a tendency for results that contradict an established narrative to never see the light of publication. In some cases, this happens because an investigator chooses not to publish data that they see as unexciting. In other cases, the community as a whole, acting in good faith, sets the bar for peer review much higher for results that contradict its mental picture. As an example, there are many papers supporting the established paradigm of linked phosphorylation states in the Ras-Raf-Mek and PI3K pathways. If you try to reproduce that picture in your own hands, you’ll find that the story doesn’t pan out. However, there’s no paper (or even a review) that directly calls out the paradigm of oncogenic signaling painted in the literature as being wrong. Publication biases can profoundly shape our understanding of — and confidence in — any given scientific result.
  4. Wrong picture of the system: In some cases, research findings lack reproducibility because they ignore the underlying dynamics of the system. For example, mistaking a dynamic system for a static one (assuming your system is in a stable equilibrium state when it is oscillating), or mistaking a stochastic system for a deterministic one (studying the evolution of competing subpopulations of cancer cells using a network diagram of signal transduction, which assumes a homogeneous population), will lead to findings that don’t reproduce.
  5. Fraud: While rarer than the other causes, outright scientific malfeasance is another potential explanation for reproducibility issues. Alzheimer’s Disease research is an example of a field that has been plagued with this issue. Several high-profile papers that formed the basis of mechanistic hypotheses being tested in clinical trials turned out to be based on manipulated data. In a separate scandal, the head of the NIH’s Division of Neuroscience, a highly ranked scientist in the field, was found guilty of fabricating data. Fraud is a growing concern in scientific research, and it can pose a real threat to our understanding of a field.
  6. Lack of standards: At least some of the lack of reproducibility is very likely due to operational shortcomings that are potentially rectifiable. These include: a lack of reagent, cell line and assay validation; underpowered studies; incomplete or inaccurate methods sections; and lack of transparency in biospecimen procurement. As more attention is drawn to the replication crisis, hopefully the implementation of practical operational improvements in life science research will gain steam.
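To put numbers on the false discovery problem in point 2, here is a minimal back-of-the-envelope sketch. The prevalence of true effects, the power, and the significance cutoff are all illustrative assumptions, not estimates for any real field:

```python
# Minimal sketch of the false discovery problem from point 2.
# All numbers are illustrative assumptions.

prior_true = 0.05  # fraction of tested hypotheses that are genuinely true
alpha = 0.05       # significance cutoff (false positive rate on nulls)
power = 0.80       # probability of detecting a genuinely true effect

n_tests = 10_000
true_effects = n_tests * prior_true        # 500 real effects
null_effects = n_tests * (1 - prior_true)  # 9,500 nulls

true_positives = true_effects * power      # 400 real discoveries
false_positives = null_effects * alpha     # 475 spurious "discoveries"

fdr = false_positives / (true_positives + false_positives)
print(f"Fraction of 'significant' findings that are false: {fdr:.0%}")  # ~54%
```

In this toy scenario, more than half of the statistically significant results are wrong, even with every experiment run honestly at p < 0.05, simply because true effects are rare.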

On the whole, talk about a lack of reproducibility veers in one of two directions — either people tend not to think about it, or they take a position of epistemological nihilism (“you can’t trust any of the preclinical research, so let’s just take it into the clinic and see what happens”). Both approaches are flawed, as we will discuss in depth later. A better approach is to build a mechanistic understanding from papers spanning multiple labs and using a variety of different experimental techniques. Then, once you have a working hypothesis of your mechanism, focus in-house experimental efforts on confirming the aspects of the MoA that are critical for the development strategy. This approach is not a panacea — it won’t help you in situations where the field as a whole is working with a wrong mechanistic paradigm. But it will reduce your risk exposure when it comes to most of the other potential sources of wrong mechanistic information.

Before we go further, it’s worth pointing out that wrong turns with MoA can happen to anyone. The reasons behind a failure of MoA are complex and subtle, and some of it — like Moby Dick and Ahab — is deeply embedded in human nature.

At its heart, constructing a mechanistic narrative is an epistemological quest, driven by a desire to build a clear and rational picture from (often messy) biological and clinical data. This is a noble and often critical activity, but it can also be a trap in certain situations. Let's dig in a little more to see how that happens.

Commonly held beliefs don't have to be true.


Belief And Knowledge Are Different

We often think of biology and medicine advancing in a steady, linear fashion, with each new discovery adding to a solid foundation of knowledge. But, in reality, the path of progress is rife with detours and dead ends. Many beliefs that were once celebrated as breakthroughs would now seem strange to us — the idea that neurons formed a continuous network throughout the body (1906 Nobel Prize for Medicine) or the idea that lobotomies are a “cure” for depression or anxiety (1949 Nobel Prize), for example.

In the moment, though, widely held beliefs are indistinguishable from knowledge. Often, it is these broadly accepted beliefs that deserve the most scrutiny, much like the phrase “it is widely known that” in a paper (the one that usually appears without a reference to support it).

An example of a widely held, but misleading, belief in biology that has direct implications for drug discovery and development is the idea that antimitotic drugs (for example, microtubule-targeting drugs such as taxanes, and those targeting spindle proteins, such as Aurora and Polo kinases) cause mitotic arrest followed by apoptosis. While this is often the case in hematological tumors, live-cell video microscopy studies in the early 2000s overturned this simplistic picture in solid tumor cells. Antimitotic drugs cause a transient mitotic arrest in many solid tumor types, followed by heterogeneous terminal outcomes (death or stable cell-cycle arrest). It was also shown that even transient mitotic arrest can lead to the induction of DNA double-strand breaks, which profoundly impair tumor cell viability. If you’re not aware that neither prolonged mitotic arrest nor apoptosis is necessary or sufficient for mitotic disruption to cause a loss of tumor cell viability, you could draw some profoundly wrongheaded conclusions about how to develop antimitotic agents (as can be seen in this review, which jumped off the deep end on precisely this point, failing on several counts to grasp the lack of mechanistic significance of mitotic arrest, and teeing up straw man arguments such as: “studies have shown that inhibitors targeting mitosis cannot be expected to stabilize tumor growth by arresting cells in mitosis for prolonged periods of time”).

If you start from the wrong set of assumptions, you will eventually make your way to an incorrect inference. In the case of antimitotics, the belief that they exert their effect through interphase microtubule disruption has become fairly widely held in the community (despite the absence of toxicities in non-dividing tissues for mitotic kinase inhibitors).

Belief and knowledge are not the same thing: widely held beliefs can still be (and often are) wrong. This has significant implications for the utility of mechanistic work.


The Seductive Promise of Mechanistic Certainty

Most scientists in drug discovery and development come from a biology background, with Ph.D.s in reductionist fields (such as molecular biology, biochemistry, or genetics).

Things get really tricky when working on a new program, because the program is rarely in the exact field where we did our Ph.D. (Speaking for myself, I’m a biochemistry Ph.D. who studied mitotic spindle assembly — can you tell?)

As such, we are trained to think in terms of a stepwise linear pathway of events:

From my own experience working as a project lead for multiple programs in preclinical and early development, there is a real danger of latching on to this search image too early. Our brains are wired to seek certainty, which is a quality that is in short supply for drug discovery programs based on emerging science.

So, we jump into a new subfield, feet first, and there’s a stack of papers in front of us (literally or metaphorically speaking). And then, in the middle of all that sensory overload about proteins we’ve only vaguely heard of before and findings that seem to contradict each other in model systems that are all subtly different from each other, along comes a figure like this:

The Ras-Raf-Mek pathway

And of course, suddenly everything falls into place. There’s a logical structure here: a cascade of oncogenes that all phosphorylate each other in a neat linear sequence. It all makes sense now. Ras overactivation leads to Raf hyperphosphorylation, which leads to Mek hyperphosphorylation, which leads to Erk hyperphosphorylation.

Using this paradigm to read papers on the Ras/Raf/Mek pathway can make everything seem very straightforward. The MoA outlined here can guide patient selection, dose scheduling, combination therapies and so much more!

There’s just one small catch. It’s wrong.

Not wrong in the sense that every single thing about it is wrong. If you inhibit Raf signaling in a cell line that’s expressing pMek, pMek signaling will be inhibited as well, just like the diagram says.

But pretty much no cell line or tumor will ever show all four of these proteins constitutively hyperphosphorylated, because tumors in tissue culture and in the clinic are constantly evolving. This staggering heterogeneity is the signal, not the noise, in clinical cancers. Figures like this depict “frankencells,” cobbled together from a vast array of published papers that usually focus on one or two steps in the cascade in a handful of cell types. While the individual papers are sometimes (but not always) valid, the aggregate picture is misleading. If you try to reproduce that work in-house, with cell lines or in vivo, you will find that it doesn’t pan out. (These negative results, from my own experience running in vitro and in vivo pharmacology teams working on this pathway, are not publishable!) To make matters worse, the last step in that sequence (from Erk to “growth, proliferation and survival”) is vague. While Erk inhibition does kill and arrest cells, that outcome has been demonstrated to be associated with lesions that are known to be lethal to cells (such as DNA double-strand breaks and mitotic disruption). “Pro-survival signals” are as useful in a mechanism of action as they would be in day-to-day life. We don’t say “don’t damage your brain, it’s sending pro-survival signals.”

None of this is to argue that Raf lacks value as an anticancer target, or that Ras-Raf-Mek-Erk signaling is not a thing, and it certainly doesn’t argue against the use of MoA in drug development. But the “oncogene addiction” paradigm that this mechanistic diagram outlines has not borne out useful insights in clinical development. Just for this one pathway, hundreds of programs (and many billions of dollars) have been poured down the drain in the labs of Cambridge alone. While Raf inhibition has been successful, Erk inhibitors have never been approved, Mek inhibitors have found use mainly as combination partners for Raf inhibitors, and Ras itself resisted drugging for decades (with the recent KRAS G12C inhibitors a narrow exception). Patient selection based on the simplistic view of MoA that the oncogene addiction paradigm promises has not succeeded either, despite review papers promising us that it was just around the corner for decades now. (Incidentally, one can argue that the “frankencell” problem is particularly rampant in Nature Reviews papers, which seem to specialize in promoting a false sense of certainty around mechanism of action.)

In neurodegeneration, the beta-amyloid hypothesis of Alzheimer’s disease has its origins in the discovery of the disease over a hundred years ago, dating back to an era when scientists believed that the structure of the brain determined its functions. This theory posits that accumulation of beta-amyloid plaques is the primary driver of neurodegeneration. The hypothesis has been controversial for decades, and hundreds of clinical trials based on it have failed. Approved therapies based on the hypothesis have shown incremental benefit, at best, with significant toxicities. Many drug candidates have shown a reduction in beta-amyloid plaques without demonstrating an efficacy benefit, and the underlying mechanistic hypothesis has grown increasingly complex. Making matters worse, the field as a whole has been plagued by findings of fraud. Still, the Alzheimer’s research establishment has maintained a rigid belief in the validity of the mechanistic hypothesis, billions of dollars in NIH funding are awarded to research focused on it, and clinical trials based on the beta-amyloid MoA crop up every year. The mechanistic hypothesis is in trouble, and a dogmatic belief in it is likely holding back Alzheimer’s Disease research. But you might not realize that if you don’t read the coverage with a critical eye!


Epicycles (orbits within orbits) were (allegedly) invented by astronomers committed to the wrong model of the Solar System.


Linguistic Epicycles: Hallmarks of a Paradigm in Crisis

For both the Oncogene Addiction and the Beta Amyloid hypotheses, new studies have tended to lead to a subtle reframing of the hypothesis, rather than confirming it. For example, when patient selection efforts that flowed directly from oncogenic pathways failed to show clinical benefit (e.g., Mek-overexpressing tumors don’t show increased clinical benefit with Raf inhibitors), the mechanistic narrative changed. Some would argue that the “oncogene addiction” was context dependent, others spoke of “oncogenic shock,” and still others spoke of “oncogene amnesia.” Over time, the crystal clarity of the original hypothesis has become muddied with caveats. As each new finding undermines a different tenet of the mechanistic hypothesis, we can expect to see it reframed yet again.

Similarly, with the Beta Amyloid hypothesis, proponents continue to argue for the relevance of plaques, with the idea morphing from “plaques are causal in neurodegeneration” to “plaques are associated with neuroprotection.”

These mechanistic “epicycles” keep the hypothesis alive, but at the cost of adding ever more complexity to the original mechanistic hypothesis. Adding (metaphorical) epicycles is a sure-fire way to ensure that your theory doesn’t hold up to Occam’s razor.

In Thomas Kuhn’s seminal book, “The Structure of Scientific Revolutions,” he describes a "paradigm in crisis" as a scientific framework or model that no longer adequately explains observed phenomena or resolves critical problems in its domain. There are some tells that can clue you in to when a proposed MoA, no matter how widely accepted by practitioners in the field, is a paradigm in crisis:

1. Accumulation of anomalies: Anomalies pile up, and persistent issues crop up that the canonical paradigm cannot explain. These anomalies are not easily dismissed as experimental errors or minor deviations.

2. Loss of predictive power: As new data show up, the paradigm itself gets updated. In other words, predictions flowing from the paradigm turn out to be wrong, and the paradigm is changed after the fact to accommodate the new data.

3. Special pleading: The paradigm is ‘patched up’ with assumptions or ad hoc modifications that amount to case-by-case reasoning or special pleading. The simplicity of the original paradigm is lost in a thicket of exceptions.

4. Decline in confidence: There are rumblings of skepticism about the paradigm’s validity. You usually have to go digging for these in second- or third-tier journals, as Nature and Science will remain committed to the orthodox view to the bitter end.

5. There are other explanations: Alternative frameworks begin to appear, often driven by new ideas, methodologies, or technologies.

Eventually, the crisis resolves with the emergence of a new paradigm, but this process can take decades. (Kuhn’s book is a real page-turner, and well worth the time if you’re looking for a relatively quick read that makes you think about the way in which we learn about things in science.)

What does this mean for those of us engaged in or using mechanistic work in drug discovery and development? Academic scientists make their homesteads on a patch of intellectual ‘land’ and farm it their whole lives. Those of us in industry, on the other hand, are nomads. New projects involve new diseases, new pathways and a fresh stack of papers to master. We rely on the literature published almost entirely by academic scientists in order to find our way around unfamiliar lands. The paradigm is the implicit worldview that influences how data is interpreted in a field.

Learning to spot the tells of a paradigm in crisis is a valuable skill in this context. Coming back to our original example — say you’re learning about a new field, and the first paper you read (the Nature Reviews paper, of course!) makes it all look very straightforward. Then you read a set of reviews (in lower-ranked journals) and notice that there are subtle tweaks to the original mechanism, and that the experimental papers are even more contradictory. If you hear it said that microenvironment and context are important in understanding the MoA, your antennae should go up. You might be looking at a paradigm in crisis.


The Devil is in the Details When It Comes to MoA

Wrong data, paradigms in crisis, false assumptions. It all sounds a bit discouraging.

At this point you might find yourself thinking, “so then why not just put it in patients and see what happens”? Well, it turns out that this is a bad idea (too)! The big problem with an excessive focus on empiricism, especially when it’s conducted in an ad hoc way, is that you might not end up learning anything at all from the clinical trial if it fails. Epistemic nihilism is self-fulfilling.

During both preclinical and clinical drug development, there are many choices that need to be made (dose, route, schedule, indication) – these choices are easy to make, but difficult to get right. We’ve discussed before that being able to rationally make those choices is a big part of what drives the difference between the success and failure of programs.


A pharmacological audit trail helps map out the steps between drug dosing and efficacy (or toxicity).


The success or failure of a drug ultimately hinges on the Therapeutic Index (TI) — the ratio between a drug’s toxic dose and its effective dose. The make-or-break question during development is: at the Maximum Tolerated Dose (MTD), is there enough drug at the site of action to inhibit the target sufficiently to cause disease modulation? Because the TI is determined by the choices made during development, it is critical to be able to deconstruct the impact of specific choices on the TI, and breaking down the action of the drug into steps makes this process easier. Setting up a pharmacological audit trail that links dose to pharmacokinetics (PK) to pharmacodynamics (PD) is a key piece of what drives rational decision-making during development. (See this article for more on that topic.)
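As an illustration of what the quantitative backbone of such an audit trail can look like, here is a minimal sketch linking dose to PK to PD, assuming a one-compartment PK model with first-order elimination and a simple Emax model for target inhibition. The half-life, volume of distribution, IC50, and the 80% inhibition threshold are all hypothetical placeholders:

```python
# Minimal sketch of a dose -> PK -> PD audit trail.
# One-compartment IV bolus PK, Emax (Hill slope = 1) PD.
# All parameter values are hypothetical placeholders.

import numpy as np

def concentration(dose_mg, t_hr, vd_L=50.0, half_life_hr=8.0):
    """Plasma concentration (mg/L) over time after an IV bolus."""
    ke = np.log(2) / half_life_hr   # first-order elimination rate constant
    return (dose_mg / vd_L) * np.exp(-ke * t_hr)

def target_inhibition(conc_mg_L, ic50_mg_L=0.1):
    """Fractional target inhibition from a simple Emax model."""
    return conc_mg_L / (conc_mg_L + ic50_mg_L)

t = np.linspace(0, 24, 25)  # one dosing interval, sampled hourly
inhib = target_inhibition(concentration(dose_mg=100, t_hr=t))

# Audit-trail question: for what fraction of the dosing interval does
# target inhibition stay above a (hypothetical) 80% efficacy threshold?
coverage = np.mean(inhib > 0.80)
print(f"Time above 80% inhibition: {coverage:.0%} of the dosing interval")  # 76%
```

Each link in this chain (dose to exposure, exposure to target inhibition, inhibition to outcome) is a separately testable hypothesis, which is exactly what makes the audit trail useful when something goes wrong.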

As long as a PD biomarker is on the causal chain between target inhibition and disease modulation, it retains its utility in the audit trail. As you can see from the diagram, the PD biomarker in the top panel must be modulated (at least to some extent) in order for the downstream effect to kick in. In the bottom panel, this is not the case. We know this part intuitively. First-generation antihistamines such as diphenhydramine block histamine signaling and relieve itching, but they also make you drowsy. Second-generation antihistamines such as loratadine (Claritin) revolutionized the treatment landscape: released in 1993, loratadine was non-sedating and safe while still being highly effective against allergies (a wide therapeutic index). So, if you used sedation as a biomarker for antihistamine activity, you might have been able to make rational decisions for diphenhydramine. In particular, if you took Benadryl and weren’t feeling even a bit drowsy, chances are you might not have taken enough. On the other hand, the sedation was not causal for the antihistamine effect, so the lack of sedation with Claritin could not be used to infer a lack of effect. The context matters, and non-causal biomarkers aren’t really all that useful.


The PD biomarker in the top panel is causal, whereas the one in the bottom panel is not.

Now here’s where things get interesting: a short while back, we discussed the Ras-Raf-Mek pathway, and we talked about how the oncogene addiction picture of it was essentially aspirational, as the “addiction” that the mechanism promised never bore out in practice. Because tumors are not actually “addicted” to the pathway, the inhibition of downstream pathway biomarkers in the cascade (for example phospho-Mek (pMek) or phospho-Erk (pErk)) is not sufficient to guarantee efficacy in a tumor cell line, xenograft model or patient.

But Ras-Raf-Mek-Erk signaling is a real thing — Mek and Erk lie on the causal chain for Raf signaling. In other words, the inhibition of the downstream markers is necessary for efficacy. This is a subtle but critical point — necessary but not sufficient biomarkers still provide useful information. If you have a Raf inhibitor that fails to inhibit pMek or pErk at clinically relevant concentrations, that inhibitor is unlikely to show robust efficacy in the clinic. So, such biomarkers can still be used to set up go/no-go decisions in the clinic.
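Here is a minimal sketch of that decision logic, treating pErk inhibition as a necessary-but-not-sufficient biomarker; the 80% threshold and the biomarker name as used here are hypothetical stand-ins for whatever a program’s translational data actually support:

```python
# Minimal sketch: go/no-go logic around a necessary-but-not-sufficient
# PD biomarker. The threshold and biomarker are hypothetical stand-ins.

def go_no_go(perk_inhibition_at_mtd: float, threshold: float = 0.80) -> str:
    """Failing the biomarker is a clean no-go; passing it only earns the
    program the right to test efficacy, not a promise that it will work."""
    if perk_inhibition_at_mtd < threshold:
        return "No-go: pathway not engaged at tolerated doses"
    return "Go: pathway engaged; efficacy remains an open question"

print(go_no_go(0.35))  # No-go: pathway not engaged at tolerated doses
print(go_no_go(0.92))  # Go: pathway engaged; efficacy remains an open question
```

The asymmetry is the whole point: a failed necessary biomarker kills the hypothesis cleanly, while a passed one leaves the efficacy question to the trial itself.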

It may sound contradictory at first blush to say that the Oncogene Addiction hypothesis has failed to deliver on its promise, but biomarkers based on the Ras-Raf-Mek pathway can still provide utility in clinical development. The nuance is important, though. The key to understanding this discrepancy is that tumors are not addicted to Raf — they don’t require it for survival. That said, if you're trying to develop a selective Raf inhibitor and it fails to inhibit the pathway in patients at the MTD, you're not going to see efficacy.


Biological and pharmacological views of the same pathway operate at different levels of abstraction.

The “biological” view of the pathway on the left is based on a wrong mechanistic hypothesis (“Oncogene Addiction”). Tumors that are sensitive to Raf inhibition (which may or may not be the ones overexpressing Raf) can quickly evolve to develop resistance to it, making Raf overexpression a poor basis for patient selection. On the other hand, the “pharmacological” view of the pathway on the right is using the downstream biomarkers to infer the extent of pathway inhibition. This can be incredibly useful during development, provided it is interpreted with care. This is a subtle point, but a crucial one. As long as a biomarker is on the causal chain, even if it’s necessary but not sufficient, it can provide useful information. Rigor and caution can draw actionable insights where false confidence might sink a program.


Still a pig.


A Systems Model of a Wrong Mechanism is a Wrong Systems Model

At Fractal Therapeutics, we focus on model-based drug development, so at this point you’re probably expecting me to say, “luckily, mathematical modeling can fix your problems for you!”

I’m sorry to inform you that a pig with lipstick on it is (checks notes) still a pig. Getting the mechanism wrong, as we’ve seen earlier in this article, can tee up a set of faulty assumptions that then drive the wrong decisions. Even a subtle difference in the understanding of how a mechanism works (for example, viewing Raf as an essential gene whose inhibition can kill some tumors versus an oncogene providing essential “growth signals” to “addicted tumors”) can move a putative MoA from the asset to the liability column for a drug development project.

It goes without saying, then, that large systems models of mechanisms should be approached with extreme caution during drug discovery and development. Many such models are built with correlative data (usually big data), which lack the ability to assign causality to elements in the model and can sometimes be developed entirely from in vitro datasets. Such complex “mechanistic” models can often make great Nature or Science papers, but they are not suitable for decision-making in the real-world setting of drug development. Using such models, even as window dressing, can tee up inaccurate expectations for a program and drive teams to make mistakes in their development choices. Some models are built in a high-throughput fashion with “frankencells” cobbled together from published literature. In much the same way as Nature Reviews articles, these models cause more problems than they solve!

Done right though, model-building can be very useful as an exercise when working with a putative MoA, especially if the process of model building clearly identifies the assumptions underlying the MoA and surfaces them for discussion. Simply articulating each assumption explicitly and identifying the specific data that supports each assumption is a good way to understand the limitations of an MoA. Of course, you don’t need a mathematical model for that! That said, an MoA where the causal links have been clearly identified and supported by experimental data can benefit from systems modeling, as the behaviors of the system can be laid out more thoroughly and explored. This is tremendously useful for complex modalities such as ADCs or radiotherapeutics, where the individual elements of the model are not in doubt, and the system is capable of behaving in ways that are unintuitive. Models that focus on pharmacological mechanisms (target binding, internalization, payload release) are far more tractable to these kinds of systems approaches than those that focus on biological mechanisms (pathway signaling).
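To make the contrast concrete, here is a minimal sketch of the kind of pharmacology-focused systems model described above: a toy binding-and-internalization model for an ADC-like molecule, written as ODEs. The structure (binding, internalization) reflects the kinds of steps mentioned in the text, but every rate constant and initial condition is a hypothetical placeholder:

```python
# Minimal sketch of a "pharmacological" systems model: target binding and
# internalization for an ADC-like molecule. All parameters are hypothetical.

import numpy as np
from scipy.integrate import solve_ivp

def adc_model(t, y, kon=0.1, koff=0.01, kint=0.05):
    """States: free drug (D), free target (T), surface complex (C),
    internalized complex (I). Units are arbitrary."""
    D, T, C, I = y
    bind = kon * D * T - koff * C   # net formation of surface complex
    internalize = kint * C          # complex taken into the cell
    return [-bind, -bind, bind - internalize, internalize]

y0 = [10.0, 1.0, 0.0, 0.0]  # initial amounts: excess drug, some target
sol = solve_ivp(adc_model, (0.0, 100.0), y0, t_eval=np.linspace(0, 100, 5))

for t_i, internalized in zip(sol.t, sol.y[3]):
    print(f"t = {t_i:5.1f}   internalized complex = {internalized:.3f}")
```

Because every arrow in this toy model corresponds to a measurable pharmacological step, each rate constant can, in principle, be pinned down experimentally, which is exactly what separates this kind of model from a “frankencell” signaling diagram.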

At its heart, though, understanding and leveraging an MoA effectively is about rigorous pharmacology and attention to detail, and computational models can play – at best – a supporting role in the exercise.


An MoA Is A Hypothesis, And Clinical Trials Are Its Test

If there’s one thing to take away from this article, it is this — a mechanism of action is a hypothesis. It’s validated at the point at which mechanistic PD is connected to disease modification in the clinic, according to the predictions that were made from it. The role of preclinical work should be to frame the clinical trial in a way that it acts as a clean test of the hypothesis.

The epistemological aspect of deriving an accurate MoA from preclinical data and understanding exactly what is (and isn’t) known about the MoA is crucial, as false certainty can paint a program into a corner. In a subsequent article, we’ll put these ideas into practice, talking about the tactical aspects of moving a candidate drug forward under conditions of low certainty about its MoA.

Like Captain Ahab’s obsession with Moby Dick, fixating on the wrong MoA can lead to ruin. An inaccurate picture of the MoA can quickly sink a program, leading to poor choices for dose schedule, dose route, patient population, and indication. Having an accurate picture of the MoA of a candidate drug, on the other hand, serves as insurance for the program, adding weight to the clinical results if they fall short due to trial design limitations and facilitating rational troubleshooting.

Anchoring drug development in careful inference and epistemic rigor allows a project team to extract what they need from an MoA to move a program forward and helps them avoid the fate of the Pequod.



-Arijit Chakravarty and Madison Stoddard

