What Does This Thing Even Do? How To Discover And Develop A First-In-Class Drug

First-in-class drugs are enigmas, puzzles that guard the ultimate prize in drug development. A breakthrough therapy that leverages a biological mechanism that has never successfully been targeted before can truly change lives. Being first in class has also been consistently shown to provide a market advantage – all other factors being equal, market share declines the further away a molecule is from being first to launch. The average market share for a first-in-class drug is five times higher than that of a fifth-in-class drug. Gleevec, Ozempic, Keytruda, Claritin – many first-in-class drugs were transformative for patients and blockbuster drugs as well. Solving the first-in-class puzzle can be an attractive challenge if you’re up for it.

However, the level of technical risk for first-in-class drugs is at least an order of magnitude higher than for me-too molecules. Every drug candidate has to clear at least two basic hurdles on the road to approval. First, the drug must modulate the disease that it’s being developed for with a therapeutic index (TI, the gap between the efficacious and toxic dose) that’s broad enough to be acceptable given the risk-benefit ratio. (An acceptable TI for oncology is often little more than one, whereas a lifestyle drug aimed at chronic use needs a TI that’s an order of magnitude or two higher). Second, it should improve upon the existing therapeutic options for the disease it’s being developed to treat.
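As a concrete (if simplified) illustration, the classic way to quantify the TI is the ratio of the median toxic dose to the median effective dose. The sketch below uses invented numbers purely to show the arithmetic; in practice, TI is estimated from exposure rather than dose and depends heavily on which endpoints are chosen as "toxic" and "effective":

```python
def therapeutic_index(td50: float, ed50: float) -> float:
    """Classic therapeutic index: median toxic dose (TD50)
    divided by median effective dose (ED50)."""
    return td50 / ed50

# Hypothetical numbers, for illustration only.
oncology_ti = therapeutic_index(td50=12.0, ed50=10.0)     # ~1.2: narrow margin
lifestyle_ti = therapeutic_index(td50=5000.0, ed50=50.0)  # 100: broad margin
```

A drug with a TI near one must justify itself through risk-benefit (as in oncology), whereas a chronic-use drug needs a much wider margin.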

Those of us who’ve been in the business know that this deceptively simple list is a tall order. It’s not trivial to project a therapeutic index for a candidate drug from preclinical to clinical settings, much less compare it to the competition. As a result, all too many development programs end with expensive Phase II clinical trial failures (resulting from a failure to clear the first hurdle above), or worse, Phase III failures (resulting from failing the second hurdle).

A first-in-class drug faces an additional, pivotal question: will modulating the drug target even yield a meaningful therapeutic effect? For a disease that lacks good therapeutic options (high unmet medical need), this question can feel daunting.


Can it even be done? Often, with such diseases, the need is unmet for a good reason. Aspirations alone are not enough – many companies have fallen by the wayside while trying to advance “game-changing” treatments for pancreatic cancer, Alzheimer’s, acute heart failure and COPD (Chronic Obstructive Pulmonary Disease). Attacking such large problems requires more than just a bold vision.

Even beyond the fundamental mechanistic question, there are several other challenges lurking along the way for developers of first-in-class drugs. First, they lack benchmark data. For me-too drugs, developers can leverage the clinical and preclinical data from earlier drugs targeting the same pathway or disease. First-in-class drugs, on the other hand, lack this safety net. Second, the mechanism of action of the drug target is based on incomplete or evolving scientific understanding, increasing the risk of unexpected outcomes (and adverse events), and requiring the development of novel biomarkers. Third, drugs targeting diseases with unmet clinical need often have subjective or poorly defined clinical endpoints. Finally, first-in-class drugs also face greater regulatory scrutiny, as agencies may demand comprehensive data on safety and efficacy due to the novel mechanism or target. Taken together, all of these challenges suggest that first-in-class drug programs face a lower likelihood of success relative to me-too programs. Especially given the greater upside, it can be tempting to view first-in-class drug programs as an all-or-nothing gamble.

All too often, teams approach first-in-class drug discovery and development with an attitude that’s more befitting of an adventurer making their way to a dragon’s lair. They start with a Big Idea – inhibiting Protein X will kill pancreatic tumors (or prevent dementia, take your pick). This idea is often based on a single paper, which usually made its way into Nature or Science. (If it’s published in Nature or Science, it’s got to be right! Right?) They’ll play animal-model-roulette until they find data that supports their claim. And from there, they venture boldly into the clinic, in hopes of a favorable outcome. The story usually ends badly – with the passing of time, the bones of their idea are tossed onto the pile with all of the others.

When a first-in-class program meets a grisly end in Phase III, the seeds of that failure were often sown years before. The outcome of a program is heavily shaped by choices made during discovery, preclinical or early clinical development. Many of us in pharma or biotech R&D enter the field with one of two backgrounds – as biologists or as doctors. These fields come with their own perspectives on drug development, and as it happens, these perspectives are often the source of the problem. Biologists (or others trained in reductionist sciences) often focus intensely on "mechanistic" reasoning for their program. Doctors (or people with clinical experience), on the other hand, privilege clinical "data" – often meaning individual patient responses – above all else. While each of these perspectives is valuable, neither forms a sound basis for a decision-making framework during drug discovery and development.

Let’s look in a little more detail at how these perspectives can drive a program to make the wrong choices.


Mechanistic Pursuit Can Lead You To A Dead End

The etiology (cause) and pathophysiology (mechanism of progression) of disease are often framed in overly simplistic ways by those of us with formal training in reductionist biology. We tend to view disease as a direct consequence of a malfunction in a single protein or pathway. As a biochemistry Ph.D., I can testify that this is mostly how we were taught about disease pathology in grad school (pardon the jargon).

Pathway diagrams, which are deeply rooted in this kind of thinking, are actually very useful for certain kinds of diseases. There are etiologies which in fact do look like this, and where a clear causal chain exists, mechanistic biomarkers can and do contribute substantially to program advancement.

Unfortunately, not all biological mechanisms can be described in terms of deterministic biochemistry. Some diseases, such as cancer, have a strong stochastic component due to ongoing somatic evolution. Other diseases, such as lupus and COPD, are polygenic and may in fact be clusters of related pathologies with differing etiologies.

For this and other reasons, the mechanistic hypothesis that a program is working with can end up being completely wrong. (One can make the case that a mechanism of action is more reliable when defined at the level of cell biology or pathophysiology than when defined at the level of biochemical pathways, but this is a conversation for another day!) Working with the wrong mechanistic hypothesis (even if the team is aware that the hypothesis is wrong and is only talking it up in public for ‘window dressing’ purposes) can profoundly handicap a program.

The amyloid hypothesis in Alzheimer’s and the oncogene addiction hypothesis in cancer are two prominent examples of the wrong mechanistic paradigm leading to low success rates within a therapeutic area. (Interestingly, each of these paradigms still has its proponents, and you can still find new companies cropping up every year based on the original hypothesis, as if there weren’t in fact decades of failure preceding them. Dragon’s lairs have a way of attracting optimists.)

Mechanism of action (MoA) can be an invaluable tool in drug discovery and development, but first-in-class therapies (especially for diseases with unmet medical need) operate under conditions of uncertainty when it comes to the mechanism. For this reason, the MoA needs to be treated as a hypothesis to be tested in the clinic, rather than as a roadmap for the program. (This is a big topic, and if you want to dive deeper into the rabbit hole of mechanism in drug development, check out this article that I wrote.)


Rigid Empiricism Is Often A Road To Nowhere

At the other end of the spectrum, those with formal training in medicine can zero in on clinical “data”. An excessive, knee-jerk focus on patient response can quickly lead a program into a swamp of uncertainty.

On the surface of it, the idea of “following the data” has intuitive appeal. After all, clinical data is the only thing that really matters for approval. Companies and projects that face financial pressures (or a pressure to ‘focus’) will often rush a molecule into the clinic, with a mindset of “Why not put it in patients and just see what happens?”

Usually, though, this empirical approach ends badly. A typical Phase I or Phase II outcome is that some patients show signs that can be interpreted as response at the Maximum Tolerated Dose (MTD), while most don’t. There are two big problems with this outcome.

The first is that, in the absence of a comparator arm, it’s hard to separate “efficacy signals” from noise. In oncology, for example, many cancers are fairly slow-moving, and some proportion of patients can show stable disease without it necessarily being attributable to the treatment. In the CNS therapeutic area, placebo effects can sometimes be very large, and some diseases lack quantitatively defined endpoints.

The second problem is that when things go wrong, it’s very hard to figure out how to fix them. When a team is faced with disappointing clinical results, a trial design that was based on pure empiricism offers few clues about how to achieve a better outcome. The data that would allow the team to deconstruct the outcome and ask what-if questions around dose, schedule or patient population doesn’t just show up, it has to be planned for at the outset.

Jumping straight to “let’s just see what happens” betrays a lack of understanding of what a clinical trial is at its heart: an experiment. An experiment that lacks a hypothesis or controls is not really an experiment – it’s a roll of the dice. Expecting luck to be on your side specifically is a form of magical thinking, and many drug developers pay the price for it. The statistical properties of clinical trials make it risky to advance programs through clinical development based on empirical results alone.

Companies that rush into a clinical trial hoping to let the “data” guide them quickly find that a trial that’s not set up to learn anything teaches you exactly what it was set up to learn: nothing.


The Epistemology Of Drug Discovery

Drug discovery, especially for a novel target or a disease with no existing treatments, is at its heart an epistemic challenge. That might sound esoteric, but the difference between success and failure in novel drug discovery comes down to epistemology – the study of knowledge: how we acquire it, validate it, and understand its limits. Careful attention to “how do we know what we know” is critical for decision-making in this setting.

Learning to Identify Epistemic Gaps

When translating a discovery from preclinical studies to humans, the critical question is: What information are we missing? This step is essential because the assumptions we make about the relationship between preclinical and clinical data often determine the design choices made by a program.

For instance, suppose your candidate drug demonstrates efficacy in mice. That’s exciting, right? Okay, but now you have several design choices to make going forward:

  • How should the drug be dosed in humans? How often, how much?
  • What route of administration will optimize its effectiveness?
  • What is the right patient population or indication for the drug?

In the worst case, project teams will play animal-model-roulette until they find preclinical data that they like. Then, locked in by the need to be ‘consistent’, they will carry the schedule that was efficacious in mice straight over into humans.

But of course, mice are not tiny humans with tails. At a minimum, the pharmacokinetics (PK) of the two species differ, so a twice-weekly schedule in mice could look very different from the same schedule in humans. And it only gets worse from there. For many diseases, efficacy in preclinical models does not translate to humans. This has been well demonstrated in many CNS diseases (Alzheimer’s and schizophrenia are two well-known examples), as well as in immunology and immuno-oncology. (Not all animal models lack predictive value – mouse xenografts have repeatedly been shown to be highly predictive of outcome with traditional cancer therapies, for example.) In general, the problems with animal efficacy models are not controversial in the drug discovery and development community. Even so, when you dig into the details of a drug discovery and development program, you will often find that the justification for the clinical dose schedule was based on the animal efficacy model. While animal model data can be helpful for building excitement for a program with investors and clinicians, using it in a translational context can be a double-edged sword. Drawing the wrong inferences from animal data can quickly derail a program.
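To make the species gap concrete: even before any PK modeling, the standard FDA body-surface-area conversion (HED = animal dose × Km_animal / Km_human) shows how far a mouse dose sits from its human counterpart. The sketch below is illustrative only, and is no substitute for proper allometric or PK/PD modeling:

```python
# Standard FDA body-surface-area conversion for a first-in-human dose:
#   HED (mg/kg) = animal dose (mg/kg) * (Km_animal / Km_human)
# Km factors (from the FDA starting-dose guidance): mouse ~3, rat ~6, human ~37.

KM = {"mouse": 3.0, "rat": 6.0, "human": 37.0}

def human_equivalent_dose(dose_mg_per_kg: float, species: str) -> float:
    """Convert an animal dose to its human-equivalent dose (HED)."""
    return dose_mg_per_kg * KM[species] / KM["human"]

# A 10 mg/kg dose that was efficacious in mice corresponds to roughly
# 0.81 mg/kg in humans, before any PK or PD differences are considered.
mouse_hed = human_equivalent_dose(10.0, "mouse")
```

And that is just the dose level; clearance, half-life and target biology can shift the picture much further.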

Bridging the gaps with model-based drug development

So how, then, does one go about addressing dose schedule questions from a translational perspective? In simple terms, it comes back to the epistemic gap. What do you know about the program? This should determine your translational strategy.

The table below provides some scenarios that are worth considering – the list is not exhaustive, and there’s a lot more to each strategy than can be explained in one sentence. That said, this is the CliffsNotes version of how to use what is known to make the right decisions for your program:


As this table shows, leveraging preclinical or clinical PK/PD, PK/efficacy and PK/toxicity models can help bridge the epistemic gap and keep program decisions sound at each stage. One thing that should leap out is that (almost) every situation has an appropriate strategy for making the right decisions in the face of epistemic gaps.

Often, teams will face pressure to paper over the epistemic gaps with false certainty. This false certainty can extend to overselling animal model data or the understanding of the MoA. There is a belief sometimes that oversold animal data or an aspirational MoA is harmless, as it serves as “window dressing” that may build enthusiasm for the program.

As mentioned previously, window dressing sets up programs for failure, because it places a false mental picture in the heads of teams and stakeholders about the rational path forward. For example, if you are running a drug discovery program for an antibody-drug conjugate (ADC) that targets a receptor kinase that’s overexpressed in cancer, and you sell the idea that the overexpressed gene is an “addictive oncogene”, this may make your ADC look more appealing because you now have two mechanisms (the cytotoxic warhead and the addictive oncogene) instead of one. You can spin a good story about overcoming resistance in oncogene-addicted tumors with the ADC, and things will be good for a while. The problem arises when the link between target expression and tumor response turns out (as it always does) not to follow the aspirational mechanism, or when combinations based on the expected MoA fall short due to tolerability issues. At this point, pivoting to a simpler story (the ADC binds a cell-surface target, delivering a cytotoxic payload) may be challenging for the program. To the extent that there’s a disconnect between the window dressing and reality, that disconnect will usually widen over time, making the marginal utility of the window dressing progressively more negative.

Working with a clearly identified epistemic gap makes life a lot easier. Having a quantitative understanding of your PK, relying on one or more downstream and causally relevant mechanistic biomarkers (if needed), and working with disease and toxicity outcomes that are objective and graded – all of these things can simplify decision-making during new drug discovery and development.

Understanding the shape of the dose-response curves tells you a lot about the drug’s behavior in the clinic. The relationship between dose and PK, PK and PD, and PD and efficacy contains critical information. Systematically modeling these relationships helps predict clinical outcomes and refine decision-making at every step. What do we expect to achieve at a given dose? Are these expectations realistic, and what do they imply about next steps?
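One minimal sketch of this kind of modeling is the sigmoidal Emax (Hill) model, a standard workhorse for concentration-effect relationships. The parameter values below are hypothetical, chosen only to illustrate the shape of the curve:

```python
def emax_effect(conc: float, emax: float, ec50: float, hill: float = 1.0) -> float:
    """Sigmoidal Emax (Hill) model: a standard description of the
    concentration-effect relationship in PK/PD analysis."""
    return emax * conc ** hill / (ec50 ** hill + conc ** hill)

# Sanity check: at C = EC50 the model predicts half-maximal effect,
# whatever the Hill coefficient.
half_effect = emax_effect(conc=25.0, emax=100.0, ec50=25.0, hill=2.0)  # 50.0
```

Fitting such a curve to preclinical or early clinical data lets you ask concretely what fraction of the maximal effect a proposed dose is expected to achieve, and how steeply that expectation changes with exposure.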

A lot of this comes down to being able to map what you know about the system into the clinical context when appropriate. (Or developing a clinical framework to extract this information from the trial data). A hybrid approach is to use the preclinical data in a formal way, as a Bayesian prior, to efficiently bridge the epistemic gap. (We talk about this at more length in a recent white paper of ours).
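As a bare-bones sketch of the Bayesian idea (a real application would use hierarchical population PK/PD models, not this toy): a preclinical estimate of a parameter can serve as a prior that clinical data then updates, with each source weighted by its precision. All numbers below are invented for illustration:

```python
def posterior_normal(prior_mean: float, prior_var: float,
                     data_mean: float, data_var: float) -> tuple[float, float]:
    """Conjugate normal-normal update: combine a prior estimate with
    new data, weighting each by its precision (1 / variance)."""
    w_prior, w_data = 1.0 / prior_var, 1.0 / data_var
    post_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
    post_var = 1.0 / (w_prior + w_data)
    return post_mean, post_var

# Invented example: a log-EC50 prior from preclinical work (mean 2.0)
# combined with an equally precise clinical estimate (mean 3.0)
# lands halfway in between, with reduced uncertainty.
mean, var = posterior_normal(2.0, 1.0, 3.0, 1.0)  # (2.5, 0.5)
```

The practical payoff is that sparse early clinical data doesn’t have to stand alone: the preclinical knowledge enters the analysis formally, rather than as an informal argument.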

There’s much more to say about this topic, as it’s a critical one for getting drug development right (not just in the first-in-class setting!). See these articles from us that touch on related concepts and keep an eye out for upcoming articles on this topic. (Or feel free to reach out to us, if you’d like to discuss more!)

Bridging the epistemic gaps during drug development has another important practical consequence – it makes it easier to interpret negative outcomes. This ability to “fail gracefully” is critical for drug development programs, especially in small companies that often lack a Plan B if their pivotal trial fails.


The Importance of Failing Gracefully

Most drug discovery and development programs fail, but all failures are not equal. Trial failure does not always lead to program failure – if the reason for failure can be pinpointed, a follow-on trial can sometimes rescue a program.

Even a failed clinical asset isn’t worthless. History is filled with examples of drugs that were repurposed or resurrected based on careful post-failure analysis:

  • Gleevec (imatinib): Initially explored for its activity against other kinases (PDGFR, c-KIT), it was ultimately developed as a BCR-ABL kinase inhibitor that revolutionized the treatment of chronic myeloid leukemia.
  • Viagra (sildenafil): Originally failed as a treatment for angina, it found a highly lucrative application in treating erectile dysfunction thanks to a side effect observed during trials.

These outcomes were possible because the failures produced useful information. A well-designed trial (or program) makes failure interpretable and potentially salvageable. Out-licensing a failed clinical asset can make the difference between (minor) financial success and failure.

While luck played a significant role in the two examples above, chance – in the words of Louis Pasteur – favors the prepared mind.

Building a pharmacological audit trail into your trial greatly increases the chances of failing gracefully


Designing a trial in terms of the pharmacological audit trail – linking PK to PD to efficacy (and toxicity) – allows you to treat a trial failure like a plumbing problem: you trace the blockage link by link. Say you’re developing an oral anticancer treatment that failed in a Phase I/II all-comers trial because there were no responses at the maximum tolerated dose (MTD). Being able to answer the following questions will determine whether a follow-on trial is worth pursuing. Did the drug fail because:

  • the bioavailability was too low or too variable?
  • insufficient PD (target occupancy, say) was achieved at the MTD?
  • target occupancy was sufficient, but none of the tumors responded?

The answers to the questions above can help a team decide whether to reformulate the drug and try again, attempt to stratify the patient population, or terminate the program. At a minimum, this approach lets you understand whether the program failed because of the drug molecule or because of the choice of target, a critically important question for a first-in-class molecule (and its follow-on program). To use a software analogy, you can’t debug a (computer) program if you don’t know why it’s failing – having a program fail silently is a sign of bad code design. (To learn more about how to leverage PK/PD approaches at every step along the way, take a look at this recent LinkedIn article of ours.) Building in the pharmacological audit trail allows teams to pinpoint the root cause of failure, informing better decisions in subsequent development. Moreover, a clear, data-driven explanation of failure can increase the asset’s value for potential out-licensing or resale.
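The audit-trail logic above can be caricatured as a simple decision tree. This is a deliberately crude sketch – in a real program, the questions, thresholds, and recommended actions would be quantitative and context-dependent:

```python
# A toy decision aid mapping the audit-trail questions to a next step.
# The labels and logic are hypothetical, for illustration only.

def triage_failure(adequate_exposure: bool,
                   target_engaged: bool,
                   tumors_responded: bool) -> str:
    """Walk the PK -> PD -> efficacy chain and report the first broken link."""
    if not adequate_exposure:
        return "reformulate: drug exposure was too low or too variable"
    if not target_engaged:
        return "revisit dose, schedule, or potency: PD was insufficient at the MTD"
    if not tumors_responded:
        return "question the target: engagement was adequate, biology did not respond"
    return "re-examine endpoints and trial design"
```

The point is not the code itself but the structure: each question isolates one link in the PK–PD–efficacy chain, so a “no” pinpoints where the chain broke.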


Ten Simple Rules to Succeed With First-In-Class Drugs

  1. Begin with the end in mind. Define success criteria early and build your development plan around them.
  2. Focus on what is unknown. Identify and systematically bridge epistemic gaps for the disease, target, and mechanism.
  3. Focus the risk onto one major gamble. Not two. Avoid compounding uncertainties by combining novel modalities with novel mechanisms.
  4. Invest in a causal understanding of the MoA – or don't. If the MoA is clear, test and leverage it. If causality cannot be demonstrated, adopt a pragmatic framework to advance the program.
  5. Choose your endpoints wisely. Favor functional endpoints with clear links to efficacy. Avoid molecular biomarkers that lack demonstrated causal links to mechanism.
  6. Design preclinical strategies to inform clinical decisions. Ensure preclinical insights are translationally relevant. If the animal model is not predictive, use a causal biomarker. If that’s not available, build a disease progress model for the clinic.
  7. Use modern techniques to bridge epistemic gaps. Powerful techniques such as Bayesian frameworks, population PK/PD modeling, and disease progress models can be used to fill in the pharmacological audit trail.
  8. Make intentional design choices. Every decision should be hypothesis-driven and data-informed. Avoid defaults; consciously choose methods, models, and endpoints.
  9. Set up "kill shots" at each stage. Define clear criteria for go/no-go decisions to avoid sunk-cost fallacies.
  10. If you must fail, fail gracefully. Design trials to yield actionable data even in failure scenarios.



Into the Dragon’s Lair: Improving Your Chances With First-In-Class Molecules

First-in-class programs can be intimidating, and for good reason. Every first-in-class program brings with it an irreducible nub of risk: “will modulating the drug target even yield a meaningful therapeutic effect?”

This risk is the dragon in the cave – there’s not much that can be done to eliminate it, and a team going up against it is not guaranteed a favorable outcome. An excessive focus on mechanism (especially in the light of epistemic uncertainty) or a rigidly empirical approach (“let’s just see what happens in the clinic”) can quickly lead a program to a sticky end.

On the other hand, paying careful attention to the epistemic gaps and building a pharmacological audit trail to understand what is – and isn’t – known about the drug’s activity in patients helps tilt the odds in favor of success. Thinking through the discovery and development strategy systematically and designing it around model-based principles gives your program the best possible chance of success. Of course, in this business, failure is sometimes a fact of life, but even there, model-based approaches help you move forward by helping you fail gracefully.

At Fractal Therapeutics, we are committed to helping our partners work through the challenges of first-in-class discovery and development, taking the approach of scientists with a measured and rational set of hypothesis tests, rather than that of adventurers who risk it all on an all-or-nothing gamble.

There’s a lot more that can be said about this topic when it comes to the specifics of program strategy, and we’re more than happy to have that discussion with you, so feel free to reach out if you’d like to learn more!


-Arijit Chakravarty and Madison Stoddard
