The Translational Divide and Digital Biomarker Validation Part 1: Why Preclinical and Clinical Research Must Align
Szczepan B.
Pioneering AI & Digital Measures Synergies for Human and Veterinary Healthcare | Shaping Regulatory Frameworks for Next-Generation Technologies
Introduction to the Problem & the Validation Framework
Translational research often feels like a tale of two worlds. On one side, preclinical scientists nurture animal models in the lab; on the other, clinical researchers grapple with human data. Each side speaks its own dialect of “science,” and far too often they don’t compare notes as much as they should. Enter two key players striving to bridge this gap: the 3Rs Collaborative’s (3RsC) Translational Digital Biomarkers (TDB) initiative and the Digital Medicine Society (DiMe). If you’re a clinical researcher, you probably know DiMe’s work like the back of your hand, but maybe you’ve never heard of the 3RsC TDB initiative. And if you’re a preclinical researcher, DiMe might sound like something out of a sci-fi novel (spoiler: it’s not, but their tech timeline is impressive). Don’t worry – by the end of this post, we’ll have everyone on the same page (or at least in the same chapter).
In this blog, we’ll dive into a newly published validation framework for in vivo digital measures developed by the 3Rs Collaborative TDB initiative, which cleverly builds on DiMe’s established “V3” framework – that’s Verification, Analytical Validation, and Clinical Validation, for the uninitiated. We’ll explore why it’s a big deal that preclinical and clinical folks are (finally) learning from each other’s playbooks, and why forward and reverse translation aren’t just catchy buzzwords but crucial strategies in digital biomarker validation (Application of Machine Learning in Translational Medicine: Current Status and Future Opportunities - PMC). Along the way, we’ll highlight how cross-disciplinary collaboration can turn translational chasms into mere potholes, and where else in science we might apply this two-way learning approach (hint: AI and digital measures have a lot of unexplored territory). Don’t worry if some of these terms sound complex – we’ll break down the science as we go. And yes, we promise to keep it engaging – maybe even sprinkle in a dash of humor – because who said rigorous science can’t be a tad enjoyable?
Grab your virtual lab coat and let’s bridge some gaps!
Meet the 3Rs Collaborative TDB Initiative and DiMe
First, a quick introduction to our two protagonists:

- The 3Rs Collaborative (3RsC) and its Translational Digital Biomarkers (TDB) initiative: a preclinical effort rooted in the 3Rs principles (Replacement, Reduction, Refinement), advancing digital measures such as home-cage monitoring in laboratory animal research.
- The Digital Medicine Society (DiMe): a professional society for digital medicine, best known in clinical circles for its work on digital endpoints and the V3 validation framework for digital health technologies.
Chances are, clinical researchers have been following DiMe’s work on digital endpoints and regulatory discussions closely, whereas preclinical researchers may be scratching their heads about what V3 is. Conversely, those in preclinical circles might know all about the 3Rs principles and digital home-cage monitoring for lab animals (Emerging Role of Translational Digital Biomarkers Within Home Cage Monitoring Technologies in Preclinical Drug Discovery and Development - PubMed), while many clinicians just learned that “3Rs” isn’t referring to the elementary school basics of reading, ’riting, and ’rithmetic. Clearly, there’s some educational cross-over needed – and that’s exactly the point of this post.
The V3 Framework Refresher: Verification, Analytical Validation, Clinical Validation
Before we jump into the new framework for in vivo (animal) digital measures, let’s do a quick refresher on DiMe’s V3 framework, since it lays the foundation. The V3 framework was introduced to bring order to the Wild West of digital health tools by breaking validation down into three components:

- Verification – confirming that the sensor technology itself captures and stores sample-level data accurately and reliably.
- Analytical validation – demonstrating that the algorithms processing that sensor data measure what they claim to measure, against an appropriate reference.
- Clinical validation – showing that the resulting measure meaningfully reflects a clinical, biological, or functional state in the population of interest.
DiMe’s V3 framework was heavily inspired by existing FDA guidance for validating bioanalytical methods. In other words, it borrows the concept that just as you validate a blood test for accuracy and relevance, you should do the same for your Fitbit or AI algorithm before using it in a clinical trial. The V3 framework has been a hit in the digital medicine community because it gives structure to what can otherwise be fuzzy – it tells everyone (developers, researchers, regulators) what evidence to collect to trust a digital measure.
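To make the “evidence to collect” idea concrete, here is a minimal, hypothetical sketch of one analytical-validation step: comparing a device-derived measure (synthetic step counts) against a reference method and summarizing their agreement. The function name, the data, and the choice of statistics are illustrative assumptions on my part, not prescriptions from the V3 framework itself.

```python
import numpy as np

def analytical_validation_summary(device, reference):
    """Summarize agreement between a device-derived measure and a
    reference method: bias, mean absolute error, and Pearson r."""
    device = np.asarray(device, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = device - reference
    return {
        "bias": diff.mean(),                  # systematic over/under-reading
        "mae": np.abs(diff).mean(),           # average absolute error
        "pearson_r": np.corrcoef(device, reference)[0, 1],  # linear agreement
    }

# Synthetic example: daily step counts from a wearable vs. a validated
# reference counter for ten participants (all numbers invented).
reference = np.array([4200, 5100, 6300, 3900, 7200, 5600, 4800, 6900, 5300, 6100])
device = reference + np.array([120, -80, 200, -150, 90, 60, -40, 180, -110, 70])

print(analytical_validation_summary(device, reference))
```

A real analytical validation would go further (pre-registered acceptance criteria, Bland-Altman analysis, diverse participants), but even this toy summary shows the kind of quantitative evidence the framework asks for.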
Now, keep these three pillars in mind – verification, analytical validation, clinical validation – as we turn to the preclinical world. Spoiler: the new framework from 3RsC’s TDB initiative takes these and gives them a translational twist.
From Clinic to Cage: A New Validation Framework for In Vivo Digital Measures
The 3Rs Collaborative TDB team, in partnership with a precompetitive group called the Digital In Vivo Alliance (DIVA; a collection of organizations and scientists with a shared interest in leveraging the power of digital measures to maximize the therapeutic value or impact of in vivo research and advance our understanding of health and disease), recently adapted DiMe’s V3 framework for use in animal studies. Our “in vivo V3” framework just made its debut in a 2025 publication (Validation framework for in vivo digital measures - PubMed), and it aims to ensure that digital measures collected from animal models are just as rigorously validated as those from human trials. Let’s unpack what that means and why it’s exciting (yes, even if you’re not usually thrilled by validation studies – I promise this matters).
Building on a Solid Foundation: Adopting a clinical framework for preclinical use is useful for several reasons. First, it creates a common language. By using the same general concepts (verification, analytical validation, “clinical” validation), it immediately gives preclinical researchers a blueprint that clinical folks and regulators understand (Validation framework for in vivo digital measures - PubMed). It’s like using the metric system in a country full of rulers and yardsticks – suddenly everyone knows what a “meter” is. Here, everyone knows what kind of evidence counts as good validation.
Tailoring to Animal Context: However, an animal isn’t a human, and running a digital sensor in a mouse’s home-cage isn’t identical to giving a Fitbit to a human volunteer. The 3RsC framework tweaks the V3 approach to account for these differences. For example, verifying digital measures in a rodent cage requires considering factors like temperature fluctuations, bedding interference with camera-based tracking, or potential signal disruption in electromagnetic field (EMF) monitoring systems – challenges unique to preclinical environments where wearables aren’t an option. The in vivo V3 framework emphasizes things like extra-rugged verification for variable lab environments and ensuring algorithms still perform when a mouse decides to play hide-and-seek in its bedding (Validation framework for in vivo digital measures - PubMed). Similarly, the “clinical validation” piece in animals might be better termed biological validation – showing that a digital measure reflects a relevant biological state in the animal (like disease progression or response to a treatment). After all, lab mice don’t report symptoms the way patients do, so we rely on biological signals like behavior or physiology.
Concretely, the new framework still has the three familiar stages:

- Verification – confirming that the monitoring hardware and software capture accurate, reliable raw data under real-world housing conditions.
- Analytical validation – demonstrating that the algorithms turn that raw data into accurate measures of the intended animal behavior or physiology.
- Biological validation – the preclinical counterpart to clinical validation: showing that the measure reflects a meaningful biological state, such as disease progression or treatment response.
One key distinction we note as authors is scope: the in vivo framework zeroes in on translational relevance rather than immediate clinical use (Validation framework for in vivo digital measures - PubMed). The ultimate goal here isn’t to get a digital measure FDA-approved for doctors next year, but to strengthen the bridge between animal data and human outcomes. By proving a digital biomarker in rodents is solid and meaningful, we improve confidence that when that biomarker is observed in animals it has a better chance of translating to human biology (Validation framework for in vivo digital measures - PubMed). In short, it tightens the connection between the lab bench and the bedside.
The creation of this framework itself is a testament to cross-disciplinary teamwork. The initiative roped in experts from academia, pharma companies, biotech, and technology providers (Validation framework for in vivo digital measures - PubMed). Picture a virtual table where a neuroscientist, an engineer, a pharmacologist, and a regulator all sit down and hash out what “good validation” looks like for a mouse activity tracker. Not your everyday coffee chat, but the outcome is a robust guideline that everyone can use going forward. And for those wondering, yes, this framework is freely available (open-access publication) and is already sparking conversations about how to implement it in ongoing research.
Forward and Reverse Translation: Why Two-Way Learning Matters
Now, let’s talk about a phrase that’s central to our story: forward and reverse translation. In the simplest terms, forward translation is taking insights from the lab (preclinical research) and applying them to the clinic, while reverse translation means taking observations from clinical research (in patients) and using them to inform lab studies. Easy enough, right? In practice, though, it’s more like a ping-pong match of knowledge – and historically, we haven’t been playing that match nearly as often or as well as we could.
Why does it matter? Because without reverse translation, we risk missing critical lessons. For example, say a clinical trial in humans finds that a particular digital biomarker (like step count from a wearable) predicts which patients develop a complication. Reverse translation would ask: can we see a similar pattern in our animal models, and if not, what are our models missing? Maybe our lab rodents aren’t moving enough to mirror human step counts, or maybe we need a different way to capture analogous behavior in the cage. Taking that human insight back to the lab could lead to refining the animal model or the measurement technique. Conversely, forward translation ensures that when preclinical scientists develop a clever new digital readout (say, a measure of sleep fragmentation in mice), the clinical side hears about it and considers measuring something similar in human studies if it could be relevant.
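As an illustrative sketch of that reverse-translation check, the snippet below asks whether a home-cage digital measure separates disease-model animals from controls, using a standardized effect size. All numbers and names here are hypothetical assumptions for illustration; a real study would use validated measures and a proper statistical analysis plan.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups (pooled SD).
    A quick screen for whether a digital measure separates groups in an
    animal model, mirroring an effect first observed in human data."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical home-cage distance traveled (meters/day): control mice vs.
# a disease model expected to show reduced activity (all numbers invented).
control = np.array([310, 295, 330, 305, 320, 315])
disease = np.array([240, 255, 230, 260, 245, 250])

print(f"Cohen's d = {cohens_d(control, disease):.2f}")
```

A large effect size in the model would echo the human observation; a small one would flag exactly the kind of gap reverse translation is meant to expose.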
The challenge is that preclinical and clinical researchers often work in silos. As one commentary noted, we have to consciously break down these silos so that “preclinical animal work and human clinical data work together to fill in scientific gaps” (Genentech: Reverse Translation). It’s a bit like the left hand and right hand of science needing to high-five more often. In the drug development world, it’s well recognized that a lot of treatments look promising in animal studies but then fail in human trials. Sometimes that’s because the animal data weren’t truly predictive of human outcomes – a forward translation issue. Other times, there were clues in human data that weren’t pursued back in the animal studies – a missed reverse translation opportunity (Translational research: Bridging the gap between preclinical and clinical research - PMC). Cyclical learning, where each informs the other iteratively, is now touted as a cornerstone of precision medicine (Application of Machine Learning in Translational Medicine: Current Status and Future Opportunities - PMC).
In digital biomarker validation, forward and reverse translation are critical. The new in vivo V3 framework explicitly strengthens forward translation by aligning how we validate animal digital measures with clinical expectations (Validation framework for in vivo digital measures - PubMed). It also encourages reverse translation by highlighting the biological relevance of those measures – essentially asking, “if this digital biomarker is important in humans, how do we make sure we see it (or its analog) in animals?” For instance, if continuous glucose monitors revolutionize diabetes care in humans with rich glucose variability data, reverse translation might inspire using continuous glucose monitoring in diabetic rat models to better mimic human glycemic patterns and to validate that technology in the animal context too.
The take-home message: bidirectional learning is not just a feel-good concept; it’s a strategy to de-risk drug development and accelerate innovation. By learning from both sides, we can design better experiments, choose better endpoints, and ultimately make research more predictive and relevant. It’s science’s equivalent of having Google Translate running between two foreign languages – less getting lost in translation.
What happens when preclinical and clinical researchers actually collaborate? A smarter, more translatable approach to digital biomarker validation. In Part 1, we explored the problem – now, let’s talk about the solution. From AI-driven pathology to digital behavioral biomarkers, cross-disciplinary collaboration is the key to unlocking forward and reverse translation. If we align validation efforts, we can accelerate drug development and improve regulatory acceptance of digital measures. How we get there will be addressed in Part 2: Cross-Disciplinary Collaboration & Future Applications in AI and Digital Measures.
For digital translation to occur across the various tools, applications, systems, and data sets captured, a “common language” must be foundational. Many in bioinformatics are leveraging ontologies and semantic knowledge graphs to transform how information moves from data to knowledge with provenance. Adopting and maturing the FAIR principles is a disciplined process that will require tremendous collaboration and open-mindedness toward data-centric approaches. Interestingly, forward and reverse translation are inherent in semantic web technologies, which deliver machine-actionable capabilities within R&D processes. AstraZeneca’s Ben Gardner elaborates on enabling scientists to leverage information for accelerated, trustworthy decision-making: https://www.youtube.com/watch?v=NQhUlWFZ1OU&t=79s Enjoy!