The Trust Layer

Think about the information revolution of the internet age in three waves:

Wave 1 was about distribution - getting information from A to B became incredibly fast and cheap. High-end home bandwidth has grown roughly 50% a year since the early 1980s (Nielsen's law), and the cost to move data has fallen by more than 90% over the past decade. A single researcher today can share more data in the time it takes to drink a cup of coffee than entire universities could exchange in a month in the 1980s, at a fraction of the cost.

Wave 2 was about creation - we cracked the code on monetizing attention through ads, which led to an explosion of publicly available content. When you combine that content explosion with ever-cheaper computing power, you get AI systems that can create information at unprecedented scale. But this creates two fundamental problems:

First, there's the curse of recursion: when AI models train on outputs from other AI models, they eventually collapse - like a game of telephone played at planetary scale.

Second, there's the curse of reflection: if you thought social media's misinformation was bad, welcome to the funhouse - where AI reflects and creates distorted realities at the speed of light.
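The first of these, the curse of recursion, can be demonstrated with a deliberately minimal stand-in for AI training: repeatedly fit a simple statistical "model" (here, just a normal distribution) to samples drawn from the previous generation's model. The function and parameter names below are my own illustrative choices, not from any library, and the sketch is a toy, not a claim about any particular AI system.

```python
import random
import statistics

def resample_generations(n_gens=20, n_samples=200, seed=0):
    """Toy model collapse: each 'generation' fits a normal distribution to
    samples drawn from the previous generation's fitted model. Sampling
    noise compounds generation over generation, and the fitted spread
    tends to shrink toward zero - the distribution 'collapses'."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real" data distribution
    history = [(mu, sigma)]
    for _ in range(n_gens):
        # Train the next model only on the previous model's outputs
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu, sigma = statistics.fmean(data), statistics.stdev(data)
        history.append((mu, sigma))
    return history
```

Run with small sample sizes over many generations and the fitted standard deviation drifts toward zero: diversity that is not re-injected from real data is gradually lost.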

Which brings us to Wave 3 - curation.

We're entering a world where information moves instantly and can be generated endlessly, but paradoxically, this abundance makes truth harder to find, not easier. The ability to trace information to its source, verify its authenticity, and understand its context isn't just a technical challenge - it's the foundation for scientific progress, AI development, and public trust in the decades ahead.

In the same way that financial transactions evolved from “did the money transfer?” to complete audit trails of every penny's movement, our relationship with scientific information needs to evolve from “here's some data” to a full account of where it came from, how it was handled, and why we can trust it. Without this evolution, we risk building AI models on shifting sands and making scientific discoveries that can't be reproduced. The future belongs not to those who can generate the most information, but to those who can verify and validate it.
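To make the audit-trail analogy concrete, here is a minimal sketch of an append-only provenance log, in which each entry commits to the previous one via a hash - the same basic idea behind financial ledgers and git history. All class and field names are my own illustrative choices, not a description of any product mentioned in this article.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    # Deterministic hash: serialize with sorted keys so equal records hash equally
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLog:
    """Append-only provenance log: each entry commits to the previous entry's
    hash, so any retroactive edit invalidates every later entry."""

    GENESIS = "0" * 64  # placeholder 'previous hash' for the first entry

    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str, data_digest: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"actor": actor, "action": action,
                "data_digest": data_digest, "prev": prev}
        entry = dict(body, hash=record_hash(body))
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash and check the chain of 'prev' pointers
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev"] != prev or entry["hash"] != record_hash(body):
                return False
            prev = entry["hash"]
        return True
```

Change any historical entry - who touched the data, what they did - and `verify()` fails, because every later hash was computed over the original record.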

This is where things get really interesting, and frankly, why we do what we do. We're building what I like to call the "trust layer" for scientific data. It’s in our genes: in 1851, Emanuel Merck did something revolutionary - he wrote the first quality guarantee for a bottle of chemical reagents. Sounds simple, right? But it was transformative for science because it meant scientists could trust the building blocks of their research.

EMD Digital does this for data. Through our businesses - Syntropy, Athinia, and M-Trust - we're creating the infrastructure that lets scientists, clinicians, and manufacturers trace their data back to its source and verify its authenticity. Because once you can trust the data, you can finally unlock true collaboration. Scientists in Tokyo can work with clinical data from New York. Cancer researchers can combine insights across institutions. Material suppliers can work with chip manufacturers to solve complex yield problems. Each organization keeps control of their sensitive data, but can now safely collaborate in ways that were impossible before. Most importantly, patients can remain in full control of their data - their consent isn't just a form they sign once, it's a dynamic permission that follows their data wherever it goes. We're not just building trust - we're building bridges, with individual privacy and autonomy at the core.
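The idea of consent as a dynamic permission rather than a one-time form can be sketched in a few lines. This is not how Syntropy or any product here actually implements it - the names and data model below are my own assumptions - but it shows the key design choice: consent is evaluated at every access, so a later revocation takes effect immediately.

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    """Dynamic consent that travels with the data: permissions are checked
    at use time, and the patient can revoke or narrow them at any time."""
    allowed_purposes: set = field(default_factory=set)
    revoked: bool = False

    def permits(self, purpose: str) -> bool:
        return not self.revoked and purpose in self.allowed_purposes

@dataclass
class DataRecord:
    payload: str
    consent: Consent  # the permission object follows the data wherever it goes

def access(record: DataRecord, purpose: str) -> str:
    # Checked on every use, not once at collection time
    if not record.consent.permits(purpose):
        raise PermissionError(f"consent does not cover purpose: {purpose}")
    return record.payload
```

Because the check happens per use and per purpose, a patient who consented to cancer research but not marketing keeps that distinction enforced for the data's whole lifetime.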

This matters because the next decade of scientific progress isn't just about generating more data - it's about knowing which data you can trust. Whether you're developing new cancer treatments, improving semiconductor yields, or securing supply chains, you need to know your data is real, unaltered, and traceable.

The companies that win in the next decade won't just be the ones with the most data - they'll be the ones with the most trustworthy data. And that's exactly what we're enabling.

What’s at stake?

At the most personal level - imagine you're a patient getting CAR-T cell therapy for cancer. Your cells are extracted, genetically modified, and cultivated into a personalized treatment that only works for you. Mix-ups aren't just costly - they're potentially lethal. You need absolute certainty that the cells being put back into your body are yours, modified exactly as prescribed. This isn't about tracking a package - it's about protecting a human life through an incredibly complex, multi-step process.

Now zoom all the way out to the societal level. We're living through a crisis of trust in institutions. People aren't just questioning scientific findings - they're questioning the very process of how we arrive at scientific truth. And you know what? They're not entirely wrong to be skeptical. The reproducibility crisis is real. The pressure to publish is real. The challenge of verifying other researchers' work is real.

What these extremes share is the need for radical transparency in the scientific process. Whether it's tracking a single patient's cells or documenting every step of a breakthrough discovery, we need to be able to show - not just tell, but show - exactly how we got from A to B.

What we're building is the infrastructure that makes this transparency possible and practical. For personalized medicine, it means every sample, every modification, every transport is tracked and verified - no room for error. For public trust in science, it means anyone can trace a scientific claim back to its original data, see how that data was collected, how it was analyzed, what transformations were applied.

This isn't just about preventing mistakes or catching bad actors. It's about rebuilding trust through radical transparency. Because in both cases - whether it's a patient trusting a revolutionary new treatment or the public trusting scientific institutions - trust has to be earned through verification.

Building trust sounds simple, but the reality is staggeringly complex. We're talking about data scattered across thousands of institutions, locked in countless different systems, handled by teams spread across the globe. The entropy is massive, and it grows exponentially every day.

You can't just throw machines at this problem. Yes, AI can help, but human experts are the ultimate context engines. They understand the nuances, the edge cases, the "why" behind the data. And right now, most of our AI models are learning more from Reddit discussions about science than from actual scientific literature. Why? Because the real gold - the peer-reviewed research, the clinical data, the experimental results - is locked away behind paywalls or sitting on institutional hard drives.

This fragmentation isn't just inconvenient - it's existential. When you're training AI models, especially for scientific and clinical applications, you're facing a perfect storm of challenges. First, the best data is locked away - pristine clinical trial data, detailed patient outcomes, critical material data, precise experimental results - all siloed in institutional databases.

It gets worse. Even if you could access all that siloed data, it's fragmented across thousands of different systems, formats, and standards. Patient data in one hospital means something completely different in another. Lab results from one research center can't be easily compared to another. It's like trying to build a universal translator when everyone's speaking their own made-up language.

And here's the real kicker - in scientific and clinical AI, garbage in doesn't just mean garbage out. It means potentially dangerous, systematically biased, or just plain wrong outputs that could mislead researchers or harm patients. We're not talking about a chatbot giving a wrong movie recommendation - we're talking about models that could influence medical decisions or scientific directions. Even a small amount of bad training data can create devastating ripple effects, encoding fundamental misconceptions that propagate through the entire system.

This is why solving the fragmentation problem isn't just about convenience or efficiency - it's about whether we can trust the AI systems we're building to advance science and medicine. The stakes couldn't be higher.

But - and this is what gets me excited - if we can get the physics right... if we can establish that unbreakable chain of data lineage, provenance, and context... if we can create secure, trusted environments for collaboration... if we can think boldly about generating new, high-quality datasets at scale...

Then we're not just talking about better record-keeping. We're talking about building the foundation for AI models that could genuinely transform our understanding of life itself. Models that could help us move from traditional computing architectures to neuromorphic systems that process information more like the human brain.

This isn't just about making science a little faster or a little more reliable. This is about laying the groundwork for the next great leap in human knowledge. And it starts with something as fundamental as a layer of trust.

And here's the thing - while these challenges are immense, they're not insurmountable. In fact, we're actively working on solving them. Through our three businesses, we're attacking this problem from every angle:

Syntropy is already helping major cancer centers and research institutions break down data silos while maintaining absolute control over their sensitive data. We're not just talking about better sharing - we're enabling true collaboration while preserving privacy and consent. That's not science fiction; it's happening now.

Athinia is doing the same thing in semiconductor manufacturing, where a single yield problem can cost millions. We're creating secure spaces where chip manufacturers and their suppliers can finally collaborate on the most sensitive process data, tracing issues back to their source without exposing trade secrets. When you can track how a subtle change in material composition affects chip yield three steps down the production line - that's the power of trusted data in action.

And M-Trust? That's where it gets really exciting. We've developed the technology to create an unbreakable chain of custody between physical objects and their digital representations. Think of it as a bridge between the atomic and the digital world. When you can prove, beyond any doubt, that a digital record perfectly represents its physical counterpart - whether that's a patient sample, a semiconductor wafer, or a crucial component - you've solved one of the fundamental challenges of the digital age.
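M-Trust's actual technology isn't described here, so the following is only the general shape of the idea, under assumptions of my own: a measured physical fingerprint (a unique marker read off the object) is bound to its digital record with a keyed MAC, so the record can neither be altered nor re-attached to a different object without detection. All names, and the choice of HMAC, are illustrative.

```python
import hashlib
import hmac
import json

def bind_record(secret_key: bytes, object_id: str,
                fingerprint: str, metadata: dict) -> dict:
    """Bind a physical object's measured fingerprint to its digital record.
    The HMAC tag covers the whole payload, so neither the metadata nor the
    fingerprint can be changed without the key."""
    payload = json.dumps({"object_id": object_id,
                          "fingerprint": fingerprint,
                          "metadata": metadata}, sort_keys=True)
    tag = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_record(secret_key: bytes, record: dict,
                  measured_fingerprint: str) -> bool:
    # 1. Has the digital record been altered?
    expected = hmac.new(secret_key, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["tag"]):
        return False
    # 2. Does the record belong to the object in front of us?
    return json.loads(record["payload"])["fingerprint"] == measured_fingerprint
```

Verification fails both when the record is edited and when it is presented alongside the wrong physical object - the two failure modes a chain of custody has to rule out.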

The beautiful thing is how these solutions complement each other. The same principles that help track a patient sample through a complex CAR-T treatment process can help verify the provenance of semiconductor materials. The technologies that enable secure collaboration between cancer centers can help chip manufacturers work more closely with their suppliers. We're not just building point solutions - we're creating a new paradigm for trusted scientific collaboration.

This is how you rebuild trust in science - not through proclamations, but through verification. Not through centralization, but through secure, controlled collaboration. Not by ignoring the complexity of the real world, but by embracing it and building systems that can handle it.

Yes, the challenges are complex. Yes, the stakes are high. But we're already proving that they can be solved. Every day, we're helping scientists, clinicians, and manufacturers work together in ways that were impossible just a few years ago. We're not just dreaming about a future where scientific data can be trusted and traced - we're building it.

And that's what gets me out of bed every morning. Because when you can trust the data, you can trust the science. And when you can trust the science, anything is possible.

Comments

Ivan L. Vecerina

SVP Quality/Regulatory/Clinical/Operations, PRRC, MD, Dr. med.; Medical Software and Robotics.

3 weeks ago

Outside of the scientific community, however, trust has little to do with information sourcing - it is all about society, and the will to educate people. Think of Covid, global warming, war, or vaccines: no effort is made to educate about facts, risk/benefit, or return on investment. Rightfully or not, our leaders and media assume that these aspects are too difficult for laypeople to comprehend. Instead, communication efforts are emotional, and target basic instincts and fears (of death, of catastrophes, of others...). Because this is what works effectively. Meanwhile, talk about occasional failures or side effects is suppressed, instead of educating about statistics and uncertainty. We seem to have given up on educating citizens about complex realities. I wish the younger generations were taught more about the humanities, inevitable tradeoffs, and the Milgram experiment; these were given much more importance, post-war, in the last century, when we felt we had to learn from recent mistakes. In our modern times, when the skepticism and critical thinking you mention are more valuable than ever, questioning is too often suppressed, rather than being encouraged - and when appropriate, confronted.

Clark Daggett

Independent school educator

3 weeks ago

Hi James, this is very impressive, thoughtful, optimistic, and compelling. Coming from you that’s not surprising, but great to read and to know. I wish you unmitigated success! Also, I hope you are enjoying every minute!

André T. Nemat

MD PhD | Healthcare | Digital & Ethics |

4 weeks ago

Thank you James Kugler, very helpful and valuable insights. Maybe you should also take a look at the RAM (Reference Architecture Model) of the International Data Spaces Association (IDSA): the basis for data spaces as a standard for sovereign data exchange. #ehds

Josef Zihlmann

#LabMarket #LifeScienceTools #LabInstruments #LabConsumables #LabServices #DrugDevelopment #bulk #MDx: Quarterly data, accurate modeling, funding, customers, persona, segmentation, trends, products, opportunities.

4 weeks ago

Beautiful work. It reminded me of the old conflict between believing (as in a god) and proving. For those (the majority?) who already feel overwhelmed by everyday complexity, a soothing, simple concept - a god or a supreme leader to believe in and follow - becomes even more compelling. Blockchain and other great technology does not restore their trust, as that would demand too much effort and inconvenience from them, unfortunately. So a small, true elite becomes even more elitist, and a fact-skeptical, comfort-hungry mass becomes even more distrustful, and even hostile.
