Blending Biology and AI: Dr. Markus Gershater on the Future of Life Sciences

I have not published here in a while due to a busy travel schedule: I visited Life Sciences Baltics 2023 in the beautiful city of Vilnius, Lithuania, as a journalist reporting for BiopharmaTrend, and then participated in the annual life sciences industry event in Brussels, Belgium, delivering a keynote speech, 'The Age of AI: Shaping the DeepTech Future of Life Sciences'. Below is also my recent interview covering the advent of artificial intelligence in biotech: its opportunities and challenges.

I am still processing a wealth of information from both events (literally more than a hundred deeptech companies in my notebook) and will be publishing some findings shortly here, and in more detail in my Substack newsletter.

In the meantime, here is my interview with Dr. Markus Gershater, Co-founder and Chief Science Officer of Synthace, a UK-based company whose no-code platform lets scientists design and execute experiments, producing and analyzing structured experimental data.

AI has the potential to revolutionize our approach to biological systems, but realizing that potential demands a change in our scientific methods and thinking. As past technological shifts like electrification have taught us, simply adopting new technology isn't the end game: the true value emerges when technology is paired with new approaches and perspectives. In this interview, Dr. Gershater discusses a future where AI becomes an integral part of biology, not just an adjunct.

Andrii: Dr. Gershater, you have a foot in both biochemistry and synthetic biology while navigating a fast-paced tech world. In your view, what's the most exciting promise that AI holds for biotech?

Markus: The promise is that, quite simply, AI will give us insights into biology that are currently impossible and that we can't yet begin to imagine. Also exciting, but secondary to this, is how it will prompt changes in the way we work. I say this because my underlying belief is that, right now, AI and biological research don't yet fit together properly.

AI is a technology that fundamentally demands change from the people who want to use it, so for AI to have a fundamental impact on biology, we really have to change the way we approach the process of science in the first place. It seems to me that organizations and teams will have to adopt new mindsets, new processes, and new tooling.

There are companies today that already exhibit many of the characteristics this future requires, in how they think about the way we gather data about biological systems. Think of companies like Recursion and Insitro, which have built whole automated platforms around this. Fully digitized, these platforms are built to systematically create a greater understanding of biological systems.

They give us a glimpse of what the future may look like: the routine generation of high-quality, large, varied, multidimensional data, in the full context of rich metadata. This is data that provides the foundation for AI, and a step change in our ability to understand and work with biological systems.

Andrii: Of course, every silver lining has a cloud. What do you see as the biggest challenges in bringing AI into the world of bioengineering? How can the industry, Synthace included, best tackle these hurdles?

Markus: We recently ran some research that found a staggering 43% of R&D decision-makers have low confidence in the quality of their experiment data. This is concerning because it doesn't just demand that we improve our means of recording experiment data; it also demands that we perform experiments that generate higher-quality data in the first place. It follows that to understand this data correctly we also need a high level of granularity about how it was created: metadata about experimentation should be collected automatically wherever possible.
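To make the idea of automatic metadata capture concrete, here is a minimal sketch in Python. The schema and the record_run helper are hypothetical, invented purely for illustration; this is not Synthace's data model or API.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ExperimentMetadata:
    """Hypothetical, illustrative record of how a data point was produced."""
    experiment_id: str
    protocol_version: str
    instrument: str
    reagent_lots: dict          # reagent name -> lot number
    factor_levels: dict         # factor name -> level used in this run
    started_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_run(measurement: float, meta: ExperimentMetadata) -> str:
    """Bundle a raw readout with its full experimental context."""
    return json.dumps({"value": measurement, "metadata": asdict(meta)})
```

The point is simply that the measurement never travels without its context, so any downstream analysis, AI-driven or otherwise, sees both.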

In the context of AI, this is a problem. The scope of possible uses for AI in biotech is massive: it can be applied in myriad ways across every aspect of the value chain. Saying “we need to use AI” is like saying “we need to use electricity”: obvious and useless unless you talk specifics. Much more meaningful is “we need to apply large language models to improve the user interfaces for our complex equipment and methodologies,” or “we should use active learning to optimize the development of assays for early discovery.”
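As a toy illustration of that second example (active learning for assay development), here is a minimal uncertainty-driven loop built on scikit-learn's GaussianProcessRegressor. The run_assay function and the one-dimensional condition space are stand-ins invented for this sketch, not anything described in the interview.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_assay(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for a wet-lab assay readout (hypothetical)."""
    return np.sin(3 * x).ravel() + 0.05 * np.random.randn(x.shape[0])

# Candidate assay conditions we could test (a 1-D toy design space).
candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

# Start with a handful of randomly chosen conditions.
rng = np.random.default_rng(0)
idx = rng.choice(len(candidates), size=5, replace=False)
X, y = candidates[idx], run_assay(candidates[idx])

gp = GaussianProcessRegressor()
for _ in range(10):                        # each round = one batch of lab work
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[[np.argmax(std)]]  # test the most uncertain condition
    X = np.vstack([X, x_next])
    y = np.append(y, run_assay(x_next))
```

Each round fits a surrogate model to the results so far and picks the condition the model is least sure about; that feedback loop between model and experiment is the basic shape of an active-learning assay-optimization campaign.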

“We need to use AI” is in danger of becoming an empty call to arms, with no acknowledgment of all the change that will be needed to make the touted revolution come about. In the second industrial revolution, electricity was insufficient by itself to increase productivity. People first needed to realize that it offered a way of changing how they worked. Factories no longer had to be arranged around massive drive shafts powered by steam engines; instead, they could be arranged into production lines. It was the combination of new technology (electrification) and new ways of working (production lines and division of labor) that enabled the step change in productivity.

For Synthace, our focus is firmly on the experiment itself. How can we gather, generate, and structure high-quality data for export into systems that can make far more of it than the frankly limited and limiting data available today? To continue the analogy above: how can we adapt the factory floor to make the best use of electricity?

Andrii: Speaking of challenges, there's no denying that the complexity of biological systems makes for a dizzying amount of data. What's your take on the best approach to handle this data overload, and where does AI come into the picture?

Markus: Biology's complexity emerges from the interactions of its simpler components, giving rise to unique properties and behaviors. These emergent features can't be reliably predicted from individual components, necessitating a comprehensive and interconnected dataset for a deeper understanding of biological systems.

Much of the big data produced in biology comes from multi-omic studies: highly detailed molecular snapshots of a system. But apart from genomic data, all of these readouts are highly dynamic: they change over time and in response to a multitude of stimuli. To truly understand a biological system, we must understand its dynamics as any number of factors change. We can't just measure a lot of things; we have to measure them in the context of this multifactorial landscape, systematically running experiments that map the space and allow AI to “see” what is going on.
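A minimal sketch of what systematically “mapping the space” can mean in practice: enumerating a full factorial design over a few factors. The factors and levels below are invented for illustration; a real campaign would use dedicated DOE tooling (e.g., fractional factorial or optimal designs) rather than brute-force enumeration.

```python
from itertools import product

# Hypothetical factors and levels, invented for illustration only.
factors = {
    "temperature_C": [25, 30, 37],
    "pH": [6.5, 7.0, 7.5],
    "inducer_uM": [0, 10, 100],
}

# Full factorial design: every combination of levels (3 x 3 x 3 = 27 runs).
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for run_id, conditions in enumerate(design):
    print(f"run {run_id:02d}: {conditions}")
```

Even this toy example makes the combinatorics plain: three factors at three levels already means 27 runs, which is why multifactorial experimentation leans so heavily on automation.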

Just sequencing something isn't enough; we must also look at how it works, interacts, and reacts to different stimuli. One-dimensional data alone won't take us far in comprehending the intricacies of biological processes. Ideally, we'd have large, varied, dynamic, high-quality data enriched with as much experimental context as possible, so that future, as-yet-unimagined AI-driven analyses can make full use of today's data.

Andrii: Finally, the idea that AI might change our whole understanding of the universe is a bit of a head-spinner. Can you delve a bit deeper into that concept? How might AI transform the way we interact with everything from biological systems to the wider world around us?

Markus: The buzz around AI/ML is remarkably strong and, without a doubt, it will be transformational in bringing new insight to biology. But as I've said, we have yet to see the full realization of its potential. The work of biology, and the data and metadata it produces, is difficult to represent in code and difficult to digitize. If we can't do this, AI/ML remains a pipe dream, the preserve of “big tech.” The volume, and just as much the quality, of the data we can provide to these artificial intelligence and machine learning tools determines the likelihood of uncovering anything interesting.

Is there a way to enable and control the entire experiment lifecycle from end to end? Is there a way to enable multifactorial experimentation, sophisticated automation, and AI/ML with a single unifying standard? Is there a way to elevate the scientist so they can spend more time on what matters most, applying more of their individual talents to today’s most difficult problems with the full power of modern computing?

If we can adapt in the right ways to the possibilities created by these tools, we may begin to map entire biological landscapes overnight, using the resulting data and metadata to predict future outcomes. There will likely come a time this decade when AI can predict the best possible experiment design before we even step into the lab. Should this come to pass, the upshot will be scientific breakthroughs that defy belief by today's standards.

---

Welcome to my newsletter, "Where Technology Meets Biology." I am sharing noteworthy news, trends, biotech startup picks, industry analyses, and interviews with pharma KOLs. Contact me for consulting or sponsorship opportunities here or at www.BiopharmaTrend.com. Shop world-class chemistry for drug discovery at www.enaminestore.com.

Enjoying the newsletter? Subscribe to become part of 10K+ readers here on LinkedIn. Please help us spread the word by sharing it with your colleagues and friends.

Also, consider joining my Substack community, where we are exploring a lot more (4K+ industry professionals are reading it via email).

-- Andrii


Joseph Pareti

AI Consultant @ Joseph Pareti's AI Consulting Services | AI in CAE, HPC, Health Science

Experiment automation for drug development is being addressed at Argonne National Laboratory: https://docs.google.com/presentation/d/1_egdot2O67M8LuYw3s83YMpUwfKXEj6FIU7m3iw1p-A/edit?usp=sharing. I am not trying to minimize the issues raised by Dr. Markus Gershater, as #ai for #biology is a reductionist approach. I was just looking into open-source databases, including Enamine Ltd. and those in this Elsevier report: https://docs.google.com/presentation/d/1nymDudnngvyA9P69Y01I-iYRhih3FNrRg_PJ5DLipxk/edit#slide=id.p. We have a few billion compounds vs. the ~10^60 molecules that can be built out of 30 atoms.
