An interview with Rafael Rosengarten, CEO of Genialis and Founding Member of the Alliance for Artificial Intelligence in Healthcare -Part 2 of 2

“There's no area of healthcare that will not be touched by AI.” “Physicians or companies who do not have an AI strategy in place will get left behind by those who do.”  We've been hearing a lot of trite expressions lately.  Will Artificial Intelligence take on a much larger role in healthcare and diagnostics, as it has in other aspects of our lives?  Dr. Rafael Rosengarten, CEO of Genialis, a data science and drug discovery company, and a founding member of the Alliance for Artificial Intelligence in Healthcare, joins us.


…continued from Part 1


Rafael Rosengarten, PhD: The idea isn't to make doctors irrelevant - quite the contrary.  It is to return physicians to the job of treating patients and to take away a lot of the rote work.  And I think almost everyone in the industry would say that's really the goal.


Joseph Anderson, MD: For image analysis and, in particular, digital pathology, I think we kind of woke up to the fact that, “Hey wait a minute - these are just images.” When you go to Walmart, they have cameras scanning your face; they know everyone who's in the store.  Some of the first early entrants into image analysis came from the air forces of various countries, where they were very good at analyzing digital images from a high elevation, which is exactly what we do in pathology.  The overlap is fascinating.

Is AI merely a matter of “better-cheaper-faster” and more high-throughput or are we somehow going to transcend what was previously possible using traditional methods? For example, let's say a company spends millions and millions of dollars in a phase-three drug trial, which turns out to be a negative trial.  Then you say, “Wait a minute, there's got to be some patients in that trial who actually got the benefit from that drug.”  Are we going to be able to unlock things that previously we wouldn’t have been able to do?  


RR: What you're really asking about is, what are going to be the “zero to one” innovations - what's possible now that wasn't before? I think the answer is both. “Better-faster-cheaper” is important, especially in a space like clinical development, where the clinical trial failure rate is alarmingly high. It contributes mightily to the overall cost of bringing drugs to market.  And that, in principle, impacts the cost of drugs, which is problematic.  The ability to run better clinical trials because you've used machine learning to better stratify patients, or to salvage a failed trial because you can do a retrospective analysis - I think those are all really useful things.  I would call those “large incremental improvements,” however. The “zero to one” stuff that I'm interested in would be, for example, the idea of doing completely patient-less placebo arms - control arms of clinical trials where you have patient avatars instead.  There are a number of companies - big companies and also some really promising young small companies - working on that kind of problem.  That's something you just can't do with traditional pencil-and-paper statistics.


JA: With AI, the topic of ethics seems to emerge.  In the clinical sphere, in pathology and radiology, you might be faced with the conclusion drawn by AI, and then your job is to agree or disagree with it.  Is it ever ethical to disagree with the conclusions of AI?  This seems like a quixotic question.  Is it ever ethical to disagree with a colleague or another human being?  Of course, it is. But now, it seems, we may be entering a new realm.


RR: Maybe it's oversimplifying that particular scenario, but it seems to me we should think of it just like getting a second opinion.  You go to the world's leading expert in whatever disease you've been diagnosed with.  She or he may give you a certain prognosis. If you don't like what you hear, you're almost certainly going to go to the world's second-leading expert and ask for their opinion. We know that at the cutting edge of cancer research, there are widely disparate opinions - both in terms of prognosis and treatment - held by equally renowned experts.  I don't see anything ethically dubious about questioning the output of an AI.

What I do think is a riskier hornet's nest is linking the output of an AI to some direct intervention where there isn't a second opinion from either a human being or another AI system that may have been trained differently.  I'll give you an example that occurred not so long ago. There was a chatbot that someone could interface with to get their diagnosis or prognosis.  The story goes that the chatbot essentially delivered a terminal diagnosis.  Here you have a machine delivering the news to someone that they are going to die.  That seems to me to be kind of egregious.  The one thing I'm not sure AI machines can really do in the short run is empathy - which should be a major part of clinical care.  And what if it is wrong? Is it equipped to discuss the options?  I think that the automation of what we act on from AI needs to be thought through very carefully, especially at the interface with actual patients.


JA: It is certainly going to open up a lot of medical-legal ramifications as well.  “Who made this decision?” “How did you derive this judgment or this treatment?”


RR: Let me just comment on that a bit further.  I think the other thing we'll see is that AI systems in healthcare are going to be under a huge amount of scrutiny.  The analogy I'll give here is self-driving cars, which are also an AI-based technology.  I don't know what the worldwide death rate from motor vehicle accidents is, or even just the traffic accident rate, but it's high.  And it's largely driven by humans and human factors you can't control.  People are bad at driving. They wreck a lot.  People die behind the wheel, and that's terrible.

But one self-driving car goes off the rails, or there's an accident, and the entire industry gets a major black eye - even if the overall safety rates outperform human drivers - because it's a new technology, because it's something we feel we don't control.  In healthcare, we're likely to see the same thing.  A misdiagnosis or a misapplied intervention by an AI is going to be damaging to the industry.


JA: Another aspect of ethics that seems to be cropping up is a throwback to the 1950s.  Science fiction writer Isaac Asimov wrote the “Three Laws of Robotics.” If AI systems are truly able to achieve intelligence and function autonomously, how will we be able to contain that? What are your thoughts on this?


RR: A really good question. What Asimov was referring to is something known in academic AI circles as “Artificial General Intelligence” - the notion that AI systems can learn how to do human things and can also learn how to learn. This is why folks like Daphne Koller prefer to talk about “machine learning” rather than AI, just to avoid that confusion. I don't know how realistic it is. Despite working in the technology industry, I tend to be a bit of a Luddite.  I approach a lot of technology with a healthy skepticism, so I'm not immediately worried that all the healthcare computers in the world will band together and come up with some sinister plot.

I do think that the more we rely on technological solutions, the more we have to concern ourselves with bad actors who may think about breaching those and misappropriating them or using them for nefarious purposes.  To the extent that an AI could become corrupted or fed a crappy training set or something, I think those are perfectly reasonable security risks that we have to concern ourselves with.


JA: Rafael Rosengarten, this is incredibly fascinating, and especially useful for those of us in diagnostics.  Can we talk a little bit about you and what you've done at Genialis?


RR: Sure, I'd be happy to. Genialis is a computational precision medicine company.  We're really focused on the question of how we can predict which patients are most likely to respond positively to therapy.  The flip side of that is, of course, can we predict adverse events as well.

The bulk of our work these days is partnering with pharma companies in clinical development - trying to make clinical trials more effective and give patients a fighting chance, by working toward trial designs where the patients who are enrolled are, in fact, likely to benefit.

Clinical trials aren't just experiments meant to get a drug approved.  These are meant to be a way for patients to access the cutting edge of therapy.  It's a great time to be working in cancer therapeutics and CNS therapeutics and other diseases, because the number of new and high-potential therapeutic modalities is through the roof.  We have incredible agents that we can treat people with now.  Cell and gene therapy are no longer a thing of science fiction.  

But all of these new modalities treat more complex biological systems, rather than a single chemical entity hitting a single protein target, and that's complicated. Human biology is complicated. Genialis is not afraid of the dirty water of patient data.  In fact, those are the waters we like to swim in.  We've developed a lot of technology, some of it machine learning based, for aggregating data and making sure those data are cleaned and processed appropriately.

That's the most boring part, but it's also the most important part - making sure you have high-quality datasets going into any modeling effort. Then we build predictive models to try to predict patient outcomes for new therapeutic modalities.


JA: We’re definitely entering into a very exciting time. What do you think we have to look forward to in the next decade or so?  And then, let me throw you a curveball.  Could you give us the pessimistic scenario? What if we don't take advantage of all these new tools that we have available to us?  


RR: To me, the most pessimistic scenario is that some of these really exciting, high-potential therapeutic and technology approaches hit speed bumps - something goes wrong in limited cases, but those cases are sufficiently damaging to the effort that we see more setbacks.  We've already seen this, for example, in the gene therapy space back in the 1990s. There was a single patient death, and people who work in that space will tell you the entire field was set back by ten years.  I think the regulatory agencies and the different drug developers and researchers now have a slightly more seasoned approach to risk.  Hopefully they can better understand the landscape and the setbacks. This is where the technology can help us be very thoughtful and very thorough in modeling out the potential of these new therapies.

Where I think we'll be in ten years… I think that “precision medicine” will just be called “medicine.”  Every cancer will be thought of as a rare disease.  What I mean by that is, we'll think of your cancer as yours.  It's not going to be the same as another patient's with a similar tumor from a similar tissue type.  One area we haven't talked about, but that I'm really hopeful about, is that we can figure out how to better use these technologies for both prevention and early detection.  In a disease like cancer, those are really the most effective ways to treat the disease.


JA: Rafael Rosengarten, thank you so much for joining us.  How can people learn more about you and Genialis?


RR: Follow us on social media. You can find us on LinkedIn and on Twitter, @Genialis.  My handle is @rafecooks. That's a throwback to when I actually used to be a line cook for a living, before my career in science.
