Can AI Really Make Healthcare More Human—and not be creepy?
Photo by author

Within a few weeks I read “How Algorithms Could Bring Empathy Back To Medicine,” “How AI Is Humanizing Healthcare,” and “Making Health Care Human Again.” They came from quite different publishing venues: Nature, Mission, and Fortune. That diversity alone sends a clear message: technology is cool, but something is missing from our experience of healthcare, both as doctors and as patients.

First off, full disclosure: I’m a longtime fan of Eric Topol, MD’s work. I had the chance to meet up with him at South By Southwest a few years back, having just finished enjoying his then-new book, “The Patient Will See You Now.” More recently I read his piece on high-performance medicine in Nature Medicine and his great new book, “Deep Medicine,” both of which look at what artificial intelligence can and cannot (yet) do in healthcare and medical practice. I like his informed, albeit ironic, outlook that something like machine learning can make healthcare more human.

It was also interesting to learn from his exhaustive review that, however Jetsons-like artificial intelligence (AI) and machine learning (ML) may feel in our homes, and however equal parts awesome and terrifying the movies make them out to be, they remain a fair distance from being a panacea in medicine.

Bottom line (and spoiler alert): AI is great for diagnostic imaging and radiology, where it “sees” better than we can, and for processing medical histories and the voluminous evidence-based scientific and clinical literature, because it can remember and synthesize patterns better and faster than we can. Neither of those is a trivial success; keeping current with medical progress is a humanly impossible task. But much of what’s what in AI and healthcare is still just promising ideas making their way up the hype scale, as many are not yet operational.

Tech to the Rescue

Many years ago, with my brand-new license proudly mounted on my wall, I was actually a bit terrified in my first few years of post-doctoral practice that I would misdiagnose someone (Is it depression or an endocrine disorder...?) and thus mistreat them. In response to my anxiety and imposter feelings, I called upon my prior computer science training and cobbled together a differential diagnostics algorithm (one that would certainly NOT qualify as AI) to suss out the probabilistic difference between a true positive psychiatric diagnosis and a mimicking physical one.
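
For the curious, here is a minimal sketch (in Python) of what that kind of decidedly-not-AI tool can look like: a naive application of Bayes’ rule that weighs a handful of findings against two candidate diagnoses. Every prior and likelihood below is invented purely for illustration; nothing here reproduces the actual algorithm I wrote or any clinical data.

```python
# A toy Bayesian differential: given a set of observed findings, compare the
# posterior probability of a psychiatric diagnosis against a mimicking
# physical one. All priors and likelihoods are invented for illustration.

PRIORS = {"major_depression": 0.08, "hypothyroidism": 0.02}

# P(finding present | condition) -- hypothetical numbers, not clinical data.
LIKELIHOODS = {
    "major_depression": {"fatigue": 0.85, "low_mood": 0.95,
                         "weight_gain": 0.40, "cold_intolerance": 0.05},
    "hypothyroidism":   {"fatigue": 0.90, "low_mood": 0.55,
                         "weight_gain": 0.70, "cold_intolerance": 0.65},
}

def differential(findings):
    """Naive-Bayes style posterior over the two candidate diagnoses."""
    scores = {}
    for dx, prior in PRIORS.items():
        p = prior
        for finding in findings:
            # Findings not in the table are treated as uninformative (0.5).
            p *= LIKELIHOODS[dx].get(finding, 0.5)
        scores[dx] = p
    total = sum(scores.values())
    return {dx: round(p / total, 3) for dx, p in scores.items()}

print(differential(["fatigue", "low_mood", "weight_gain", "cold_intolerance"]))
```

With those made-up numbers, the “physical mimic” wins once weight gain and cold intolerance enter the picture, which is exactly the kind of nudge I wanted the tool to give me.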

Even further back in time, 1955 to be precise, John McCarthy coined the term “Artificial Intelligence,” and four years later Arthur Samuel described what we now think of as machine learning. In the clinical realm, ELIZA became the world’s first psychotherapist chatbot in 1964 and was, by some accounts, capable of passing the Turing Test. ELIZA was light-years beyond what I wrote. But even with this decades-long history of AI and ML, including clinical uses, it was not until 2016 that the term machine learning first appeared in the (arguably) top two American medical journals: The New England Journal of Medicine and the Journal of the American Medical Association.

Side-effects? What side-effects?

I’d like to think AI is agnostic, bias-free, and data-centric, but in AI, just as with humans, you are what you eat. In one of my podcast episodes, Heather Dewey-Hagborg and I discussed how coders’ biases and the limited availability of training data and images get programmed into facial recognition systems, which can only reflect what they were built from. Topol notes Cathy O’Neil’s finding in her book Weapons of Math Destruction that “many of these models encoded human prejudice, misunderstanding, and bias.” Uh-oh.

So it seems that, as with most procedures, medications, and treatments, there may be an iatrogenic bad that comes along with the therapeutic good. That is particularly hard to spot given the often “black box” nature of AI algorithm development in any field, let alone medicine.

IBM’s Dr. Watson’s patients are not faring well either: it was found to have “recommended ‘unsafe and incorrect’ cancer treatments.” And in our app-happy world there are further worries. For example, Saeb and colleagues note that the standard way to evaluate a predictive algorithm’s accuracy is cross-validation, yet not every cross-validation scheme is statistically meaningful. They report that “…record-wise cross-validation often massively overestimates the prediction accuracy of the algorithms… (and) …that this erroneous method is used by almost half of the retrieved studies… (to) predict clinical outcomes.” Uh-oh, again.
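
To make the record-wise versus subject-wise distinction concrete, here is a hedged sketch on synthetic data, assuming scikit-learn is available. The effect it shows is the general phenomenon Saeb and colleagues describe, not a reproduction of their analysis.

```python
# Sketch of the record-wise vs. subject-wise cross-validation pitfall.
# Synthetic data only; the effect size here is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_subjects, records_per_subject = 50, 20

# Each subject has a stable "fingerprint" plus noise; the label is a property
# of the subject, not of the individual record.
subject_ids = np.repeat(np.arange(n_subjects), records_per_subject)
subject_effect = rng.normal(size=(n_subjects, 5))
X = subject_effect[subject_ids] + rng.normal(scale=0.5, size=(len(subject_ids), 5))
y = (subject_ids % 2).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Record-wise CV: records from the same subject land in both train and test
# folds, so the model can "recognize" the subject and accuracy is inflated.
record_wise = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Subject-wise CV: all of a subject's records stay on one side of the split,
# which is what a deployed model faces with a genuinely new patient.
subject_wise = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=subject_ids)

print(f"record-wise accuracy:  {record_wise.mean():.2f}")
print(f"subject-wise accuracy: {subject_wise.mean():.2f}")
```

Subject-wise (grouped) splitting mirrors deployment: the model has to generalize to patients it has never seen, which is exactly the condition record-wise splitting quietly violates.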

More than AI?

Not all innovation rests on the current gee-whiz aspects of AI and ML; it can also come from big data and good old linear regression, risk adjustment, and probabilistic prediction.
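
As a toy illustration of that “good old” end of the spectrum, here is a hedged sketch of risk adjustment with ordinary least squares: model the expected PROM improvement from case-mix variables, then compare each clinic’s observed average against that expectation. All column names, coefficients, and data are hypothetical.

```python
# Toy risk adjustment: fit OLS on case-mix variables to get an expected PROM
# improvement, then compare observed vs. expected (O/E) by clinic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "comorbidity_count": rng.poisson(1.5, n),
    "baseline_score": rng.normal(50, 10, n),
    "clinic": rng.choice(["A", "B", "C"], n),
})
# Simulated improvement: worse with age and comorbidities, plus noise.
df["improvement"] = (20 - 0.10 * df["age"] - 2.0 * df["comorbidity_count"]
                     + 0.15 * (60 - df["baseline_score"]) + rng.normal(0, 4, n))

# The risk model sees only case mix, never the clinic identity.
case_mix = df[["age", "comorbidity_count", "baseline_score"]]
model = LinearRegression().fit(case_mix, df["improvement"])
df["expected"] = model.predict(case_mix)

summary = df.groupby("clinic")[["improvement", "expected"]].mean()
summary["o_to_e"] = summary["improvement"] / summary["expected"]
print(summary)  # o_to_e above 1 suggests better-than-predicted outcomes for that case mix
```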

My early clinical experiences started me on the journey to evidence-based practice (EBP), with a focus on clinical outcomes measured by Patient Reported Outcome Measures (PROMs) rather than clinician opinion alone. I've long been a big fan of evidence-based medicine and evidence-based methods. The next evolution was to develop guideline-informed approaches to care. Such methods demonstrate great utility while still leaving the freedom to tweak as needed to fit a patient's presentation and the idiosyncrasies of their medical history and comorbidities.

Topol himself speaks of needing “…a special, bespoke PT protocol…” because of his atypical knee replacement surgery and post-op rehabilitation, which highlights one of evidence-based practice’s dirty little secrets: one size (guideline) fits few.

In contrast, I feel we are in a robustly developing clinical sphere of “practice-based evidence.” That is, we are now also learning from large databases built on “real-world” patient presentations, the care those patients receive, and the resulting outcomes: what happens once we leave the controls of clinical research and its exclusion criteria for subject selection.

I’ve previously noted that patient registries are a growing resource and a counterbalance that augments peer-reviewed journals. These registries may come from universities, professional guilds and associations, and even large practices. I wrote in a geek.ly article about the one I have developed, which focuses on outpatient orthopedic rehabilitation cases.

Others have orthopedic surgical foci. They include the University of Massachusetts Medical School’s Function and Outcomes Research for Comparative Effectiveness in Total Joint Replacement and Quality Improvement (FORCE-TJR), which focuses on total joint replacement practices, and the North American Spine Society’s Spine Registry, a “diagnosis-based clinical data registry that tracks patient care and outcomes.” The American Academy of Orthopedic Surgeons has developed the American Joint Replacement Registry. The American Association of Neurological Surgeons uses the Quality Outcomes Database to collect, analyze, and report on nationwide clinical data from neurosurgical practices. And the list of registry developers keeps growing; it now includes psychologists, physical therapists, and other healthcare providers.

Each registry application goes through a vetting process at ClinicalTrials.gov and, if approved, is added to the Agency for Healthcare Research and Quality's Registry of Patient Registries. Clinicians and researchers can then access and benefit from the clinical trials performed by other groups, and gain visibility into the outcomes of interventions conducted in more "real-world" clinical settings. This lets research be leveraged far more broadly than before, and lets clinicians and researchers test hypotheses without the time and expense of conducting primary research or collecting their own data.

And as for the Future

Given my poor forecasting record on technology and medicine, as I have previously written as a LinkedIn Influencer, perhaps the better approach is to help invent the future rather than try to predict it (a la Dennis Gabor’s observation). So here are some of the projects we are working on to help invent a better future in healthcare:

  • Developing ways to measure and report on Clinician Performance to ensure consistent quality of care across clinics, better manage staff, and help tailor training needs and resource deployment,
  • Synthetic integration of Treatment Guidelines to be easily available to clinicians via the electronic medical record,
  • Measurement of Patient Reported Outcomes and use in optimizing Medicare reimbursements based on value of care provided, contracting with third party payers, contracting with self-insured employers, contracting with Unions and other guilds and associations, contributing to our National Outcomes Registry,
  • Scaling specialized treatment approaches, along with their clinical outcomes and concomitant economic savings, aimed at combating overutilization of expensive procedures and opioids,
  • Use of Machine Learning and Artificial Intelligence to more specifically tailor “bespoke” treatment guidelines,
  • Expanding involvement in Bundled Payments and sophistication of managing risk and measuring clinical outcome performance,
  • Development of Risk-Adjusted Treatment Outcomes that enhance all programs, argue for rational reimbursement from less sophisticated payers, and create an empirical foundation for Value-Based Care,
  • Predicting Patient No-shows and testing optimal responses for mitigating them (a minimal sketch of such a predictor follows this list),
  • Expansion of apps for home exercise program deployment and monitoring,
  • Incorporating new Patient Reported Outcome measures for more specialty services (e.g., occupational therapy, women’s health, neurological conditions, etc.),
  • Exploration of Telehealth/Telerehab applications, and
  • Exploration of Virtual Reality in care provision. 
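
As referenced in the no-show item above, here is a minimal sketch of what such a predictor might look like: a plain logistic regression over a few scheduling features, trained and evaluated on synthetic data. The feature names and effect sizes are hypothetical and not drawn from any real scheduling system.

```python
# Toy no-show predictor: logistic regression over a few scheduling features,
# producing a probability that could drive reminder calls or overbooking rules.
# Features, coefficients, and data are all synthetic and hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
appts = pd.DataFrame({
    "lead_time_days": rng.integers(0, 60, n),
    "prior_no_shows": rng.poisson(0.5, n),
    "is_monday": rng.integers(0, 2, n),
    "telehealth": rng.integers(0, 2, n),
})
# Simulated ground truth: long lead times and prior no-shows raise the risk.
logit = (-2.0 + 0.03 * appts["lead_time_days"]
         + 0.80 * appts["prior_no_shows"] - 0.40 * appts["telehealth"])
appts["no_show"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    appts.drop(columns="no_show"), appts["no_show"],
    test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print(f"hold-out AUC: {roc_auc_score(y_test, risk):.2f}")

# Flag the riskiest upcoming appointments for a reminder or schedule review.
flagged = X_test.assign(no_show_risk=risk).nlargest(5, "no_show_risk")
print(flagged)
```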

I’m also working with a group of colleagues to develop what may be the first integrative work-up to include a person’s psychological and emotional health alongside their genetic and microbiome data. The findings would be interpreted into treatment and/or lifestyle recommendations, powered by a proprietary algorithm.

I do believe we are moving toward personalized medicine, though perhaps led not by the biological but by the behavioral and social. The point is that AI does hold the promise of making healthcare more human, by making it easier to get back to being human and creating space for our humanity and empathy in the exam room.

 #         #         #

If you'd like to learn more or connect, please do at https://DrChrisStout.com. You can follow me on LinkedIn, or find my Tweets as well. And goodies and tools are available via https://ALifeInFull.org.

If you liked this article, you may also like:

How to Protect Yourself from Fad Science

Technology Trends in Healthcare and Medicine: Will 2019 Be Different?

Commoditization, Retailization and Something (Much) Worse in Medicine and Healthcare

Fits and Starts: Predicting the (Very) Near Future of Technology and Behavioral Healthcare

Why I think 2018 will (Finally) be the Tipping Point for Medicine and Technology

Healthcare Innovation: Are there really Medical Unicorns?

Can (or Should) We Guarantee Medical Outcomes?

A Cure for What Ails Healthcare's Benchmarking Ills?

Why Global Health Matters

Can A Blockchain Approach Cure Healthcare Security's Ills?

Why Medicine is Poised for a (Big) Change

Is This the Future of Medicine? (Part 5)

Bringing Evidence into Practice, In a Big Way (Part 4)

Can Big Data Make Medicine Better? (Part 3)

Building Better Healthcare (Part 2)

Is Technology the Cure for Medicine’s Ills? (Part 1)

Access to Healthcare is a US Problem, Too

Niamh Williams

ELearning Officer with Failte Ireland

5y

This might interest you, Fiona Quigley

Michał Urbanowicz, M.D.

Medical Consultant | Healthcare Marketing Expert

5y

Good conclusions. #AI definitely will not be the panacea for every problem, but I do believe that a physician supported by an algorithm can make a great team, and the technology can give the power back to patients. As mentioned, we should keep an eye on what we're putting in.

Paula Ralph

Women's Health and Surgery Coach | Gynaecology | Hysterectomy | Birth Trauma | Endometriosis | Health | Pharmacist

5y

As long as AI doesn't forget compassion, emotion, and the little intricacies and nuances of being human, I'm all for it. That language must be built into AI in order to fully serve patients.
