Three reasons why Watson-based AI might fail

(Image by Geralt at Pixabay)

The story of an IBM Watson asset purchase by Francisco Partners was covered in the news.

Perhaps not covered in this announcement are some of the reasons Watson-like, data-dependent AI might fail to provide the anticipated business windfalls.

Bottom line: the data streams of short, episodic clinic and hospital encounters remain insufficient to generate meaningful (business-relevant) predictive insights into complex systems.

Three key reasons jump to mind:

  1. Pre- and post-conditions are too different too quickly
  2. High-yield data elements are missing
  3. Logic is not enough


Pre- and post-conditions are too different too quickly

(Image by S_L at Pixabay)

Many stores recently stopped investing in on-site excess inventory. Extra items do not generate revenue and are considered a cost center. Better instead to fill shelves with items and refill on demand as customers purchase them. Grocery loyalty cards track items at checkout and feed requirements for on-demand purchase orders to restock the shelves.

This works well when yesterday’s customers behave the same way tomorrow’s customers will. But when behavior changes on a dime across an entire customer base, standard AI has no way of predicting or preventing massive disruption of stock availability.

Empty shelves are bad for business.

We can blame Covid for reducing the workforce available to ship and stock items on shelves. Winter weather also had an impact on transport trucks. Perhaps most importantly, customer behavior flipped in ways that eluded AI management. Take coffee, for example: many of us relied on a coffee shop for the vast majority of our consumption and rarely brewed a cup at home. With Covid-inspired home offices, we flipped our behavior almost overnight. Suddenly, a once-a-quarter purchase of coffee beans needed to happen weekly, and no one warned the grocery AI. Life is messy. Pre- and post-conditions in complex systems can change on a dime. Just because we understood yesterday does not mean we can predict tomorrow.
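To make the failure mode concrete, here is a minimal sketch (all numbers invented, not any retailer's actual system) of how a simple moving-average restocking model behaves when demand flips overnight:

```python
# A minimal sketch (all numbers invented; not any retailer's actual
# system): a moving-average restocking model trained on yesterday's
# demand fails when customer behavior flips overnight.

def forecast_demand(history: list[float], window: int = 8) -> float:
    """Predict next-period demand as the mean of the most recent periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Pre-pandemic: a store sells roughly 10 bags of coffee beans per week.
history = [10, 11, 9, 10, 12, 10, 9, 11]

# Behavior flips: home-office workers now brew at home, and actual
# demand jumps to roughly 50 bags per week.
actual_demand = 50

predicted = forecast_demand(history)
print(f"Predicted restock: {predicted:.0f} bags")  # ~10
print(f"Actual demand:     {actual_demand} bags")
print(f"Shortfall: {actual_demand - predicted:.0f} bags -> empty shelves")

# Even after the flip, the moving average only catches up slowly,
# so shelves stay under-stocked for weeks:
for week in range(1, 5):
    history.append(actual_demand)
    print(f"Week {week} forecast: {forecast_demand(history):.0f} bags")
```

The model is not wrong about yesterday; it simply has no input that could signal the flip before the shelves are already empty.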


(Image by Gerd Altmann at Pixabay)

High-yield data elements are missing

GE wouldn’t manage jet engine performance with 15-minute data-gathering episodes separated by weeks, months, or years. Tesla designs for massive data connectivity to the mothership and uses these continuous streams to advance self-driving capabilities. Machines are easy to interrogate using sensor-rich technologies.

Humans are less straightforward. Covid has highlighted our inability to effectively see around the corners of human experience. Two years into the pandemic, as Omicron sweeps across the nation, a recent post reminds us that we still have little ability to predict our range of vulnerability at any given time. Years of 15-minute clinic visits plus intermittent hospitalizations full of medical imaging and prescription medications do little to uncover our relative risk when faced with SARS-CoV-2 infection. High-yield information, the kind that would help sort out which pockets of individuals are at higher risk and why, remains insufficient. How do the social determinants of health impact outcomes? Does living at altitude help? Does a hyper-sanitized childhood matter? Answers to these questions are not available when high-yield elements are missing.
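As a toy illustration of the sampling problem (both the signal and the visit schedule below are invented), compare what a continuous stream sees against four clinic visits a year:

```python
# A toy illustration (invented signal and visit schedule): a year of a
# daily health metric with short "flare" events, observed two ways.
import random

random.seed(42)

DAYS = 365
signal = [random.gauss(100, 5) for _ in range(DAYS)]

# Inject three 5-day flares, e.g. transient physiological stress.
for start in (40, 180, 300):
    for day in range(start, start + 5):
        signal[day] += 40

# Episodic care: four 15-minute clinic visits per year.
visit_days = [10, 100, 200, 290]

flare_days = [d for d, value in enumerate(signal) if value > 120]
flares_seen_in_clinic = [d for d in visit_days if d in flare_days]

print(f"Flare days in the continuous stream:  {len(flare_days)}")            # 15
print(f"Flare days observed at clinic visits: {len(flares_seen_in_clinic)}")  # 0
# A continuous (wearable-style) stream captures every flare; quarterly
# snapshots almost always miss them, so no model trained on clinic
# data alone can learn who is at risk or why.
```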


(Image by Dx21 at Pixabay)

Logic is not enough

George Gilder’s Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy reminds us that AI is, by design, based on logic. Yet logic is not a universal theory; not all things can be proven by reason alone.

A systems biology speaker once shared an example that relates to AI-enabled drug discovery processes.

We are discovering drugs in silico today. This means that rather than extracting substances from plants and testing their effectiveness in petri dishes in a lab, we build models of expected biological targets on computer chips. Then we run a stream of digital therapeutic candidates past these targets in order to predict which candidates may exhibit the preferred behavior in real life.

Imagine that we need a drug to enter the nucleus of a cell in order to deliver the needed therapeutic result, such as vaccine-mediated immunity. Experts can design a digital model of a nuclear pore (our biological doorway into the nucleus of our cells, where we store our DNA). Using this model, hundreds of thousands of potential digital candidates could be screened using AI for possible use in vaccine development. These models have practical limitations. The Yellow Fever vaccine, available since 1937, predates computers and AI but remains one of our most effective vaccines. Allow me to repeat: this vaccine confers meaningful immunity across populations.

Imagine the surprise when the AI screening model of a Yellow Fever vaccine resulted in rejection! The AI kicked it out as unlikely to be effective. Clearly, designing models based on three-dimensional configurations and surface charges (positive, negative, and neutral) plus a few other parameters is woefully insufficient to explain the in vivo (real-life) experience confirming the efficacy of the Yellow Fever vaccine. If electrical-charge logic and three dimensions are not enough, what would n dimensions look like, and how could we get there?
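A minimal sketch of this failure mode, with a hypothetical scoring rule and invented parameters (not the actual screening model from the talk), shows how a low-dimensional screen can reject a compound that is proven to work in real life:

```python
# A minimal sketch with a hypothetical scoring rule and invented
# parameters (not the actual screening model from the talk): a screen
# that judges candidates on a few low-dimensional features can reject
# a compound that is proven effective in vivo.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    radius_nm: float      # modeled particle size
    surface_charge: int   # -1 negative, 0 neutral, +1 positive

def passes_nuclear_pore_screen(c: Candidate) -> bool:
    """Toy rule: admit only small, positively charged candidates."""
    return c.radius_nm < 20 and c.surface_charge > 0

candidates = [
    Candidate("digital-candidate-A", radius_nm=15, surface_charge=+1),
    Candidate("digital-candidate-B", radius_nm=30, surface_charge=-1),
    # A stand-in for an empirically proven vaccine whose particle
    # happens not to fit the model's narrow admission criteria:
    Candidate("proven-vaccine", radius_nm=45, surface_charge=0),
]

for c in candidates:
    verdict = "pass" if passes_nuclear_pore_screen(c) else "REJECTED"
    print(f"{c.name}: {verdict}")
# "proven-vaccine" is REJECTED: a false negative produced by the
# model's low-dimensional view, not by real-world efficacy.
```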

These are not small problems. An IBM Watson timeline indicates that after years of effort and various successes, by 2018 the tide had turned. The algorithm could, in some cases, recommend “potentially harmful treatments”.

AI-based models need more data, better data, and more timely data than medical encounters, imaging, laboratory testing, wearables, and essentially all Web 2.0 approaches currently provide.

How do we solve these issues?

We enable web3-based, financially incentivized, ongoing, and permissioned high-yield data streams direct from complex, free-living systems. Most importantly, we rely on the continuous creativity of human health expression across crowds to self-correct algorithms.
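As a rough sketch of what such a stream could look like (the field names, credit amounts, and consent mechanics below are all assumptions, with no specific web3 platform, token, or API implied):

```python
# A rough sketch of a permissioned, incentivized data stream. Every
# name, field, and credit amount here is an assumption for
# illustration; no specific web3 platform, token, or API is implied.
from dataclasses import dataclass, field

@dataclass
class Contributor:
    contributor_id: str
    consent_granted: bool = True   # revocable at any time
    credits: float = 0.0           # incentive balance

@dataclass
class DataStream:
    readings: list = field(default_factory=list)
    credit_per_reading: float = 0.05  # assumed incentive rate

    def submit(self, person: Contributor, reading: dict) -> bool:
        """Accept a reading only while consent stands; credit the source."""
        if not person.consent_granted:
            return False
        self.readings.append({"from": person.contributor_id, **reading})
        person.credits += self.credit_per_reading
        return True

alice = Contributor("alice")
stream = DataStream()
stream.submit(alice, {"metric": "resting_hr", "value": 62})   # accepted
alice.consent_granted = False                                 # consent revoked
stream.submit(alice, {"metric": "resting_hr", "value": 64})   # rejected
print(len(stream.readings), alice.credits)                    # 1 0.05
```

The design point is the inversion of control: data flows continuously, but only while the contributor keeps the door open, and each contribution is credited rather than extracted.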

Indeed, web3-supported business models are proving to be the basis of our next inflection of extreme value creation. Crypto may be an early use case, but healthcare has much to offer.

Stay tuned as the discussion develops in upcoming posts.

Comments are most welcome, as it will take a village to deliberately design our preferred future.

Loy Lobo

Founder, Digital Health Strategist, Innovator & Leader. Independent Director/NED. Educator & Mentor. #innovation #technology #healthcare #lifesciences

2 years ago

A provocative piece, Brigitte Piniewski MD. I don't think any AI will be able to reliably predict when a system might flip. There will have to be a system-dynamics logic structure for something like that to work, which in turn might require many predictive models to work in concert. Even so, such models cannot deal with Black Swan events. Healthcare has an additional challenge to those already mentioned by Phil Wolff (regulation and vested interests), Rick Meider (infrastructure, integration), Richard Whitt and Johannes Ernst (trust), and so many others in their insightful responses ... Health data has a significant legacy of messy, unstructured, and often context-free data. Using this as fodder for AI creates the risk of harm at a scale not previously possible. It seems like we are dealing with the digital equivalent of nuclear power.

Dave Jarecki

Professional writer, ghostwriter, storyteller

2 years ago

Great insights, Brigitte Piniewski MD, particularly this point: "Bottom line: the data streams of short, episodic clinic and hospital encounters remain insufficient to generate meaningful (business-relevant) predictive insights into complex systems."

Philip Wolff

Chief of Staff | Product Strategist

2 years ago

1. So, data, right? There's massive US local/state hostility to giving patients access to or control over their clinical data. More than half of US states have laws that interfere with this and that allow or require limits on data sharing with patients. I'm guessing this is instigated by those worried about medical liability and malpractice, about patients second-guessing provider behavior, and about the economic value of controlling verifiable data for research.

2. And as consumers buy/rent/use non-prescribed data collection devices (Fitbits, oximeters, Pelotons, Apple Watches, etc.), fusion of data from diverse and non-standard sources becomes challenging. Floods of data, but lots of it not normalized. For now.

3. I'm not writing off Watson-style projects. Their failures and obstacles show gaps between theory and reality. And then they improve. The rate of improvement may be in fits and starts, but it keeps on coming.

4. Monetizing my digital self in the breadth and density you anticipate feels like a human rights violation. Europe's Data Governance Act (DGA) introduces the role of an "intermediary" that acts for patients about their data, with fiduciary duties enforced by law. Richard Whitt, humans need trusted data fiduciaries to handle their data at scale and at speed.

