Barriers to AI dissemination and implementation in medicine

AI, machine learning, neural networks and deep learning are the new, new thing. Applications in medicine are potentially vast and, as with most things on the rising slope of the hype cycle, there is a proliferation of papers, conferences, webinars, and organizations trying to stay ahead of the curve. Doing so, however, also puts you on the leading edge of the slide into the trough of disillusionment.

Here is a primer on AI, machine learning and deep learning.

Deep learning has hit a wall.

Here it is 5 years later.

A recent Change Healthcare study found that artificial intelligence (AI) is driving a wide range of improvements in healthcare, but the approach is tactical and not end-to-end.

As of 2021, nine in ten hospitals have an artificial intelligence strategy in place, and 75 percent of healthcare executives believe AI initiatives are more critical now because of the pandemic, according to a report from Sage Growth Partners.

But most deployments are in the early stages.

The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.

A living systematic review of AI models pertaining to covid-19, as of August 2021, identified 27 models for disease progression. Another systematic review, published in March 2021, evaluated 62 studies involving the use of chest computed tomography for prediction of covid-19 disease progression. Both reviews found methodological flaws and risks of bias in almost all studies, attributing this, at least in part, to lack of coordination.

The authors of the article note that to achieve a sustainable impact, AI researchers should look beyond model development and consider how solutions can be practically and ethically implemented at the bedside. This approach demands a broader perspective that ensures integration with hospital systems, satisfies ethical standards to safeguard patients, and adapts to existing workflows in a way that acknowledges and leverages clinical expertise. If AI researchers do not adapt their work to real-world clinical contexts, they risk producing models that are irrelevant, infeasible, or irresponsible to implement.

In the study, Poised to Transform: AI in the Revenue Cycle, researchers measured healthcare executives’ familiarity with AI, discovered areas for improvement, and learned how the technology is used now and will be used in the future.

Specifically, they found that AI will transform the way doctors, hospitals, and healthcare systems collect and manage their revenue cycles, with 98 percent of healthcare leaders anticipating using AI in revenue cycle management (RCM) and 65 percent reporting that they currently use AI for RCM.

But financial, security, and privacy concerns hinder AI adoption and reduce its chances of success.

Despite advances in computer technology and other parts of the fourth industrial revolution, there are many barriers to overcome before machine learning crosses the chasm. Here are some things you should know about dissemination and implementation, and about the basics of innovation diffusion.

John Halamka points to what he calls the four grand challenges to AI adoption in healthcare:

  1. Gathering valuable novel data – such as GPS information from phones and other devices people carry as well as wearable technology – and incorporating it into algorithms.
  2. Creating discovery at an institutional level so that everyone – including those without AI experience – feels empowered and engaged in algorithm development.
  3. Validating an algorithm to ensure, across organizations and geographies, that it is fit for purpose and labeled appropriately, both as a product and as it is described in the academic literature.
  4. Workflow and delivery – getting information and advice to physicians instantly while they’re in front of patients.

There are four basic categories of barriers: 1) technical, 2) human factors, 3) environmental, including legal, regulatory, ethical, political, societal, and economic determinants and 4) business model barriers to entry.

TECHNICAL

A recent Deloitte report highlighted the technical barriers and the vectors of progress for overcoming them.

Explainability, for example, is a barrier. Say a model can predict the onset of Type 2 diabetes. It’s one thing to say that we think there’s a propensity but, typically, the next question is “Why?” Most algorithms can’t answer that; if we provide a prediction, we should also be able to answer those kinds of questions.
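
To make this concrete, here is a minimal sketch of what pairing a prediction with a “why” can look like when the model is inherently interpretable. The feature names and data are synthetic stand-ins, not a real diabetes model.

```python
# Minimal sketch: an interpretable risk prediction that can answer "Why?".
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["bmi", "fasting_glucose", "age", "family_history"]
rng = np.random.default_rng(0)

# Synthetic cohort standing in for real training data.
X = rng.normal(size=(500, 4))
y = (X @ np.array([0.9, 1.4, 0.5, 0.7]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one patient, decompose the log-odds into per-feature contributions,
# so the "why" accompanies the prediction.
patient = X[0]
risk = model.predict_proba([patient])[0, 1]
contributions = model.coef_[0] * patient

print(f"Predicted propensity for Type 2 diabetes: {risk:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f} to the log-odds")
```

Black-box models need extra machinery (post hoc attribution methods, for example) to produce a comparable answer, which is exactly where the explainability barrier bites.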

In a paper published in Science, researchers raise the prospect of “adversarial attacks” — manipulations that can change the behavior of A.I. systems using tiny pieces of digital data. By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.
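
The best-known recipe for such a manipulation is the fast gradient sign method (FGSM). The sketch below shows its mechanics against a toy, untrained classifier standing in for a real imaging model; against a real trained model, perturbations this small can flip the diagnosis.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's loss. The model and "scan" are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))  # toy 2-class classifier
model.eval()

scan = torch.rand(1, 1, 64, 64, requires_grad=True)  # stand-in for a lung scan
true_label = torch.tensor([0])                       # 0 = "no disease"

# Gradient of the loss with respect to the input pixels.
loss = nn.functional.cross_entropy(model(scan), true_label)
loss.backward()

epsilon = 0.01  # an imperceptibly small per-pixel change
adversarial_scan = (scan + epsilon * scan.grad.sign()).clamp(0, 1)

print("original prediction: ", model(scan).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial_scan).argmax(dim=1).item())
```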

Software developers and regulators must consider such scenarios, as they build and evaluate A.I. technologies in the years to come, the authors argue. The concern is less that hackers might cause patients to be misdiagnosed, although that potential exists. More likely is that doctors, hospitals and other organizations could manipulate the A.I. in billing or insurance software in an effort to maximize the money coming their way.

Measuring and reporting results is another barrier since there are lies, damned lies and AI statistics.

Bias is a barrier.

Then, there is the problem of how and where to store all the data.


These authors suggest always discussing design choices and study assumptions with clinicians or other healthcare providers who are knowledgeable about local protocols. Furthermore, causal inference frameworks should be incorporated if studies aim to analyze or predict outcomes that result from treatment decisions (e.g., which patients should be offered renal replacement therapy). The authors list the identified pitfalls and outline potential solutions, structured according to the analysis stage in which they are most likely to occur.
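
A tiny simulation shows why the causal framing matters: when treatment is given preferentially to sicker patients, a purely associational model can make an effective treatment look harmful. All variables below are simulated and invented for illustration.

```python
# Toy confounding demo: treatment helps, but because sicker patients are
# treated more often, the naive association points the other way.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
severity = rng.normal(size=n)                    # confounder: how sick the patient is
treated = severity + rng.normal(size=n) > 0.5    # sicker patients get treated
# True data-generating process: severity hurts (+2.0), treatment helps (-1.0).
risk = 2.0 * severity - 1.0 * treated + rng.normal(size=n)

# Naive comparison: treated patients look worse off.
print("mean risk, treated:  ", risk[treated].mean())
print("mean risk, untreated:", risk[~treated].mean())

# Comparing within a narrow severity band (a crude adjustment) recovers
# the benefit; causal inference frameworks formalize this kind of reasoning.
band = np.abs(severity - 0.5) < 0.1
print("within-band, treated:  ", risk[band & treated].mean())
print("within-band, untreated:", risk[band & ~treated].mean())
```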

Recent years have seen several reports (PDF) warning of algorithms intended to support care delivery inadvertently driving race and class disparities. The U.S. health system’s history of inequity is a key roadblock.

Here are some benefits, limits and risks of GPT-4 as an AI chatbot for medicine.

HUMAN FACTORS

Human factors, like how and whether doctors will use AI technologies, can be reduced to the ABCDEs of technology adoption. Research suggests the reasons more ideas from open innovation aren’t being adopted are political and cultural, not technical. Multiple gatekeepers, skepticism regarding anything “not invented here,” and turf wars all hold back adoption.

Attitudes: While the evidence may point one way, a doctor may doubt that it pertains to a particular patient, or may harbor a general bias against “cookbook medicine.”

Biased Behavior: We’re all creatures of habit, and habits are hard to change. Particularly for surgeons, the switching costs of adopting a new technology and running the risk of exposure to complications, lawsuits and hassles simply aren’t worth the effort.

Cognition: Doctors may be unaware of a changing standard, guideline or recommendation, given the enormous amount of information produced daily, or might have an incomplete understanding of the literature. Some may simply feel the guidelines are wrong or do not apply to a particular patient or clinical situation and reject them outright.

Denial: Doctors sometimes deny that their results are suboptimal and in need of improvement, based on “the last case.” More commonly, they are unwilling or unable to track short-term and long-term outcomes to see whether their results conform to standards.

Emotions: Perhaps the strongest motivators: fear of reprisals or malpractice suits; greed driving the use of inappropriate technologies that generate revenue; the need for peer acceptance, to “do what everyone else is doing”; or ego driving the opposite need, to be on the cutting edge, win the medical technology arms race, or create a perceived competitive marketing advantage.

In addition, medical schools, graduate medical education and graduate schools are not doing enough to train knowledge workers how to be more effective.

Patients are also pushing back. Healthcare consumers see AI-delivered healthcare as standardized and therefore neglectful of patients’ individual needs, which is one reason they tend to be less accepting of healthcare delivered by AI than that provided by humans.

ETHICS/LEGAL/REGULATORY/IP

Ethical and legal issues are major challenges to dissemination of AI.

The UK House of Lords Select Committee on Artificial Intelligence has asked the Law Commission to investigate whether UK law is "sufficient" when systems malfunction or cause harm to users.

The recommendation comes as part of a report by the 13-member committee on the "economic, ethical and social implications of advances in artificial intelligence".

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

The Nuffield Council on Bioethics identified the ethical and societal issues as:

  1. Reliability and safety
  2. Transparency and accountability
  3. Data bias, fairness and equity
  4. Effects on patients
  5. Trust. The future success of artificial intelligence may depend on convincing people that learning from mistakes isn’t necessarily just human, either.
  6. Effects on healthcare professionals
  7. Data privacy and security
  8. Malicious use of AI

Finally, the parts of the environmental SWOT analysis are more wild cards in the game.

Here are the issues under discussion about patenting AI products and services.

Here are some other legal concerns.

In recent weeks, government bodies — including U.S. financial regulators, the U.S. Federal Trade Commission, and the European Commission — have announced guidelines or proposals for regulating artificial intelligence. Clearly, the regulation of AI is rapidly evolving. But rather than wait for more clarity on what laws and regulations will be implemented, companies can take actions now to prepare. That’s because there are three trends emerging from governments’ recent moves:

The first is the requirement to conduct assessments of AI risks and to document how such risks have been minimized (and ideally, resolved).

The second trend is accountability and independence, which, at a high level, requires both that each AI system be tested for risks and that the data scientists, lawyers, and others evaluating the AI have different incentives than those of the frontline data scientists.

The last trend is the need for continuous review of AI systems, even after impact assessments and independent reviews have taken place.

Here is an analysis of the many societal, ethical, and legal challenges of generative artificial intelligence like ChatGPT. GPT stands for generative pre-trained transformer, which is a program that can realistically write like a human.

Generative AI, which uses data lakes and question snippets to recover patterns and relationships, is becoming more prevalent in creative industries. However, the legal implications of using generative AI are still unclear, particularly in relation to copyright infringement, ownership of AI-generated works, and unlicensed content in training data.

ENVIRONMENTAL AND BUSINESS MODEL BARRIERS TO ENTRY

Startup developers of commercial AI applications operate in a competitive market. They compete with the data available to them and meet a market need for AI applications for midsize companies, which, in turn, enables those companies to compete with larger companies that often develop AI applications internally.

Here is how AI vendors can overcome the business model issues.

Some healthcare organizations contend that financial constraints are the limiting factor. A lack of dollars makes it difficult for all but the most advanced and lucrative healthcare organizations to put machine learning or artificial intelligence in place to make the most of their data. There are many more practical barriers contributing to the AI divide.

A recent overview of AI challenges by the National Academy of Medicine highlights the issues, concluding that AI is poised to make transformative and disruptive advances in health care, but that it is prudent to balance the need for thoughtful, inclusive health care AI that plans for and actively manages and reduces potential unintended consequences, while not yielding to marketing hype and profit motives.

Here are some regulatory and reimbursement issues.

AI dissemination and implementation faces some NASSSy hurdles (the acronym stands for Nonadoption, Abandonment, and Challenges to the Scale-up, Spread, and Sustainability of health and care technologies).

The goal of applying artificial intelligence or augmented intelligence to medicine is to help stakeholders add value to outcomes, costs, access, experience and business processes. In other words, it should help us practice more intelligent medicine.

These researchers’ objective was to pinpoint the factors helping AI’s cause in clinical radiology as well as those holding it back. They found:

Key facilitating factors:

  1. Pressure for cost containment throughout healthcare
  2. Elevated expectations of AI’s potential added value
  3. The presence of hospital-wide innovation strategies
  4. Presence of a “local champion”

Key hindering factors:

  1. Inconsistent technical performance of AI applications
  2. Unstructured implementation processes
  3. Uncertain added value for clinical practice of AI applications
  4. Large variance in acceptance and trust of direct (radiologists) and indirect (referring clinicians) adopters
  5. Demonstrating AI’s financial ROI and its ability to reduce waste and improve administrative efficiency
  6. Reimbursement for using it

Pat Baird, Regulatory Head of Global Software Standards for Philips, agrees that there are three different categories of trust:

>> The first was technical trust, related to the data used to train the AI.

>> The second was human trust, related to the usability of the system.

>> The third was regulatory trust, relating to frameworks and standards, as well as the ethical, legal and social implications of AI.

“The inconvenient truth” is that at present the algorithms that feature prominently in research literature are, for the most part, not executable at the frontlines of clinical practice. This is for two reasons. First, these AI innovations by themselves do not re-engineer the incentives that support existing ways of working. A complex web of ingrained political and economic factors, as well as the proximal influence of medical practice norms and commercial interests, determines the way healthcare is delivered. Simply adding AI applications to a fragmented system will not create sustainable change. Second, most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications, and (b) interrogate them for bias to guarantee that the algorithms perform consistently across patient cohorts, especially those who may not have been adequately represented in the training cohort.
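
As a concrete illustration of point (b), here is a minimal sketch of what “interrogating for bias” can mean operationally: comparing a model’s discrimination across patient cohorts on held-out data. The column names and values are hypothetical; a real audit would use validated local data and clinically appropriate metrics.

```python
# Minimal per-cohort performance audit on hypothetical held-out predictions.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "cohort":  ["A", "A", "A", "B", "B", "B", "B", "A"],
    "y_true":  [1, 0, 1, 0, 1, 0, 1, 0],
    "y_score": [0.9, 0.2, 0.7, 0.4, 0.6, 0.7, 0.5, 0.1],
})

for cohort, group in df.groupby("cohort"):
    auc = roc_auc_score(group["y_true"], group["y_score"])
    print(f"cohort {cohort}: AUC = {auc:.2f} (n = {len(group)})")
# A large gap between cohorts would flag the model for review before
# local deployment, especially for groups underrepresented in training.
```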

Despite seeking out automation solutions and getting a strategy in place, scaling and implementation remain challenges:

  • Only 7 percent of hospitals’ AI strategies are fully operational
  • Just 6 percent of respondents cited having 10 or more use cases live across their organization
  • 44 percent of respondents cited resource constraints (e.g. not enough staff to support implementation) and difficulty identifying best processes for automation as their top two implementation challenges

Even for the relatively skilled job postings in hospitals, which include doctors, nurses, medical technicians, research lab workers, and managers, only approximately 1 in 1,250 job postings required AI skills. This is lower than in other skilled industries such as professional, scientific, or technical services; finance and insurance; and educational services.

To understand the kinds of complementary innovations that might lead to more adoption of AI in hospitals, it is useful to understand why hospitals might hesitate to adopt. Four important barriers to adoption are algorithmic limitations, data access limitations, regulatory barriers, and misaligned incentives.

The authors of this paper present a model for understanding the key drivers of clinical adoption of AI-DDS tools by health systems and providers alike, drawing from these historical examples and the current discourse around AI, as well as notable frameworks of human behavior. This model focuses on eight major determinants across four interrelated core domains; the issues covered within each domain are as follows:

  • Domain 1: Reason to use explores the alignment of incentives, market forces, and reimbursement policies that drive health care investment in AI-DDS.
  • Domain 2: Means to use reviews the data and human infrastructure components as well as the requisite technical resources for deploying and maintaining these tools in a clinical environment.
  • Domain 3: Method to use discusses the workflow considerations and training requirements to support clinicians in using these tools.
  • Domain 4: Desire to use considers the psychological aspects of provider comfort with AI, such as the extent to which the tools alleviate clinician burnout, provide professional fulfillment, and engender overall trust. This section also examines medicolegal challenges, one of the biggest hurdles to fostering provider trust in and the adoption of AI-DDS.

Here are keys to AI implementation in your facility. Here are some tips on how to deploy and retire AI solutions.

Implementing AI can introduce disruptive change and disenfranchise staff and employees. When members are reluctant to adopt a new technology, they might hesitate to use it, push back against its deployment, or use it in a limited capacity — which undermines the benefits an organization gains from using it in the first place. Organizations often don’t see the problems coming and roll out a new tool too quickly, only for it to run into major barriers. To navigate this process, the authors propose a three-step approach: 1) assess the impact of an AI solution, 2) identify barriers to adoption, and 3) identify the appropriate pace.

The use of AI in primary health care may have a positive impact, but many factors need to be considered regarding its implementation. This study may help to inform the development and deployment of AI tools in primary health care.

A review of the use of AI in healthcare identified 9 domains and concluded that AI adoption in healthcare delivery has lagged behind adoption in other business sectors.

More than a year after the launch of ChatGPT, companies are still facing the same question they confronted when they first considered the technology: how do they actually go about putting it into business use? Many companies have simply discovered that generative AI tools like LLMs, while impressive, aren’t plug and play. Companies should consider a few suggestions when thinking about whether and how to onboard these tools: 1) choose performance over novelty, 2) combine GenAI with tools like vector databases, 3) never forget the human-in-the-loop, 4) trace your data, and 5) have realistic expectations.
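
To illustrate suggestion 2, here is a minimal sketch of the retrieval-augmented pattern behind “GenAI plus a vector database”: embed documents, retrieve the closest match to a question, and ground the model’s prompt in it. The documents and the toy hashing “embedding” are invented, and call_llm is a hypothetical placeholder for whatever LLM client you use.

```python
# Minimal retrieval-augmented generation (RAG) sketch with an in-memory
# "vector database". Everything here is a toy stand-in.
import numpy as np

documents = [
    "Prior authorization requests must include the CPT code.",
    "Discharge summaries are due within 48 hours.",
    "Claims denials can be appealed within 90 days.",
]

def embed(text: str) -> np.ndarray:
    # Toy embedding: hashed bag of words, unit-normalized. A real system
    # would use an embedding model and a persistent vector store.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

index = np.stack([embed(d) for d in documents])  # the "vector database"

def answer(question: str) -> str:
    scores = index @ embed(question)           # similarity search over the index
    context = documents[int(scores.argmax())]  # retrieve the best-matching document
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
    # A real system would send `prompt` to a model here, e.g. a hypothetical
    # call_llm(prompt); returning the grounded prompt keeps the sketch runnable.
    return prompt

print(answer("How long do we have to appeal a denied claim?"))
```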

Only 25% of healthcare organizations have deployed generative AI solutions, but that is expected to more than double next year as executives see opportunities to automate clinical documentation and improve patient communication.

According to a new KLAS Research report, 58% of healthcare executives say their organization is likely to implement or purchase a generative AI solution within the next year. Larger organizations, particularly hospitals with more than 500 beds, are more inclined to invest in a solution than smaller organizations.

Concerns linger over cost, reliability and security.

Artificial intelligence in medicine is advancing rapidly. However, whether it grows at scale and delivers the promised value will depend on how quickly the barriers fall. Here are the essential steps to execute your AI strategy.

Arlen Meyers, MD, MBA, is the President and CEO of the Society of Physician Entrepreneurs on Substack and Editor of Digital Health Entrepreneurship.

John Soloninka

CTO at SpinaFX Medical

4 years

Arlen Meyers, MD, MBA - great article. As it was in the 1980s AI wave, many of the (legitimate) fears of autonomous/inexplicable AI/ML applications are due to the exaggerated expectations of the term Artificial Intelligence. It is SUCH a misnomer. Far less fear is associated with a term such as "statistical model". In addition, with a less science fiction-y label, startups would not be overselling "AI" but would be touting better med devices or software performance based on more standard measures of value we all accept. ML, as well, is wrongly touted in the popular press to assume that ML algorithms independently learn and change in real time...with attendant risk. This may be done in unregulated social media and advertising markets, but not so in medicine (yet). In practice, ML may be used to develop a solution through "learning", but it is then "locked down" for release and is not a loose cannon. The ethical frameworks you mention are highly appropriate for any technology or information. Think about insurance companies using genetic profiling to deny coverage on "risk" of a pre-existing condition...rather than having a better-designed all-cause risk sharing model for reasonable premiums. That is a genuine misuse of genetic information, and would be well guided by the ethical frameworks you mention, but has nothing to do with AI. I will be writing more on this subject for startups shortly. Random fact...David Schatsky (Deloitte strategy lead author you cited) and I both worked for Symbolics, a Cambridge, Mass. AI company, in the 1980s. Haven't thought about him for 30 yrs....Very sharp guy!!

Paul Robberson

Senior Business Consultant at Cleverity LLC

6 years

The Hype Cycle for AI in Health Care won't be the gentle wave depicted by Gartner, but rather a tsunami. Head to much higher ground.

Pierre-Alexandre Fournier

CEO at Hexoskin - Wearable Health Sensors, Clinical AI & Digital Biomarkers - Mayo Clinic Accelerator Alumni

6 years

Thank you Arlen for this very relevant post. Another fundamental technical barrier to medical machine learning is collecting (and labelling) the right data for the problem we're trying to solve. Most organizations are not ready for this (but many are!)
