Digital Health Implementation: Generative AI in the Mix

Ever since that fateful day when ChatGPT exploded onto the global scene (30 November 2022, to be exact), the entire conversation on digital health changed indelibly. All of a sudden, there was a pressing need to a) understand generative AI and the implications of this new technology, and b) work out if and how it could be safely implemented into healthcare systems in the foreseeable future.(1) There is no better way to understand the year-to-year state of play in this area than by attending Melbourne's truly excellent Digital Health Festival, where healthcare and technology leaders from across Australia and beyond share learnings from implementation projects as well as current and future technologies.

It goes without saying that there is clear acceptance across the board that new technological solutions are required to improve present systems and optimise patient outcomes. However, implementing any tech solution in a healthcare setting requires very high levels of stakeholder engagement: wholesale organisational buy-in, co-design with all staff (especially frontline staff), adequate resourcing, attention to all aspects of the quintuple aims of healthcare (2) (Figure 1), and a commitment to adding to the canon of shareable knowledge, ideally through a peer-reviewed process.(3)

Figure 1. Quintuple Aims of Healthcare

So, what were some key insights from this year’s conference?

Operational optimisation continues apace

We heard in detail about successful operational optimisation projects, including the use of enterprise software to reduce patient call waiting times (St Vincent's), robotic process automation for complex mass healthcare data migrations (Western Health), and the implementation of an EMR at the Royal Flying Doctor Service, which involved an enormous amount of feedback and co-design with all staff along the way.

Take Home Message: Operational optimisation using digital health technologies is already par for the course in most health services. Some services are a step further ahead, building data-analytics dashboards to improve accountability and transparency across their operations.


Virtual ED is already out of the box

AI-assisted virtual care pathways that triage potential ED patients are already in full swing, handling thousands of patients per day via online symptom checkers, chatbots, and automated and person-assisted telephone lines (Healthdirect's Symptom Checker, Nurse-on-Call helpline and chatbot, and Northern Health's Victorian Virtual ED).

In Australia, there have been 15 million uses of Healthdirect's AI-assisted digital triage offerings and 1.3 million uses of the Nurse-on-Call hotline. Stunningly, 75% of those triaged were redirected to self-care or primary care services with minimal risk to patient safety. Separately, one recently published Australian study found that virtual health screening reduced physical presentations to ED by 70%, with acceptable accuracy and safety and high acceptance by HCPs.(4)
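
To make the idea of a digital front door concrete, here is a minimal, purely illustrative sketch of a rule-based triage step that routes a case to emergency, primary care or self-care. The symptom categories, thresholds and dispositions are hypothetical and are not Healthdirect's or Northern Health's actual logic.

```python
from dataclasses import dataclass

@dataclass
class TriageInput:
    age: int
    symptom: str          # e.g. "chest_pain", "sore_throat" (hypothetical categories)
    red_flags: list[str]  # e.g. ["shortness_of_breath"]

def triage(case: TriageInput) -> str:
    """Return a disposition: 'emergency', 'primary_care' or 'self_care'."""
    if case.red_flags:                                     # any red flag escalates straight to ED
        return "emergency"
    if case.symptom in {"chest_pain", "severe_bleeding"}:  # hypothetical high-acuity symptoms
        return "emergency"
    if case.age >= 65:                                     # older patients routed to GP review
        return "primary_care"
    return "self_care"                                     # everything else: supported self-care

print(triage(TriageInput(age=34, symptom="sore_throat", red_flags=[])))  # self_care
print(triage(TriageInput(age=70, symptom="cough", red_flags=[])))        # primary_care
print(triage(TriageInput(age=52, symptom="chest_pain", red_flags=[])))   # emergency
```

In practice, these services layer clinical protocols and escalation to human clinicians on top of any automated step, as the person-assisted telephone lines mentioned above illustrate.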

Take Home Message: 'Digital front doors' are already making a big impact on EDs by reducing unnecessary visits, and with adoption rates this high they are likely to keep improving.


There is still much work to be done on AI implementation (particularly regarding generative AI solutions)

AI technologies are expanding rapidly, and with them the promise of pivotal improvements within healthcare if they are used wisely. The Australian Alliance for AI in Healthcare, in liaison with its partners, produced an updated National Policy Roadmap last year which helps clarify what safe adoption of AI in Australian healthcare looks like.(5) The biggest issues we have yet to address are:

  • Frameworks and regulations that are still works in progress. The EU AI Act and the recent White House executive order on safe and trustworthy AI may help continued efforts to shape a globally scaled approach to managing AI risk.
  • Algorithmic and data sovereignty, for instance by supporting local vendors with white-box solutions and localised data storage.
  • The building of an evidence base to support evidence-based adoption. There are only 84 published RCTs of clinical AI globally, and none of these were conducted in Australia.
  • Formalised independent groups that continuously evaluate and measure the safety of LLM technologies and their use. For instance, a recent study found that using generative AI to create draft communications to patients significantly changed physicians' responses compared with responses they wrote manually, mostly by reducing the urgency of care recommendations. There would be clear risks to patient safety if physicians began to adopt the LLM's assessments rather than using the drafts simply to help communicate their own assessments.(6) A minimal illustrative sketch of such a review-before-send workflow is shown after this list.
  • The training of the healthcare workforce, en masse, on AI.
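
For illustration, here is a minimal sketch of the 'draft, then clinician review' pattern referred to above, in which an LLM-generated draft is never sent without an explicit clinician edit-and-approve step. The function names are hypothetical placeholders, and generate_llm_draft stands in for whatever approved model a service might actually use.

```python
def generate_llm_draft(patient_message: str) -> str:
    # Hypothetical placeholder: in practice this would call a service-approved LLM.
    return f"Draft reply regarding: {patient_message!r} (pending clinician review)"

def clinician_review(draft: str, clinician_edits: str | None, approved: bool) -> str | None:
    """Return the message to send, or None if the clinician rejects the draft outright."""
    if not approved:
        return None                  # draft discarded; the clinician writes their own reply
    return clinician_edits or draft  # clinician edits always take precedence over the raw draft

draft = generate_llm_draft("My wound looks redder today. Should I come in?")
final = clinician_review(draft, clinician_edits="Please come in today for a wound review.", approved=True)
print(final)
```

The study cited above suggests the risk lies in the draft quietly shaping the clinician's own assessment, so any evaluation would need to compare the approved messages against what clinicians would have written unaided.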

Take Home Message: Despite the many ways AI already assists HCPs (especially in predictive image analysis), the use of complex language-prediction models (i.e. LLMs) to support HCPs in patient communication, diagnosis and management workflows remains under-investigated, particularly with regard to patient safety. Early evidence suggests this could be an area of real risk. Clear universal guardrails and fit-for-purpose regulation would improve downstream acceptance and adoption of these types of applications.


Creating a Learning Health System is also some way off (but we’re working towards it)

The North Star is ultimately working towards what Horwitz et al. (2019) characterise as a 'learning health system'.(3) A learning health system is marked by "continual improvement and innovation", with "new knowledge captured as an integral by-product of the delivery experience." This is a health service that formally evaluates system changes through rapid-cycle, randomised quality improvement projects. However, there are no easy ways to publish or share these findings, as they are usually classed as quality improvement rather than 'research' projects. A secure environment or repository that enabled knowledge sharing between services could be of great benefit.
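
As a rough illustration of the rapid-cycle, randomised testing that Horwitz et al describe, the sketch below randomises encounters between two hypothetical operational variants (say, two appointment-reminder scripts) and compares a simple outcome. The variant names, allocation rule and simulated data are made up for the example.

```python
import random

def assign_variant(encounter_id: int) -> str:
    # Deterministic 1:1 allocation seeded on the encounter ID,
    # so the same encounter always lands in the same arm.
    return "reminder_A" if random.Random(encounter_id).random() < 0.5 else "reminder_B"

# Simulated attendance outcomes, for illustration only.
outcomes = {"reminder_A": [], "reminder_B": []}
rng = random.Random(42)
for encounter_id in range(1000):
    arm = assign_variant(encounter_id)
    attended = rng.random() < (0.80 if arm == "reminder_A" else 0.85)
    outcomes[arm].append(attended)

for arm, results in outcomes.items():
    print(f"{arm}: attendance {sum(results) / len(results):.1%} (n={len(results)})")
```

The point of the pattern is that the comparison is built into routine operations, so the "new knowledge" really is a by-product of delivering care rather than a separate research exercise.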

Take Home Message: It's easy to have ideas, and in some cases to implement them readily; it is much harder to evaluate them rigorously in the real world and to share the results.(3) With sophisticated knowledge-sharing and communication technologies readily available, aggregating and sharing key operational learnings could be a powerful step towards service improvement at scale.

Given that many services are simply unable to publish key learnings in peer-reviewed journals, the DHF serves a very important purpose in the current climate of huge technological upheaval within the healthcare sector. It builds bridges and layers of understanding in a very complex, fast-moving space, so that we can all learn a thing or two about successful strategies for moving health services forward, as well as gain insights into what the future of healthcare looks like. Congratulations and thank you to the founders and organisers, who have managed to evolve and improve this event every year!



References

1. Harrer S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. EBioMedicine. 2023 Apr;90:104512. doi: 10.1016/j.ebiom.2023.104512. Epub 2023 Mar 15. PMID: 36924620; PMCID: PMC10025985.

2. Itchhaporia D. The Evolution of the Quintuple Aim: Health Equity, Health Outcomes, and the Economy. J Am Coll Cardiol. 2021 Nov 30;78(22):2262-2264. doi: 10.1016/j.jacc.2021.10.018. PMID: 34823665; PMCID: PMC8608191.

3. Horwitz LI, Kuznetsova M, Jones SA. Creating a Learning Health System through Rapid-Cycle, Randomized Testing. N Engl J Med. 2019 Sep 19;381(12):1175-1179. doi: 10.1056/NEJMsb1900856. PMID: 31532967.

4. Kelly JT, Mitchell N, Campbell KL, Furlong K, Langley M, Clark S, Rushbrook E, Hansen K. Implementing a virtual emergency department to avoid unnecessary emergency department presentations. Emerg Med Australas. 2024 Feb;36(1):125-132. doi: 10.1111/1742-6723.14328. Epub 2023 Nov 8. PMID: 37941299.

5. Australian Alliance for AI in Healthcare. National Policy Roadmap. 2023. Available at: https://aihealthalliance.org/wp-content/uploads/2023/11/AAAiH_NationalPolicyRoadmap_FINAL.pdf

6. Chen S, Guevara M, Moningi S, Hoebers F, Elhalawani H, Kann BH, Chipidza FE, Leeman J, Aerts HJWL, Miller T, Savova GK, Gallifant J, Celi LA, Mak RH, Lustberg M, Afshar M, Bitterman DS. The effect of using a large language model to respond to patient messages. Lancet Digit Health. 2024 Apr 24:S2589-7500(24)00060-8. doi: 10.1016/S2589-7500(24)00060-8. Epub ahead of print.
