End-of-life decisions are difficult and distressing. Could AI help?
Stephanie Arnett / MIT Technology Review | Getty Images


End-of-life decisions can be extremely upsetting for surrogates—the people who have to make challenging medical choices on behalf of their loved ones. Friends or family members may disagree over what’s best, which can lead to distressing situations. What if we could make these decisions easier? In this edition of What’s Next in Tech, learn about a potential AI-based tool that could help surrogates predict what the patients themselves would want.

Better understand the biggest stories in health, medical science, and biotech with our free weekly newsletter, The Checkup. Sign up today to stay informed.

Ethicists say a “digital psychological twin” could help doctors and family members make decisions for people who can’t speak themselves.

A few months ago, a woman in her mid-50s—let’s call her Sophie—experienced a hemorrhagic stroke. Her brain started to bleed. She underwent brain surgery, but her heart stopped beating.

Sophie’s ordeal left her with significant brain damage. She was unresponsive; she couldn’t squeeze her fingers or open her eyes when asked, and she didn’t flinch when her skin was pinched. She needed a tracheostomy tube in her neck to breathe and a feeding tube to deliver nutrition directly to her stomach, because she couldn’t swallow. Where should her medical care go from there?

This difficult question was left, as it usually is in these kinds of situations, to Sophie’s family members. But the family couldn’t agree. Sophie’s daughter was adamant that her mother would want to stop having medical treatments and be left to die in peace. Another family member vehemently disagreed and insisted that Sophie was “a fighter.” The situation was distressing for everyone involved, including Sophie’s doctors.

End-of-life decisions can be extremely upsetting. David Wendler, a bioethicist at the US National Institutes of Health, and his colleagues have been working on an idea for something that could make things easier: an artificial-intelligence-based tool that can help surrogates predict what patients themselves would want in any given situation.

The tool hasn’t been built yet. But Wendler plans to train it on a person’s own medical data, personal messages, and social media posts. He hopes it could not only be more accurate at working out what the patient would want, but also alleviate the stress and emotional burden of difficult decision-making for family members.

Wendler, along with bioethicist Brian Earp at the University of Oxford and their colleagues, hopes to start building the tool as soon as they secure funding for it, potentially in the coming months. But rolling it out won’t be simple. Critics wonder how such a tool can ethically be trained on a person’s data, and whether life-or-death decisions should ever be entrusted to AI. Read the story.

Get ahead with these related stories:

  1. The messy quest to replace drugs with electricity: “Electroceuticals” promised the post-pharma future for medicine. But the exclusive focus on the nervous system is seeming less and less warranted.
  2. Controversial CRISPR scientist promises “no more gene-edited babies” until society comes around: In a public interview, Chinese biophysicist He Jiankui said he is receiving offers of financial support from figures in the US.
  3. This grim but revolutionary DNA technology is changing how we respond to mass disasters: After hundreds went missing in Maui’s deadly fires, rapid DNA analysis helped identify victims within just a few hours and bring families some closure more quickly than ever before. But it also previews a dark future marked by increasingly frequent catastrophic events.


Curious—has anyone considered how “organoid” or “brainoid” chips could be implemented in this technology? I apologize if I’m going too deep here, but I do understand how fast technology is achieving great leaps.

Murat G.

AI Solutions Architect | Generative AI | NLP Certification, Master's Degree | Taught 3000+ ML Students Globally | Digital Marketing | RLHF Expert

7 months ago

Thanks for sharing, and I appreciate your contribution to the healthcare AI community. Domain-specific fields such as healthcare need accuracy-intensive AI models. I strongly recommend the Aimped AI healthcare models, which are small, lightweight, powerful, and accurate. They are available here (https://aimped.ai/models) and freely accessible to everyone: De-identification, Medical Coding extraction (ICD10, RxNORM, LOINC, etc.), Relation Extraction, Adverse Drug Effects extraction, Bio-Medical Text Translation, and much more. Here is the paper showing how our medical translation models outperform Google Translate, DeepL, and GPT-4: https://arxiv.org/pdf/2407.12126

Steven Claus

Corporate/personal trainer and coach: Leadership with Emotional Intelligence, and Effective Communication (Harvard-online trained)

7 months ago

A slippery slope, especially when financial considerations start playing a role. Let’s hope insurance companies won’t have a say in the criteria set the AI is trained on.

林崎代表

林崎経営管理事務所 - Representative

7 months ago

Just one simple question! Does AI die?


AI as an accredited weapon in health and social “care” = bad idea.

