End-of-life decisions are difficult and distressing. Could AI help?
MIT Technology Review
Ethicists say a “digital psychological twin” could help doctors and family members make decisions for people who can’t speak for themselves.
A few months ago, a woman in her mid-50s—let’s call her Sophie—experienced a hemorrhagic stroke. Her brain started to bleed. She underwent brain surgery, but her heart stopped beating.
Sophie’s ordeal left her with significant brain damage. She was unresponsive; she couldn’t squeeze her fingers or open her eyes when asked, and she didn’t flinch when her skin was pinched. She needed a tracheostomy tube in her neck to breathe and a feeding tube to deliver nutrition directly to her stomach, because she couldn’t swallow. Where should her medical care go from there?
This difficult question was left, as it usually is in these kinds of situations, to Sophie’s family members. But the family couldn’t agree. Sophie’s daughter was adamant that her mother would want to stop having medical treatments and be left to die in peace. Another family member vehemently disagreed and insisted that Sophie was “a fighter.” The situation was distressing for everyone involved, including Sophie’s doctors.
End-of-life decisions can be extremely upsetting. David Wendler, a bioethicist at the US National Institutes of Health, and his colleagues have been working on an idea for something that could make things easier: an artificial-intelligence-based tool that can help surrogates predict what patients themselves would want in any given situation.
The tool hasn’t been built yet. But Wendler plans to train it on a person’s own medical data, personal messages, and social media posts. He hopes it could not only be more accurate at working out what the patient would want, but also ease the distress these decisions cause family members.
Wendler, along with bioethicist Brian Earp at the University of Oxford and their colleagues, hopes to start building the tool as soon as they secure funding.
Image: Stephanie Arnett / MIT Technology Review | Getty Images
--
6 months ago: Curious, has anyone considered how “organoid” or “brainoid” chips could be implemented in this technology? I apologize if I’m going too deep here, but I do understand how fast technology is achieving great leaps.
AI Solutions Architect | Generative AI | NLP Certification, Master's Degree | Taught 3000+ ML Students Globally | Digital Marketing | RLHF Expert
7 months ago: Thanks for sharing, and I appreciate your contribution to the healthcare AI community. Domain-specific fields such as healthcare need accuracy-intensive AI models. I strongly recommend the Aimped AI healthcare models, which are small, lightweight, powerful, and accurate. They are available here (https://aimped.ai/models) and freely accessible to everyone: de-identification, medical coding extraction (ICD10, RxNORM, LOINC, etc.), relation extraction, adverse drug effects extraction, biomedical text translation, and much more. Here is the paper showing how our medical translation models outperform Google Translate, DeepL, or GPT-4: https://arxiv.org/pdf/2407.12126
Corporate/personal trainer/coach: Leadership with Emotional Intelligence and Effective Communication (Harvard Online trained)
7 months ago: A slippery slope, especially when financial considerations start playing a role. Let’s hope insurance companies won’t have a say in the criteria set the AI is trained on.
林崎経営管理事務所 - Representative
7 months ago: Just one simple question! Does AI die?
--
7 months ago: AI as an accredited weapon in health and social “care” = bad idea.