We are very excited to be joined by some of the leading organizations implementing and evaluating AI at our AI Pavilion Booth at HLTH. Please come by and see how Mercy, Mayo Clinic, EBSCO Health Care, University of Maryland Medical System, and Nao Innovation Lab have started applying our Assurance Standards Guide and Reporting Checklists to help with responsible AI creation and adoption. Find the documents here: https://lnkd.in/gu_R6Wkv We will also be showcasing for the first time the concept of quality assurance labs with Dandelion Health, Mayo Clinic Platform, Qualified Health, Gesund.ai, and BeeKeeperAI. Our team will also be there to field any membership questions. Looking forward to seeing you at #HLTHUSA! Byron Yount, PhD, Elliott Green, Hailey H., Justin Norden, MD, MBA, MPhil, Brenna Loufek, Lauren Rost, PhD, Katherine Eisenberg, MD, PhD, FAAFP, Marco Smit, Enes Hoşgör, Ph.D., Arihant Jain, Mary Beth Chalk, Michael Blum, MD, Warren D'Souza #responsibleAI #HealthCommunity #CHAIGlobalSummit24
Posts from Coalition for Health AI (CHAI)
Most relevant posts
-
“Existing evaluations of LLMs mostly focus on accuracy of question answering for medical examinations, without consideration of real patient care data.” We are proud to share new research in JAMA by multiple CHAI members and our very own Head of Policy Lucy Orr-Ewing. Their systematic review sheds light on the current state of large language model (LLM) evaluations in healthcare. This research emphasizes the need for certification frameworks that mitigate potential harm from algorithms to marginalized communities. At CHAI, we're dedicated to addressing these gaps and developing robust guidelines for responsible AI implementation in health. Thank you to Suhana Bedi, Yutong Liu, Dev Dash, Sanmi Koyejo, Alison Callahan, Jason Fries, Michael Wornow, Akshay Swaminathan, Lisa Lehmann, Mehr Kashyap, Akash Chaurasia, Nirav R. Shah, Karandeep Singh, Troy Tazbaz, Arnold Milstein, Michael Pfeffer, H. Christy Hong, MD MBA, and Nigam Shah for your contributions.
Our paper got published in JAMA! Earlier this year, Suhana Bedi, Yutong Liu, and I led a paper at Stanford University School of Medicine that highlights critical gaps in evaluating large language models (LLMs) in healthcare. We categorized all 519 relevant studies from 1 Jan 2022 to 19 Feb 2024 by (1) evaluation data type, (2) health care task, (3) natural language processing (NLP) and natural language understanding (NLU) task, (4) dimension of evaluation, and (5) medical specialty. In doing so, we revealed:
- Only 5% used real patient care data in their testing and evaluation.
- Key tasks like prescription writing and clinical summarization are underexplored.
- The focus on accuracy dominates, while vital aspects like fairness, bias, and toxicity remain largely neglected.
- Only 1 study assessed the financial impact of LLMs in healthcare.
Why does this matter?
- Real patient care data encompasses the complexities of clinical practice, so a thorough evaluation of LLM performance should mirror clinical performance as closely as possible to truly determine effectiveness.
- Many high-value administrative tasks in health care are labor intensive, require manual input, and contribute to physician burnout, yet remain chronically understudied.
- Only 15.8% of studies conducted any evaluation of how factors such as race and ethnicity, gender, or age affect bias in the model's output. Future research should place greater emphasis on fairness, bias, and toxicity evaluations if we want to stop LLMs from perpetuating bias.
- Future evaluations must estimate total implementation costs, including model operation, monitoring, maintenance, and infrastructure adjustments, before reallocating resources from other health care initiatives.
The paper calls for standardized evaluation metrics, broader coverage of healthcare applications, and real patient care data to ensure safe and equitable AI integration.
This is essential for the responsible adoption of LLMs in healthcare to truly improve patient care. And I am delighted that I get to work on implementing the findings of this research at Coalition for Health AI (CHAI). This paper could not have happened without Nigam Shah's constant support, leadership, and guidance, and that of our co-authors Dev Dash, Sanmi Koyejo, Alison Callahan, Jason Fries, Michael Wornow, Akshay Swaminathan, Lisa Lehmann, H. Christy Hong, MD MBA, Mehr Kashyap, Akash Chaurasia, Nirav R. Shah, Karandeep Singh, Troy Tazbaz, Arnold Milstein, and Michael Pfeffer. Thank you also to Nicholas Chedid, MD, MBA, Brian Anderson, MD, and Justin Norden, MD, MBA, MPhil for your guidance and mentorship. And of course, a huge shout out to my co-conspirators Yutong Liu and Suhana Bedi - you are the best team. This is the first paper I've ever written, and I'm eternally grateful to you all for showing me how it's done. Full article here: https://lnkd.in/eimh9BNV
-
Read the latest in our member Q&A series with insights from Sam Warmuth, Chief Product Officer at Healthvana. Healthvana builds solutions to help patients with their health journey, with customers ranging from the County of Los Angeles to the largest HIV and sexual health organization in the country. Sam shared more about their work and talked about his experience in the Gen AI workgroup. Read the full Q&A here: https://lnkd.in/ebwaXwX2 #CHAI #AI #HealthcareOnLinkedIn #CHAI24
-
Read our recent Q&A with insights from Byron Yount, PhD, Chief Data and AI Officer at Mercy, one of our valued CHAI members. Dr. Yount and his team work at the cutting edge of AI in healthcare delivery. Mercy currently has 18 homegrown AI products, plus others, fully scaled across the communities it serves to empower providers and patients. Read the full Q&A here: https://lnkd.in/dTP3DtUh #CHAI #AI #HealthcareOnLinkedIn #CHAI24
-
Read our recent Q&A with insights from Raj Ratwani, VP of Scientific Affairs at MedStar Health, one of our dedicated CHAI members. Dr. Ratwani's work is at the forefront of integrating AI into health, focusing on patient safety and health equity and underscoring the critical need to balance technological advancement with ethical considerations in healthcare innovation. Read the full Q&A here: https://lnkd.in/ev7_z9_d #CHAI #AI #HealthcareOnLinkedIn
Q&A Spotlight: Raj Ratwani, Vice President of Scientific Affairs at MedStar Health and Co-lead of the Predictive AI Working Group - CHAI - Coalition for Health AI
chai.org
-
We are thrilled to announce that Booz Allen Hamilton has joined the Coalition for Health AI! We are excited to work together to promote the ethical application of AI in health. https://lnkd.in/eVafNZAE #AI #CHAI #HealthcareOnLinkedIn
Booz Allen Joins the Coalition for Health AI
boozallen.com
-
Our CEO Brian Anderson, MD, recently spoke at the Alliance for Health Policy’s Congressional briefing on the responsible and safe development and use of AI in healthcare. This timely event brought together industry and policy leaders to discuss approaches to shape the future of #AI in this industry. Thank you to Fierce Healthcare’s Emma Beavins for moderating a productive conversation. Brian shared early insights into CHAI’s Assurance Standards Guide and certification frameworks that will serve as a playbook for leaders integrating emerging technologies across their organizations, as well as the philosophy, values, and strengths of this approach. #HealthcareOnLinkedIn #ArtificialIntelligence
-
Interesting article from HealthLeaders’ Eric Wicklund about the journey towards responsible #HealthAI. Thanks to Ravi B. Parikh, assistant professor of medicine and health policy at the University of Pennsylvania - a CHAI member organization - for sharing his insight into the need for private sector organizations like CHAI, TRAIN, and DiME to help create standards for AI certification and governance that can evolve with the technology over time. Read the full article here: https://lnkd.in/eHgqT_k5 #AI #HealthcareOnLinkedIn
As AI Use Cases Grow in Healthcare, Executives Scramble to Grab the Reins
healthleadersmedia.com
-
"Every sector of consequence in the U.S. economy has the ability to have independent entities that evaluate things for safety and effectiveness. We don't have that in health AI. And that's a huge problem." Thanks to Newsweek for shining a spotlight on this important moment in our industry. Our CEO, Brian Anderson, MD, spoke with reporter Alexis Kayser about CHAI and the dangers of rapid, unchecked development and deployment of AI in the absence of consensus-driven assurance and certification. As AI continues to rapidly transform industries, healthcare sits at a critical juncture. In collaboration with our members, we are working to create a consensus agreement on what good, responsible AI looks like in health at a technical level, to ensure the ethical and efficacious use of these powerful emerging technologies in healthcare. Read the full story here: https://lnkd.in/dGd9resG #AI #CHAI #HealthcareOnLinkedin
AI is the only unchecked US "sector of consequence," says health care exec
newsweek.com