Our Executive Director, Todd Quartiere, attended the #HIMSS2025 conference last week, where he connected with other leaders in health tech and drove forward conversations on responsible AI for health and wellness, digital literacy, and governance. As the overlap between multipurpose applications and the healthcare ecosystem expands, we're seeing medical information pushed through new channels (correct or incorrect), behavioral health delivered in new ways (right or wrong), and human jobs disappearing to AI agents. Let's start a conversation where responsible AI use is the standard going forward, not the afterthought. #AIinHealthcare #ResponsibleAI #BHIL #DigitalHealth
After a day and a half at #HIMSS2025, one thing stands out to me: an AI literacy gap is painfully stalling opportunities for responsible AI to positively impact human health and wellbeing. Three key insights:

1️⃣ Health systems struggle with where to start. I spoke with two systems stuck in analysis paralysis, unsure which limited investments were the right ones to make. There are needs across the entire chain of care, from referral management to patient follow-up, and it's a challenge to know where the value can be found today. My advice: start with what's proven. Provider documentation and administrative automation aren't glamorous, but they deliver immediate wins and build confidence for bigger leaps.

2️⃣ Conversational AI for behavioral health is nuanced and complex. I had conversations that went deep on scenario planning for our Behavioral Health Intent Library (#BHIL), including one on patients with repeated verbal or written expressions of self-harm but no true intent. Another spotlighted veterans, whose unique needs and resources aren't easily generalized. These scenarios demand that we embrace the nuance and build decision trees backed by a library of resources to pull through, something we at the Foundation for AI & Health (FAIH) are actively seeking partners to do. On that note, we are seeking collaborators to navigate integrations across platforms like Google Vertex AI, #FHIR APIs for EHRs like MEDITECH and Epic, and entry points for platforms like Twilio's Studio Flow, just to name a few I spoke with today. If you're an industry partner with an aligned mission and an interest in creating an open-source library on how conversational AI should handle suicidal ideation or self-harm, let's connect.

3️⃣ The AI whitewashing problem is getting worse. Much like HLTH Inc.'s 2024 conference, I saw the same trend of AI labeling everywhere, and an AI Pavilion that resembles a drop of white paint in a bucket of white paint.
Flashy claims about using AI overshadow real-world impact, like better outcomes or improved efficiency. Real achievements are what should be highlighted, with AI simply being a tool that helps us get there. Are you seeing similar trends? What stood out to you? #BehavioralHealth #ResponsibleAI #HealthcareIT #ConversationalAI #BHIL #FAIH