Foundation for AI & Health (FAIH)

Health and Human Services

Durham, North Carolina · 488 followers

Personalizing Health by Sparking Imagination and Exploration of Responsible AI for Healthcare

About us

Sparking Imagination for Responsible AI & Health, the Foundation for Artificial Intelligence & Health (FAIH) is a 501(c)(3) nonprofit on a mission to catalyze transformative health solutions by connecting diverse skillsets, competencies, and sectors. We host events, promote ideas, create content, and more to increase adoption and demystify AI for health.

Website
https://faih.org
Industry
Health and Human Services
Company size
1 employee
Headquarters
Durham, North Carolina
Type
Nonprofit
Founded
2023

Locations

  • Primary

    110 Corcoran St

    5th Floor

    Durham, North Carolina 27701, US


Foundation for AI & Health (FAIH) employees

Updates

  • Our Executive Director, Todd Quartiere, attended the #HIMSS2025 conference last week, where he was able to connect with other leaders in health tech and drive forward conversations regarding responsible AI for health and wellness, digital literacy, and governance. As the overlap between multipurpose applications and the healthcare ecosystem expands, we're seeing medical information pushed through new channels (correct or incorrect), behavioral health provided in new ways (right or wrong), and human jobs disappearing to AI agents. Let's start a conversation where responsible AI use is the standard going forward and not the afterthought. #AIinHealthcare #ResponsibleAI #BHIL #DigitalHealth

    View Todd Quartiere's profile

    Sparking Imagination for Responsible AI & Health

    After a day and a half at #HIMSS2025, it stands out to me that an AI literacy gap is painfully stalling opportunities for responsible AI to positively impact human health and wellbeing. Three key insights stood out: 1) Health systems struggle to find where to start. I spoke with two systems facing analysis paralysis, not knowing what the right limited investments were. There are needs across the chain of events for care, from referral management to patient follow-up, and it's a challenge to know where the value can be found today. My advice: start with what's proven. Provider documentation and administrative automation aren't glamorous, but they deliver immediate wins and build confidence for bigger leaps. 2) Conversational AI for behavioral health is nuanced and complex. I had conversations that went deep on scenario planning for our Behavioral Health Intent Library (#BHIL), including one on patients with repeated verbal or written expressions of self-harm but no true intentionality. Another spotlighted veterans, whose unique needs and resources aren't easily generalized. These scenarios demand nuance: we need to embrace them and create decision trees backed by a library of resources to pull through - something we at Foundation for AI & Health (FAIH) are actively seeking partners to do. On that note, we are seeking collaborators to navigate integrations across platforms like Google Vertex AI, #FHIR APIs for EHRs like MEDITECH and Epic, and entry points for platforms like Twilio's Studio Flow - just to name a few I spoke with today. If you're an industry partner with an aligned mission and an interest in creating an open-source library on how conversational AI should handle suicidal ideation or self-harm, let's connect (a rough sketch of how one such decision-tree entry might be encoded appears after these updates). 3) The AI whitewashing problem is getting worse. Much like at HLTH Inc.'s 2024 conference, I saw the same trend of AI labeling everywhere, and an AI Pavilion that resembles a drop of white paint in a bucket of white paint. Flashy claims about using AI overshadow real-world impact, like better outcomes or improved efficiencies. Real achievements are what should be highlighted, with AI simply being a tool to help us get there. Are you seeing similar trends? What stood out to you? #BehavioralHealth #ResponsibleAI #HealthcareIT #ConversationalAI #BHIL #FAIH

  • The Sustainable Healthcare with Digital Health Data Competence (#SUSA) Project, led by Minna Isomursu and backed by 12 European universities, is paving the way for a digitally skilled healthcare workforce across the European Union. With AI-driven tools transforming behavioral health, it's crucial to ensure standardized, evidence-based approaches to mental health interventions. This is where the Behavioral Health Intention Library (#BHIL) can make a powerful impact! By integrating BHIL's operational workflows and AI-driven decision-making frameworks, the SUSA project could: • Provide structured, ethical AI applications in behavioral health care • Ensure consistent, scalable training resources for digital health professionals • Support responsible AI adoption to enhance mental health interventions across Europe. The #SUSAProject & #BHIL can empower healthcare professionals with the competencies needed to integrate AI safely and effectively, ensuring better mental health outcomes for all. #DigitalHealth #AI #HealthcareInnovation #MentalHealth #DigitalWorkforce #FutureofHealthcare #MentalHealthMatters #AIForGood

  • Advancing Adolescent Mental Health with AI & Collaboration: Researchers at Duke University Health System have developed an AI model that predicts adolescent mental health risks by analyzing key factors like sleep disturbances and family conflict. This groundbreaking innovation enables early interventions, offering hope for proactive mental health care. The Behavioral Health Intention Library (#BHIL) could take this to the next level! By integrating BHIL's standardized, evidence-based workflows, this AI model could ensure that predictive insights seamlessly translate into safe, effective, and scalable interventions across diverse care settings. Why a partnership matters: • Enhances AI-driven early detection with clinically validated response frameworks • Bridges the gap between predictive analytics and real-world implementation • Expands access to ethical, responsible AI solutions for adolescent mental health. Together, we can create a future where AI empowers mental health professionals and ensures young people receive the care they need—when they need it most. #MentalHealth #ResponsibleAI #MentalHealthMatters #YouthWellness #HealthcareInnovation #AIforGood

  • As AI becomes a bigger part of mental health care, ensuring it aligns with ethical and clinical best practices is more essential than ever, including in professional settings. CCLA Investment Management is leading the way in holding companies accountable for workplace mental health through its Mental Health Benchmark—pushing organizations to prioritize employee well-being and responsible mental health strategies. But as AI-driven mental health tools become more common, we must also ensure these technologies are safe, ethical, and effective. That's where the Behavioral Health Intention Library (#BHIL) comes in. By developing consistent, research-backed protocols, BHIL can help organizations meet CCLA Investment Management's mental health standards by: • Creating AI-driven mental health workflows that align with clinical best practices • Providing ethical guidelines to ensure AI tools support users safely and effectively • Helping companies develop responsible mental health AI strategies that prioritize well-being. We want to set a new standard for responsible AI in mental health, ensuring businesses don't just talk about mental health—but take action with safe, standardized AI solutions. #ResponsibleAI #BehavioralHealth #MentalHealthMatters #AIforGood #WorkplaceWellness #MentalHealthBenchmark

  • Suicide is a leading public health crisis, and the numbers speak for themselves. According to the National Institute of Mental Health (NIMH), nearly 50,000 people die by suicide in the U.S. each year, and millions more attempt suicide and/or struggle with suicidal thoughts. As more individuals turn to AI-powered mental health tools for support, ensuring these systems respond safely, ethically, and effectively is critical. Right now, there's no universal standard guiding AI-driven mental health interventions—putting lives at risk. That's where the Behavioral Health Intention Library (#BHIL) comes in. By developing consistent, research-backed protocols, BHIL can help organizations improve AI-driven suicide risk detection and response. Together, we can bridge the gap between AI innovation and responsible mental health care. We invite industry leaders, researchers, and policymakers to join us in shaping BHIL and making suicide prevention a priority in the digital age. #SuicidePrevention #MentalHealthMatters #ResponsibleAI #BehavioralHealth #BHIL #AIforGood #PublicHealth

  • As AI plays a larger role in mental health care, ensuring safety and consistency in high-risk situations - like suicidal ideation - is more important than ever. Talkspace has made therapy more accessible through digital platforms, using AI to support clients and match them with licensed therapists. But as AI-assisted mental health tools continue to evolve, clear, evidence-based guidelines are crucial to ensuring safe and effective interventions. That's where the Behavioral Health Intention Library (#BHIL) comes in. By developing open-source, evidence-based workflows and best practices, BHIL can help and support companies like Talkspace. BHIL has the potential to drive the next wave of safe, AI-assisted mental health solutions, improving outcomes for those who need support the most. We are looking to collaborate with organizations like Talkspace to develop BHIL and shape the future of safe, AI-supported mental health services. Let's work to set a new standard for responsible AI in mental health. #ResponsibleAI #BehavioralHealth #DigitalHealth #AIforGood #BHIL #MentalHealthTech

  • AI is becoming a critical tool in mental health support, but without standardized guidelines, responses to high-risk situations—like self-harm and suicidal ideation—can be inconsistent and unsafe. Organizations like The Trevor Project are at the forefront of crisis intervention for LGBTQ+ youth, providing lifesaving support when it's needed most. As AI plays a growing role in digital mental health, ensuring responsible, clinically backed workflows is essential. That's where the Behavioral Health Intention Library (#BHIL) comes in. By developing open-source, evidence-based workflows, BHIL can help organizations like The Trevor Project: • Enhance AI-driven crisis response with a library of resources • Ensure AI-powered tools align with clinical and ethical guidelines. We're looking to collaborate with crisis response organizations—like The Trevor Project—who share our commitment to responsible AI in behavioral health. Let's work together to shape the future of AI-driven mental health care. #MentalHealthTech #ResponsibleAI #BehavioralHealth #AIforGood #BHIL #DigitalHealth

  • AI-enabled applications are being rapidly commercialized. Applications like Wysa use conversational AI to provide mental health counsel, offering users immediate, accessible care. Apps like Wysa - not Wysa specifically - underscore the urgent need for standardized, evidence-based workflows to ensure AI-driven interventions are safe, effective, and aligned with clinical best practices. Our Behavioral Health Intention Library (#BHIL) is tackling this challenge by developing open-source workflows and resources to guide responsible AI adoption. We need industry participants - like Wysa - who share this vision and want to contribute to developing V1.0 of our library. Be part of the movement transforming behavioral health. #ResponsibleAI #BehavioralHealth #MentalHealthTech #DigitalHealth

  • The recent FDA approval of Rejoyn from Otsuka Pharmaceutical Companies (U.S.), a prescription-only digital therapeutic smartphone app for treating major depressive disorder, marks a significant advancement in digital mental health solutions. As more individuals turn to conversational AI for support, it's crucial to share learnings and best practices that lead to safer workflows, especially for high-risk interactions, like those involving individuals contemplating suicide or self-harm. Our Behavioral Health Intention Library (#BHIL) aims to provide this by offering open-source workflows and resources. We invite industry participants, like Otsuka, who share our commitment to responsible AI in behavioral health, to collaborate on developing V1.0 of our library. Together, we can shape the future of mental health support with effective, standardized practices.

  • Children are turning to AI for mental health support - and developers have little to no guidance outside their organization on how to design workflows for dangerous intentions like suicidal ideation or self-harm. Companies like Troomi have apps designed as mental health coaches to offer safer, more supportive environments. This is not a review of or comment on Troomi, but the designs highlight the need for safe practices grounded in uniform professional guidance. Our Behavioral Health Intention Library (#BHIL) aims to provide that with open-source workflows and resources. We need industry participants - like Troomi - who agree and are interested in participating in developing V1.0 of our library. Join us in shaping the future of behavioral health with responsible AI. #BehavioralHealth #AIinHealthcare #EthicalAI #BHIL #MentalHealthInnovation

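A rough sketch of the decision-tree idea referenced in the #HIMSS2025 post above: the Python below is purely illustrative and is not FAIH or BHIL code. The node structure, the questions, the resource keys (such as the 988 crisis-line escalation), and the routing logic are all assumptions about how one entry in a shared, clinician-reviewed intent library might be encoded.

# Illustrative sketch only - hypothetical, not FAIH/BHIL code.
# Shows one way a decision-tree entry for self-harm expressions could be
# encoded so repeat expressions without stated intent route to follow-up
# resources while anything with a plan escalates immediately.
from dataclasses import dataclass
from typing import Union

# Placeholder keys into a curated, clinician-reviewed resource library.
ACUTE_CRISIS = "escalate_to_crisis_line_988"
REPEAT_LOW_INTENT = "offer_coping_resources_and_flag_for_follow_up"

@dataclass
class IntentNode:
    """One decision point a conversational AI evaluates for a detected intent."""
    intent: str                       # e.g. "self_harm_expression"
    question: str                     # condition evaluated at this node
    if_yes: Union["IntentNode", str]  # next node, or a resource key
    if_no: Union["IntentNode", str]

self_harm_tree = IntentNode(
    intent="self_harm_expression",
    question="Does the user express a plan, means, or timeframe?",
    if_yes=ACUTE_CRISIS,
    if_no=IntentNode(
        intent="repeat_expression_no_intent",
        question="Has the user repeated this expression before without intent?",
        if_yes=REPEAT_LOW_INTENT,
        if_no=ACUTE_CRISIS,  # when uncertain, default to the most protective path
    ),
)

def route(node: IntentNode, answers: dict) -> str:
    """Walk the tree using yes/no answers keyed by each node's intent name."""
    branch = node.if_yes if answers.get(node.intent, False) else node.if_no
    return branch if isinstance(branch, str) else route(branch, answers)

if __name__ == "__main__":
    # A repeat expression with no stated plan routes to follow-up resources.
    print(route(self_harm_tree, {"self_harm_expression": False,
                                 "repeat_expression_no_intent": True}))

In a real library, every question, branch, and resource key would need clinical review, and the default when information is missing or ambiguous should always be the most protective path, as the sketch assumes.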
