A Roadmap for the Incoming Administration's Healthcare AI Policy

A recent report by Paragon Health Institute, featured in Fierce Healthcare, outlines a roadmap for the incoming administration's approach to healthcare AI policy. Kev Coleman's recommendations, particularly around allowing AI solutions to operate without a human in the loop to save costs and promoting competition through AI-enabled devices, are bold and thought-provoking. These ideas raise several important themes for the future of regulation:

1. Leveraging Existing Regulations: Coleman advocates leveraging existing rules for AI safety, fairness, and privacy, avoiding redundant or duplicative laws. This pragmatic approach mirrors insights from a recent paper by Lee Fleisher and Nicoleta Economou in JAMA Health Forum, which explored the possibility of the Centers for Medicare & Medicaid Services regulating AI through established patient safety procedures.

2. Precise AI Definitions: AI is not a monolithic technology. Coleman highlights the importance of regulating specific AI types, such as large language models and computer vision, differently, ensuring precise definitions to avoid unintended consequences or unenforceable policies.

3. AI Autonomy in Low-Risk Use Cases: Granting AI more autonomy in low-risk areas holds potential for productivity gains and cost savings. However, maintaining trust and safety requires continuous, localized monitoring to ensure these systems remain accurate and non-inferior to human alternatives.

As healthcare AI evolves and the new administration's approach to regulation unfolds, these considerations will be key in balancing innovation and oversight.

Original article: https://lnkd.in/e8ST-TZH

#HealthAI #Regulation #Compliance #Governance
CHARGE - Center for Health AI Regulation, Governance & Ethics
Health and Public Services
Boston, MA · 649 followers
Exploring health AI regulation, governance, ethics, compliance & safety standards
About us
CHARGE is a community dedicated to fostering meaningful discussions on health AI regulation, governance, ethics, compliance & safety. We bring together healthcare stakeholders — including policymakers, compliance and ethics leaders, clinicians, data professionals, and AI vendors — to collaboratively explore the evolving challenges and opportunities in health AI. Through shared insights and expertise, CHARGE aims to shape a responsible, transparent, and ethical future for AI in healthcare.
- Website
- chargeai.org
- Industry
- Health and Public Services
- Company size
- 2-10 employees
- Headquarters
- Boston, MA
- Type
- Educational institution
- Founded
- 2024
Locations
-
Primary
Boston, MA, US
Employees at CHARGE - Center for Health AI Regulation, Governance & Ethics
Posts
-
Regulating Generative AI in Healthcare: Key Takeaways from the FDA's Digital Health Advisory Committee Meeting

On November 20-21, 2024, the FDA hosted its inaugural Digital Health Advisory Committee meeting to tackle one of the most pressing challenges in healthcare: the regulation of Generative AI-enabled devices. This meeting marked a pivotal moment in how we think about the lifecycle of AI in healthcare, but it also highlighted critical blind spots and paradigm shifts.

Key Insights from the Meeting:

1. A Paradigm Shift from Premarket to Postmarket Monitoring: Generative AI, unlike traditional devices or drugs, is probabilistic and dynamic, requiring robust postmarket monitoring. This marks a significant shift from the FDA's historically premarket-focused regulatory model.

2. Generative AI ≠ All AI: It's encouraging to see the FDA address Generative AI specifically, acknowledging that "AI" isn't a monolith. The nuances of GenAI demand tailored regulatory approaches to avoid stifling innovation with blanket rules designed for other AI technologies.

3. The Transparency Challenge: A key recommendation was for full disclosure of datasets used to train AI models. However, this is often impractical, especially with off-the-shelf LLMs like Nabla's use of Whisper, trained on vast, undisclosed datasets. Regulators may need to adjust expectations to address these realities without compromising safety.

4. The Human Oversight Paradox: The meeting emphasized extensive human oversight, but requiring intervention for every AI decision is neither scalable nor effective. A better focus could be continuous monitoring systems to flag anomalies early, reducing risks like monitoring fatigue and unchecked human approvals.

5. Elephant in the Room #1: AI Tools Outside the FDA's Scope: Many rapidly adopted AI tools — such as scribes, administrative systems, and decision-support tools — fall outside the FDA's SaMD purview, despite their substantial downstream impact on patient safety. These tools are instead expected to be governed by other regulatory bodies like CMS, OCR, or ASTP/ONC.

6. Elephant in the Room #2: Impact of Administration Changes: How will changing administrations reshape AI regulation? New leadership could bring shifts in priorities and policies, potentially altering the trajectory of current frameworks.

Want the full summary of the FDA's meeting? https://lnkd.in/dkJ9HRpZ
November 20-21, 2024: Digital Health Advisory Committee Meeting
fda.gov
-
Voices in Healthcare AI: Top Physicians Shaping AI in Medicine

At #CHARGE, we've spotlighted Chief AI Officers and Health AI attorneys shaping the governance of health AI. Today, we turn our attention to practicing physicians - a critical yet often-overlooked group directly engaged in patient care, where the true impact of AI tools will be felt. Tasked with integrating these technologies into their workflows, they offer a boots-on-the-ground perspective on how AI can improve outcomes and where it may fall short. Their insights keep us focused on creating AI tools that truly enhance patient care while addressing real-world challenges. That's why we've put together this list of top physicians shaping AI in medicine:

- Adir Sommer, MD - Ophthalmology resident at Rambam Health Care Campus, and co-developer of the #OPTICA framework for clinical AI, published in NEJM AI.
- Aditya (Adi) Kale - Radiology fellow at NIHR with a specific focus on AI and patient safety.
- Amit Kumar Dey - Diabetes specialist, founder of #Doctors_AI, and advocate for clinical AI adoption.
- Annabelle Painter - GP registrar at NHS, CMO of Visiba UK, and host of the Royal Society of Medicine Digital Health Podcast.
- Benjamin Schwartz, MD, MBA - Orthopedic surgeon and digital health advisor.
- Eric Rothschild, MD - OB/GYN and advisor writing on healthcare and AI intersections.
- Graham Walker, MD - Emergency physician and AI innovation leader at The Permanente Medical Group, Inc.
- Jacob Kantrowitz - Primary care physician at Tufts Medicine and co-founder and CMO at River Records.
- James Barry, MD, MBA - Neonatologist and NICU director at UCHealth, and co-founder of #NeoMIND_AI.
- Jesse Ehrenfeld, MD, MPH - Anesthesiologist, American Medical Association immediate past president, and advocate for health AI policy.
- Josh Au Yeung - Neurology registrar, Dev&Doc podcast host, and clinical lead at TORTUS.
- Lukasz Kowalczyk, MD - Gastroenterologist and consultant for health AI development and strategy.
- Morgan Jeffries - Neurologist and Associate Medical Director for AI at Geisinger.
- Piyush Mathur - Anesthesiologist at Cleveland Clinic and co-founder of BrainX AI.
- R. Ryan Sadeghian, MD - Pediatrician and CMIO at The University of Toledo, applying AI to clinical practice.
- Shelly Sharma - Radiologist advancing AI applications in radiology.
- Spencer Dorn - Vice Chair of the Department of Medicine, University of North Carolina at Chapel Hill, and AI thought leader.
- Susan Shelmerdine - Radiology professor and AI advisor at The Royal College of Radiologists.
- Tina Shah, MD, MPH - Pulmonary physician at RWJBarnabas Health and CCO of Abridge.
- Yair Saperstein, MD, MPH - Hospitalist at Mount Sinai Health System and founder of Avo.
-
Generative AI Tackling Healthcare Disparities at Children's Hospital Los Angeles

This article by Katie Palmer in STAT highlights a fascinating and impactful use case of generative AI in healthcare: Children's Hospital Los Angeles (CHLA) is piloting a program to translate discharge notes into Spanish using AI, aiming to improve care for patients with limited English proficiency. What makes this story remarkable:

1. Generative AI Reducing Disparities: While many fear AI may perpetuate bias and discrimination due to skewed datasets, this is a case where AI is actively addressing healthcare disparities. In a diverse city like Los Angeles, where 60% of CHLA's patients speak Spanish, this program could make a critical difference.

2. Preparing for Compliance with the Section 1557 Rule: The pilot also highlights how organizations are gearing up for the nondiscrimination rule under Section 1557, set to take effect in 2025. While this specific initiative focuses on document translation requirements, it's encouraging to see health systems like CHLA aligning with broader compliance mandates. This suggests readiness for other parts of the regulation, including provisions on AI systems and clinical algorithms.

3. Responsible Pilots & Governance Practices: The hospital's cautious approach reflects the current lack of universal best practices in AI compliance. By conducting biweekly audits of AI-translated discharge notes and involving patient focus groups and human translators, CHLA is setting an example of how to test and implement such tools responsibly.

As CHLA's Troy McGuire pointed out, this pilot represents the first time he's been on board with a machine translation tool. Organizations like CHLA and Seattle Children's are transforming patient care by using generative AI to address language barriers. Beyond regulatory compliance, these efforts are setting the stage for more inclusive and equitable healthcare communication.

Congratulations to Jaide Health and Joe Corkery, MD for their work on this pilot. Tools like these represent the first steps toward more equitable, accessible healthcare systems.

Read the full article here: https://lnkd.in/dQunAMtd

#GenerativeAI #HealthcareEquity #AICompliance #OCR1557 #AIInMedicine
-
A lot of EHRs were built without the clinician or the patient in mind. The rapid adoption driven by the #HITECH Act prioritized turning clinical documentation into claims and meeting basic Meaningful Use requirements. Unfortunately, this led to significant downstream effects like poor usability, physician burnout, and even safety issues. AI has the potential to address many of these challenges (like Graham Walker, MD also highlighted in his recent post - see comments). However, it’s equally important to ensure that AI systems themselves are developed and implemented with clinicians and patients at the center. This isn’t just about fixing the shortcomings of EHRs; it’s about preventing AI systems, now being rapidly adopted for clinical and administrative purposes, from introducing new broken workflows driven by misaligned incentives. #healthcareAI #EHR #AIgovernance #patientcenteredcare
EHRs were never truly designed for clinical care; they were built to digitize outdated paper workflows, with little regard for what clinicians actually need. And thanks to government incentives focused solely on "uptake," these systems got the green light without ever having to improve our clinical workflows or information management.

Think about it: when you look at an EHR today, what do you see? Data is sorted by type, date, or status, like an endless spreadsheet of disconnected elements. But where's the purpose? Where's the problem-oriented structure that reflects how we actually think in medicine? Labs, orders, and notes are all ordered and written for a reason, tagged to a medical problem or covering a set of issues, yet they're often buried and scattered across the record in a way that loses clinical meaning.

It's time we move past these systems and start designing tools that support clinicians rather than forcing them into workflows that make little sense. The clinical information management system should be designed with problem-oriented care at its core, not data tables, so that information serves its purpose and clinicians can focus on what they do best: caring for patients.

https://lnkd.in/enU4ZKmn

#healthcareinnovation #EHR #digitalhealth #clinicianworkflows #RiverRecords #medicalAI #betterCharts
-
When "Human in the Loop" Becomes the Problem

In AI governance and ethics, "human in the loop" is often held up as a safeguard: a way to ensure oversight and ethical deployment. But the 2020 Practice Fusion case reminds us of a critical truth: human involvement doesn't always prevent harm. Sometimes, it causes it.

During the height of the opioid crisis, Practice Fusion and Purdue Pharma collaborated to exploit clinical decision support (CDS) alerts in electronic health records. Instead of aiding physicians with unbiased recommendations, these alerts were deliberately engineered to push opioid prescriptions. This was no technological accident. It was a calculated strategy, where human oversight amplified harm rather than mitigating it.

The consequences were significant. Practice Fusion paid $145 million to resolve criminal and civil investigations, including a $25 million fine and stringent compliance requirements under a deferred prosecution agreement. These measures aim to prevent such abuses in the future.

But the broader lesson is clear: technology, including AI, is neutral - it reflects the intentions of those who control it. As we advance AI in healthcare, we must critically examine the role of human oversight. Who is the "human in the loop"? Are their motivations ethical? Governance frameworks must address not only oversight mechanisms but also the ethical accountability of the humans behind them. Sometimes, the problem isn't the algorithm - it's us.

Article about the Practice Fusion case in the comments.

#AIEthics #AIGovernance #HealthAI #OpioidCrisis
-
What Do the Election Results Mean for Health AI Regulation?

As the recent elections reshape the political landscape, the implications for health AI regulation are a key question waiting to unfold. At #CHARGE, we align with the perspective of Micky Tripathi, Assistant Secretary for Technology Policy, who recently stated that he anticipates "a certain continuity of the policies" regardless of who takes the White House. We believe the trend toward regulating health AI will remain steady, despite political shifts. Here's why:

1. A Bipartisan Foundation for Federal AI Legislation: AI governance has been a rare area of bipartisan collaboration in Congress, highlighted by the launch of the Joint Task Force on AI in early 2024. A glimpse into the GOP's stance on AI regulation can perhaps be seen in the #Texas_Responsible_AI_Governance_Act draft, which draws from the #EU_AI_Act and addresses high-risk systems, transparency, and discrimination mitigation. With the new Republican majority, Congress could advance comprehensive legislation to establish a unified federal framework for AI governance.

2. State-Level Regulation Will Persist: California has already been a leader in AI governance, and now states like Texas are following suit. As federal dynamics evolve, state-driven regulation will likely remain a powerful force in shaping health AI standards.

3. Executive Orders Show Continuity: The first U.S. executive order on AI, issued by the Trump administration in 2019, emphasized "American values" and robust, trustworthy AI. President Biden expanded on this in late 2023, reflecting technological advancements and public interest. The new administration is likely to build on these foundations, maintaining a focus on safe and ethical AI.

4. Musk's Surprising Support for Regulation: Elon Musk, anticipated to play a significant role in the new administration, has been an advocate for AI restraint. His support for California's #SB_1047 (vetoed by Governor Gavin Newsom) demonstrates his openness to pro-regulation measures in AI, even when it seems counterintuitive to his broader reputation.

5. Litigation Risks in a Less-Regulated Environment: If federal regulatory bodies are reduced in scope, healthcare providers and payers should brace for a potential rise in patient-initiated litigation. Robust AI governance mechanisms won't just ensure compliance - they'll be essential for mitigating legal risks.

The bottom line? While the political winds may shift, the demand for responsible health AI governance could remain strong. Whether through federal legislation, state initiatives, or increased litigation, transparency, fairness, and patient safety will stay at the forefront.
-
FDA's Vision for Responsible Generative AI in Healthcare

In a pivotal piece by FDA officials Haider Warraich, Troy Tazbaz, and Robert Califf, published in JAMA, the agency outlines a forward-looking strategy for #generative_AI in healthcare. Recognizing the immense potential and unpredictability of large language models, the FDA emphasizes that healthcare AI must be managed with rigorous life cycle oversight: not just at launch, but through continuous monitoring and adaptation to safeguard patients.

One key aspect of the FDA's approach is the emphasis on #post_market_surveillance, which is set to reshape AI regulation in healthcare. Unlike traditional medical devices, AI models require local, ongoing evaluation as their performance can shift over time and vary across patient populations. This will necessitate active collaboration between AI vendors and healthcare providers, such as hospitals and health systems, where these tools are deployed.

This unique dynamic raises questions about accountability, as the FDA typically places regulatory responsibility solely on manufacturers. However, in the case of evolving AI tools, Commissioner Robert Califf has already hinted at a shift, stating back in September, "I think there's a lot of good reason for health systems to be concerned that if they don't step up, they're going to end up holding the bag on liability when these algorithms go wrong."

The FDA's evolving stance suggests a future where healthcare providers may play an active role in AI oversight, sharing the responsibility for ensuring safety and performance in real-world applications. This shift could redefine regulatory accountability in healthcare AI, underscoring the importance of continuous, responsible collaboration among all stakeholders.

Link to the full article in the comments.

#CHARGE #FDA #AIGovernance #GenerativeAI #AICompliance #HealthcareAI
-
This is a strong overview of some of the key players shaping the health AI governance space. At CHARGE, we would add that ONC/Assistant Secretary for Technology Policy has already laid out a well-defined strategy in this area, especially with the upcoming HTI-1 mandate. Starting January 1, 2025, health technology vendors must ensure compliance as per the HTI-1 final rule. Specifically, ONC will certify predictive decision support interventions (Predictive DSIs) through its Authorized Certification Bodies (ONC-ACBs) within the Certified Health IT program. On the FDA side, efforts are also advancing, particularly around the regulation of generative AI technologies. The FDA’s Digital Health Advisory Committee (DHAC) will be meeting in November to discuss total product lifecycle considerations for generative AI-enabled devices. For more details, the agency has published a comprehensive Executive Summary for this meeting (link in the comments). #HealthcareAI #AIGovernance #DigitalHealth #FDA #ONC #HealthTech #CHARGE
There's a secondary AI land grab happening in health care to commercialize "governance" of AI software. You might be surprised who's staking out positions in the nascent Health AI Governance market: (All links in comments)

1. CHAI (Coalition for Health AI) was founded very recently, is very active, and has a ton of high-impact academic and industry membership developing guidelines and certification frameworks. Some of the output is borderline pedantic, but they're iterating fast and trying to get practical with an "Assurance Labs" program akin to ONC-ATLs.

2. TJC (The Joint Commission) is the hospital accreditation juggernaut, and in late 2023 they announced a new certification in relation to AI: the Responsible Use of Health Data Certification. It doesn't seem to get into model measurement, but it overlaps upstream with data use.

3. Avanade SAIGE ("Smart AI Governance Engine" from Duke, Microsoft, Accenture) was announced in the Sphere at HLTH this week and so far seems, like the Sphere itself, to be hollow. The promo video is set to an anthemic score and the b-roll footage is so mid. But never underestimate Duke or Microsoft, so we'll see. The focus seems to be registration and control of AI rather than measurement, for now. Makes sense: measurement is way harder.

4. DiME Seal (Digital Medicine Society Seal) is meant to be an attestation of quality rather than a governance model, product, or service. It was announced this month and appears to have 15 products that have gone through it.

5. Aidoc BRIDGE with NVIDIA (Blueprint for Resilient Integration and Deployment of Guided Excellence). I don't know what to say about this one; still trying to make sense of it after their HLTH announcement.

6. Epic Seismometer (their one and only open source project) is focused on measurement rather than governance, which makes a lot of sense since the EMR is the cosmic microwave background of health care workflows. Instrumentation must happen in the EMR.

7. HIMSS AMAM (Adoption Model for Analytics Maturity) is... I don't know what. The EMRAM certainly had traction for EMR adoption, but it's unclear if HIMSS can use their existing distribution to drive adoption of AMAM when it seems a bit off the AI mark.

Valid AI was launched out of UC Davis earlier this year, but seems to be inactive***. Surprisingly to some and perhaps expected by others, the NCQA and AMIA don't seem to have staked out clear positions or products or services just yet. ASTP/ONC and the FDA are the obvious federal forces governing this territory. It strikes me as tricky, let's say, to have so much overlap in quality assurance products and services right now when the sheriffs in town don't quite have things figured out. It's a sign we're living in the Wild West of health care AI. Time will tell where all these "governance" products and services go. My hope is we do not end up in another CQM-like boondoggle where the numbers often mean more to CFOs than patients.

*** CORRECTION: Valid AI is active.
-
Healthcare AI Compliance Survey: Industry Leaders Wanted!

As the healthcare landscape rapidly evolves, regulations like the 1557 rule for decision support tools and the DOJ's updated ECCP guidelines are set to transform how AI is governed in healthcare. We're conducting the first-ever industry-wide survey to capture how health systems and health plans are preparing for these pivotal changes.

If you're a compliance leader, legal expert, or AI/data leader in healthcare, this is your chance to shape the future of AI compliance. Your insights will help set a benchmark for how the industry navigates the upcoming regulatory challenges.

Want to be a part of this? Fill out the quick form below, and we'll reach out for a qualitative interview. Don't miss the chance to have your voice heard in this critical industry moment.

https://lnkd.in/dKUVph8m (Limited spaces; please fill out the form if you're interested)

#HealthTech #AICompliance #HealthAI #HealthcareRegulation #AIinHealthcare #AIGovernance
CHARGE - Healthcare AI Compliance Survey
docs.google.com