"So far, AI usage models are pay-per-use and are not like traditional software with fixed licensing fees. So, the more an organization integrates AI into daily workflows, the higher the financial burden becomes. Unless hospitals and healthcare providers negotiate cost-effective pricing structures, implement usage controls or develop in-house AI systems, they may find themselves in a situation where AI adoption leads to escalating operational costs rather than the anticipated savings." (Rodriguez Ronald, UT Health San Antonio) https://lnkd.in/gahpaWA9
CHARGE - Center for Health AI Regulation, Governance & Ethics
Health and Public Services
Boston, MA · 1,114 followers
Exploring healthcare AI regulation, governance, ethics & safety standards
About us
CHARGE is a community dedicated to fostering meaningful discussions on health AI regulation, governance, ethics, compliance & safety. We bring together healthcare stakeholders — including policymakers, compliance and ethics leaders, clinicians, data professionals, and AI vendors — to collaboratively explore the evolving challenges and opportunities in health AI. Through shared insights and expertise, CHARGE aims to shape a responsible, transparent, and ethical future for AI in healthcare.
- Website
- chargeai.org
- Industry
- Health and Public Services
- Company size
- 2-10 employees
- Headquarters
- Boston, MA
- Type
- Educational institution
- Founded
- 2024
Locations
- Primary
- Boston, MA, US
CHARGE - Center for Health AI Regulation, Governance & Ethics employees
Posts
UPMC has developed a virtual environment, known as Ahavi, specifically designed to validate health AI models. According to Jeffrey Jones, SVP of Product Development at UPMC Enterprises, "This is an environment that allows our organization to assess the efficacy of AI models against our patient population prior to ever having to deploy it against our actual population." https://lnkd.in/e522KfVB
Jason Hill, Ochsner Health’s innovation officer, said he goes to sleep most nights and wakes up most mornings worried about one thing: the state of generative AI governance in healthcare. To him, providers and other healthcare organizations are in dire need of frameworks to ensure their AI tools are safe and perform well over time.
Certain AI regulations in healthcare mandate disclaimers when using AI. For example, California's recently enacted AB 3030 requires healthcare organizations that utilize #generative_AI to create written or verbal patient communications involving clinical information to include a disclaimer explicitly stating that the communication was AI-generated. Such disclaimers, while essential for transparency and patient trust, can create significant operational challenges for health systems. They require the meticulous registration and oversight of all AI tools, as well as the creation and management of distinct workflows for each tool to consistently communicate disclaimers to patients. In this context, the recent paper "A Heuristic for Notifying Patients About AI: From Institutional Declarations to Informed Consent," published in The American Journal of Bioethics by Matthew Elmore, Nicoleta Economou, and Michael Pencina, provides a highly practical framework. The authors propose a structured heuristic to determine how and when to notify patients about AI use, balancing transparency and ethical considerations with operational feasibility. Their approach categorizes AI tools based on clinical risk and AI autonomy, suggesting tailored notification strategies ranging from broad institutional declarations to detailed, informed consent processes. This paper significantly contributes to simplifying compliance with regulations like AB 3030, helping healthcare providers operationalize mandatory disclaimers without compromising patient safety or trust. https://lnkd.in/gvM84ZkK
While the future of federal oversight of healthcare AI remains uncertain, state-level AI legislation continues to evolve. https://lnkd.in/d6G8BYcX
Primary care physicians see immense potential in AI - but they also have significant concerns, according to a recent survey by Rock Health and the American Academy of Family Physicians. While most primary care clinicians are optimistic about AI improving their clinical efficiency, workload, and personal wellbeing, they also highlighted serious concerns:
- 81% say they need more training to fully trust AI solutions.
- Nearly 70% want medico-legal protections before fully adopting AI.
- 64% want education on legal, liability, and malpractice risks.
- 68% seek ethical guidelines to ensure responsible AI use.
Read more insights from the survey here: https://lnkd.in/gFpF-mCR
In a recent JAMA article, Peter Embí, M.D., M.S. and colleagues describe the launch of #TRAIN (Trustworthy and Responsible AI Network), a healthcare consortium established in 2024. TRAIN aims to develop collaborative governance frameworks, practical tools, and standardized approaches for AI deployment across healthcare systems. The consortium currently includes over 50 organizations, such as Vanderbilt University Medical Center, Duke University Health System, Advocate Health, UT Southwestern Medical Center, and Northwestern Medicine. The proliferation of such health system consortia focused on promoting trustworthy AI in healthcare - like the Coalition for Health AI (CHAI), Health AI Partnership, and VALID AI - highlights the pressing need for best practices in AI implementation within healthcare settings. Notably, Microsoft is among TRAIN's founding organizations, a similarity shared with CHAI, which also includes major technology companies like Microsoft and Google, as well as prominent healthcare systems actively involved in incubating AI ventures. This model, as promoted by CHAI, has previously faced criticism from Republican lawmakers who argued last year that it could place large organizations actively developing and commercializing AI models in the position of evaluating AI programs created by affiliated entities or competitors. https://lnkd.in/e58PW6Vk
Big Tech's data centres aren't just energy-intensive - they're increasingly recognized as a public health concern. While #AI_safety discussions in healthcare typically focus on direct impacts, we often overlook the hidden consequences of pollution associated with AI infrastructure. Recent research from University of California, Riverside and Caltech, led by Associate Professor Shaolei Ren, highlights this issue, revealing that pollution from data centres operated by tech giants such as Google, Microsoft, Meta, and Amazon has caused more than $5.4 billion in healthcare costs across the US over the past five years. Operating data centres requires significant amounts of electricity, much of which is generated from fossil fuels. Notably, a single #ChatGPT query consumes nearly ten times the electricity of a standard Google search. This reliance on fossil fuels results in greenhouse gas emissions linked to respiratory diseases, cancer, and other serious health conditions, particularly affecting communities situated near these facilities. https://lnkd.in/d8yxvw7a
According to a recent survey by the American Medical Association, 61% of physicians are concerned that health plans' increasing use of #AI for prior authorization is leading to more denials, exacerbating avoidable patient harm. “Using AI-enabled tools to automatically deny more and more needed care is not the reform of prior authorization physicians and patients are calling for,” said AMA President Bruce Scott, MD. “Emerging evidence shows that insurers use automated decision-making systems to create systematic batch denials with little or no human review, placing barriers between patients and necessary medical care. Medical decisions must be made by physicians and their patients without interference from unregulated and unsupervised AI technology.” This news comes as major insurers, including UnitedHealth and Humana, face class-action lawsuits alleging discriminatory use of AI in utilization management practices. https://lnkd.in/dNeqZgdJ
Coalition for Health AI (CHAI) is developing a model card registry to provide AI purchasers, such as health systems, with essential insights into AI models' training data, fairness metrics, and intended uses. AI vendors included in this registry receive a CHAI “stamp of approval” upon successfully completing a CHAI model card. This development by CHAI represents an important advancement in promoting transparency and streamlining AI procurement for healthcare organizations. It complements previous initiatives by the Assistant Secretary for Technology Policy and aligns closely with the HTI-1 rule, which became effective in January. However, as the Fierce Healthcare article states, "The model registry does not solve the problem of validating the model, which requires evaluating the model’s performance against a locally representative data set, among other technical tests."