AI in health care: boundless potential meets cautious optimism

From the front lines of providers to the back offices of payers, generative AI (GenAI) is poised to be a transformative force across the health care and life sciences sectors, streamlining administrative tasks and redefining patient experiences, if handled thoughtfully and responsibly. In a heavily regulated industry like ours, introducing artificial intelligence (AI) into the workplace can create pathways to better care as well as tensions with employees and risks that must be managed.

Consolidating the views of employees across ranks and sectors, the pioneering EY AI Anxiety in Business Survey broadly reveals how people think about AI and its impact on their jobs — and considerations about how companies should communicate their AI intentions for greater success. A deeper look at the responses of 66 health care and life sciences executives at the VP level or above reveals that 85% think AI adoption is not happening quickly enough in the workplace. Further, 89% are anxious about falling behind and losing out on promotions without AI technology, and more than half (53%) report having incorporated AI into their daily work.

Here are two trends worth highlighting as hospitals, insurers, pharma players and others plot their AI journeys.

1. AI garners widespread trust — but not enough for patient care currently

Nearly every health care executive surveyed (96%) says they trust AI, with 94% seeing it as a positive force in their workplace and 91% believing it has a positive effect on society as a whole. This is a promising foundation among executives near the highest ranks for building a more efficient future for the sector.

Notably, 95% of the executives have advocated for the use of AI in their organizations, and around 72% of their organizations are currently using GenAI. Furthermore, 94% of the health care respondents believe that AI will enhance productivity and efficiency and allow them to focus on higher-value tasks.

The next step for executives is crafting the right value-added roadmap for moving forward amid such a high level of engagement. But the data reveals negative aspects as well, highlighting the importance of crafting a culture of awareness to understand how GenAI works, how it abides by regulation, and how to scale responsibly while adding value.

A significant 85% of health care executives express worry about AI in the workplace in general, with this fear increasing for 62% of executives compared with the previous year. And 83% are concerned about AI being involved in personalizing medical plans or aiding in medical diagnoses. An overwhelming 91% believe AI needs human supervision, highlighting the importance of a human element in AI applications.

This suggests that while the sector may be ready to embrace AI for administrative tasks (such as appointment scheduling, back-office functions and perhaps even patient education), barriers remain when considering the technology for helping with tasks directly tied to patient outcomes. For instance, 88% are comfortable with the use of AI in the hiring process, and over 50% prefer for the technology to be leveraged in tasks such as creating meeting notes, answering basic customer questions, analyzing data and forecasting trends, versus having humans perform those tasks on their own.

This aligns with a recent EY report on building the GenAI foundation in health care, showing how back-office use cases — such as billing, claims, waste reduction, scheduling and compliance — provide an entry point for getting comfortable with the technology and instituting proper data governance protocols. GenAI also shows promise in health care supply chains and drug discovery, while the next generation of patient care will likely rely on leveraging the technology with real-time, accurate biometric data through wearables. This evolution of patient monitoring can help prevent health conditions from escalating to crisis levels and move toward better quality of care.

2. Responsible AI, ethics and transparency should be core to the mission

But before we reach that point, health care organizations must establish frameworks for protecting stakeholders and mitigating risks. In the same sample of executives, 85% say they would have a more positive view of an organization that uses AI if it focused on ethics and responsibility and was transparent about its use.

In the overall survey, employees across sectors and generations said they wanted to know more about leading AI practices through routine updates, given how quickly the technology is evolving. Employees also expressed trepidation about what constitutes responsible, legal and ethical use and what does not. Specifically, they want more training in the legal implications of AI, cybersecurity risks and AI ethics, the survey showed.

A comprehensive AI governance framework in health care requires embracing responsible AI principles like transparency, fairness and human-centricity — in turn driving security, sustainability and explainability. Such a framework should be grounded in a health care organization’s values and duty to comply with regulation, yet also be flexible enough to inspire further innovation and continue to evolve in a dynamic landscape.

This is no easy task, and often a trusted advisor can help develop this crucial roadmap. The EY principles on AI are publicly available and guide everything we do with the technology. As just one example of our work with clients, our professionals assisted a pharma giant with AI risk management, providing confidence in its approach and highlighting opportunities for improvement (especially as the European Union was unveiling new AI regulations). In another instance, our professionals helped an insurance provider use natural language processing to custom-train an algorithm to extract requirements that would then populate a central database, enabling the company’s analysts to rapidly search key words, phrases and characters by both state and business function. The new database significantly reduced the manual search time from about 250 hours to 15 minutes.
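To make the insurance example concrete, here is a deliberately simplified sketch of that kind of pipeline: pull requirement-like sentences out of regulatory text and index them in a table that analysts can query by keyword, state and business function. Everything here is hypothetical and for illustration only — the regex cue, the function names and the sample documents are my own inventions, and the actual engagement described above used a custom-trained NLP model rather than a simple rule.

```python
import re
import sqlite3

# Hypothetical cue for requirement language; a trained model would replace this.
REQUIREMENT_CUE = re.compile(r"\b(shall|must|is required to)\b", re.IGNORECASE)

def extract_requirements(text):
    """Return the sentences in `text` that look like regulatory requirements."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if REQUIREMENT_CUE.search(s)]

def build_database(documents):
    """Index extracted requirements by state and business function.

    `documents` is a list of (state, business_function, raw_text) tuples.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE requirements (state TEXT, function TEXT, requirement TEXT)"
    )
    for state, function, raw_text in documents:
        for req in extract_requirements(raw_text):
            conn.execute(
                "INSERT INTO requirements VALUES (?, ?, ?)", (state, function, req)
            )
    conn.commit()
    return conn

def search(conn, keyword, state=None):
    """Keyword search over indexed requirements, optionally filtered by state."""
    query = (
        "SELECT state, function, requirement FROM requirements "
        "WHERE requirement LIKE ?"
    )
    params = ["%" + keyword + "%"]
    if state:
        query += " AND state = ?"
        params.append(state)
    return conn.execute(query, params).fetchall()

# Toy documents standing in for state regulatory filings.
docs = [
    ("NY", "claims",
     "Insurers shall acknowledge claims within 15 days. Welcome packets are optional."),
    ("CA", "claims",
     "A claim must be paid or denied within 40 days of receipt."),
]
conn = build_database(docs)
results = search(conn, "claim")
```

The point of the design is the one the article makes: once the extraction step populates a structured store, a search that previously meant reading documents by hand becomes a sub-second query filtered by state and business function.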

Overall, these findings show that while the positive impact of AI in health care is increasingly recognized, legitimate concerns remain. Addressing these issues — particularly those related to ethical and responsible use of AI, transparency in its applications, and the necessity of human oversight — will be key for cautiously navigating toward the next evolution of care.

Thanks to Sezin Palmer, Ricardo Vilanova and Ziv Yaar for their input into this article. The views expressed in this article are mine and do not necessarily represent the views of Ernst & Young LLP (US) or other members of the global EY organization.

Mitch Berlin

Vice Chair - EY Americas Strategy and Transactions

7 months ago

The potential applications for AI in health care are vast; however, there are many factors organizations need to consider when developing and executing an AI strategy that is both responsible and transparent – great article.

Jessica Bennett

Simplify nurturing old & new leads with email marketing. | Speak your customers' language | Content & communication strategies | Email Marketing | Conversion Copywriting | Technical Software Learning Specialist Trainer

8 months ago

Yes yes yes!! Same sentiment applies to all fields as well

Idrees Mohammed

Try "midoc.ai”- AI based patient centric healthcare App. | Founder @The Cloud Intelligence Inc.| AI-Driven Healthcare

8 months ago

It's crucial to integrate AI responsibly in healthcare by fostering transparency and accountability. Striking a balance between leveraging AI's efficiency and ensuring human oversight is essential. Robust ethical frameworks, ongoing monitoring, and clear communication with patients can mitigate concerns and build trust. Arda Ural, MSc, MBA, PhD

Berna Demiray, PhD

Leading Innovative Therapies through Strategic Initiatives and Executive Leadership

8 months ago

An interesting thought experiment on the contradictory trust vs. hesitation around AI. It seems widely acknowledged that it is coming (really, it has already arrived) and that it will have benefits, but our greatest challenge is to alleviate the concerns around rogue applications, or lack of oversight from poorly trained AI. What checks and balances do we need to incorporate in order to ensure we can benefit from the optimal application of a technology with so much potential!?

Cenk Sumen

Transmogrifying

8 months ago

We should not expect AI to judge on ethics, diversity, and other social issues, the same way we don’t expect calculators to determine policy or write our poetry. The focus should be on scaling rote operations and reducing burnout from administrative overload. I’m also surprised that only 91% of polled executives felt that AI needed human supervision. Just as all writers need (and benefit from) editorial oversight, AI needs (and will benefit from) domain-specific expert oversight and guidance.
