Why Is Shitty, Dangerous Health AI Being Pushed by Multi-Billion-Dollar Conglomerates?

In April 2020, amid the pandemic's chaos, Cigna passed on my pitch for an AI solution that could have helped thousands of COVID patients. Their reason? They had just onboarded 200 data scientists to build their own, and were brimming with anticipation for their self-proclaimed "great" creation. By year's end, they dubbed this expanded team 'Evernorth', focused on "health service solutions". Fast forward 3.5 years, and the bitter truth surfaces: Cigna has yet to deliver a single AI product, let alone anything for COVID patients.


During this time, my team and I were knee-deep in developing the WellAI Medical Research Tool. We were collaborating with genetics researchers worldwide, leveraging the tool to tackle questions beyond human capabilities and to guide medical researchers toward the focus areas most critical in combating COVID. The tool had a unique capability, unrivaled by any other machine or human: whether you were a genetics scientist, a behavioral psychiatrist, or any other type of medical researcher, it could pinpoint the exact area of your expertise most critical for fighting the pandemic.


Our pitch to Cigna, naturally, was tailored differently. Understanding their need for an AI solution in their wellness platform, we believed our AI's ability to digest, summarize, and offer preliminary insights on vast amounts of medical data would be invaluable, especially during a health crisis of such magnitude.


Yet, the core question posed to us by Cigna's data science leaders wasn't about the app's specifics, the system, the AI, or how we differed from the likes of WebMD. It was, "Don't you think our newly hired 200 data scientists could replicate and surpass your AI tool?" That question threw me off then, and it still does. It's a misconception to think that simply amassing data scientists can solve complex problems. Our team had dedicated over a year to perfecting our solution, with the majority of that time spent on refining the dataset alone. There's a relevant saying in engineering, which holds true for data scientists as well:

"What one engineer can solve in a week, two engineers will take two weeks to solve."

This reflects a crucial insight that startup founders understand but corporations often overlook: deploying 200 data scientists to tackle a problem, as opposed to just 2, doesn't guarantee that the problem will be resolved 100 times faster or more effectively. The dynamics of team size and productivity don't scale linearly. In fact, often, especially in a bureaucratic setting, they don't scale at all.


Sure enough, those 200 Cigna data scientists haven't produced a single AI solution in 3.5 years!


And as a side note, how come no one has ever heard of the 400 ChenMed data scientists, seen a single publication from them, or met a single one of them at AI and data science conferences? Are they not allowed to leave the ChenMed basement? (Reference: Chris Chen, ChenMed's CEO, presentation on June 24, 2022, especially from minute 25:40.)


Joining a bureaucratic structure often means losing sight of one's impact and motivation, becoming just another cog in the machine, performing tasks as instructed, no more, no less.


Frankly, I'm exhausted discussing tech failures in healthcare – it's a topic I've covered extensively, yet the setbacks keep emerging. Nonetheless, these AI missteps by healthcare giants and the government are crucial to highlight and rectify.


(Quick side note: This article is just a fragment of the comprehensive report I'm preparing, titled “Digital Health 2024: Don’t Miss These 50 Names,” coming up in the next few weeks. Stay tuned!)


Anyway, let's delve deeper. Here's the outline of this article:


1. AI Failures by Multi-Billion Dollar Healthcare Corporations and the U.S. Government

1.1. A bug in a 2016 Medicaid AI algorithm cut off aid to 8,000 elderly and severely disabled people.

1.2. A 2019 UnitedHealth AI algorithm prioritized care for healthier white patients over sicker Black patients, impacting over 100 million patients.

1.3. Cigna sells patients’ lab results and claims information to WebMD.

1.4. Cigna's AI stress tool: shitty and potentially hazardous.

1.5. Cigna’s AI algorithm automatically denies claims. Cigna “doctors” concur without even looking.

1.6. Epic added AI-based voice assistant Suki to its EHR. Bad tech meets worse tech. Mind-boggling.

1.7. UnitedHealth pushed employees to follow Optum's NaviHealth algorithm to cut off Medicare patients' rehab care.

2. So, Why Is Shitty, Dangerous Health AI Being Pushed by Multi-Billion-Dollar Conglomerates?

2.1. The Monopolistic Might: Complacency and Neglect in Big Healthcare

2.2. The AI Quagmire: Profit Over Patients

2.3. Too Big to Fail, Too Regulatory-Capture Driven to Care

2.4. Health Costs Are Rising and Health Conglomerates Are Consolidating

2.5. In a Bureaucracy Where Everyone Does Everything, No One Takes Responsibility for Anything

2.6. Healthcare Mega Corporations Don't Give a Shit About Best Practices for AI and AI Ethics

2.7. The Boardroom's Frenzy for AI

2.8. The Prior Authorization Mess

2.9. The Collateral Damage: Patients and Medical Providers

2.10. Stifling Innovation: The Shadow Over AI Startups

3. My Take


Brace yourself for a no-holds-barred voyage into the labyrinth of AI debacles, orchestrated by some of the biggest middlemen in healthcare history. We're about to take a thrilling, eye-opening leap into a world where technology meets Epic (no pun intended) missteps:


Continue reading this story on my Substack at sergeiAI.substack.com...


Hi! My name is Sergei Polevikov. In my newsletter 'AI Health Uncut', I combine my knowledge of AI models with my unique skills in analyzing the financial health of digital health companies, no pun intended. Why "Uncut"? Because I never sugarcoat or filter the hard truth. Show your support for my work by subscribing to 'AI Health Uncut' either on LinkedIn, or on Substack at sergeiAI.substack.com!

Rachel Schneider

Helping Entrepreneurs and Start-Up Founders Achieve Their Dreams. Building Sustainable Business via Strategic Go-to-Market/Market Development - AI, SaaS, Tech, HR, Retail (etc.) (AVAILABLE May 2025)

1y

Slightly different take on this: It is the "NIH" syndrome - not invented here - ChenMed and Cigna are in the system - many startups are outside the system. If it is developed "in the system" or by big tech which is already "in the system" then they get preference. And it is "follow the money" on everything. There are plenty of startups that are equally as "incompetent" trying to opportunistically make money without putting patient care first. I still assert and have heard stories about startups that developed great solutions - usually by physicians, nurses, and healthcare practitioners that are doing well or are positioned to do well. Some are on the fringe of healthcare - not directly affecting patients, but affecting surrounding workflows. I get the impression healthcare is like a clique - know the right people, make the right friends, talk the right "language" and you are in - otherwise you are out. Maybe a little sour grapes here at play as well?

Rene Anand

Chief Executive Officer and Founder of Neurxstem Inc.

1y

Cool - go with NI (natural intelligence) not AI if you don't want to become another bot controlled by the big Corp bosses with a chip implanted in your brain!

Hirdey Bhathal Ph.D.

CEO/Founder at ZibdyHealth

1y

Sergei, healthcare regulation makes it extremely hard for anyone without deep pockets to gain traction. Large conglomerates have deep pockets, and they are using their tech-company label to claim "AI expert" status and get a foot inside healthcare. These tech companies can't really consolidate healthcare data, clean the >25% error rate in EHRs, or remove racial bias, and yet they think they can use those datasets for AI training. It is a land grab for later years.

Sergei Polevikov, ABD, MBA, MS, MA

Author of 'Advancing AI in Healthcare' | Healthcare AI Fraud Investigator

1y

Insurers cheat patients out of life-saving care - by Wendell Potter’s Healthcare Un-covered: https://open.substack.com/pub/wendellpotter/p/thanks-to-reporters-at-propublica

Narayanachar Murali

Gastroenterology/ GI Endoscopy / Hepatology / Clinical trials / New drug development/ New device development

1y

Why are they doing this?... 1) Because they can, and there is no accountability. 2) Careless VCs who fund nonsensical projects hoping for unicorns. 3) Equally careless LPs who send money to VCs to create unicorns. It is a big scam out there.
