These 14 Health AI Companies Have Been Lying About What Their AI Can Do (Part 1 of 2)

For the first time in history, a health AI company has been slammed with a lawsuit over "deceptive claims" about its purported 99.999% model accuracy.


In this two-part article, I’m investigating the world of health AI companies that lied to clinicians and patients about their AI models' accuracy—or worse—companies that have outright stolen someone else’s AI.

This investigative report comes with three key services:

1. I’m going to assist state offices of attorneys general (OAGs), offices of inspectors general (OIGs), and other regulators across the country by exposing AI companies that make false claims about the accuracy of their health AI products.

2. I’ll break down, statistically, why you should never trust claims of 100% or even 90% accuracy from an AI model, at least not without serious skepticism (see the sketch after this list).

3. I’ll arm you with six key questions to ask if someone tries to sell you on “90% AI accuracy.”
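As a preview of the statistical argument, here is a minimal Python sketch of my own, with illustrative numbers rather than data from any specific vendor. It shows two reasons a headline accuracy figure can mislead: under low disease prevalence, a clinically useless model can still score 99% accuracy, and a “perfect” score on a small test set is fully compatible with a substantial true error rate.

```python
# Minimal sketch: two statistical reasons to distrust headline accuracy
# claims in health AI. All numbers here are illustrative, not vendor data.

# --- 1) The accuracy paradox under class imbalance ---------------------
# Suppose a disease has 1% prevalence. A "model" that always predicts
# "healthy" scores 99% accuracy while detecting zero sick patients.
prevalence = 0.01
n_patients = 100_000
n_sick = int(n_patients * prevalence)
n_healthy = n_patients - n_sick

true_negatives = n_healthy   # every healthy person labeled correctly
false_negatives = n_sick     # every sick patient missed

accuracy = true_negatives / n_patients
sensitivity = 0.0            # 0 of the sick patients detected

print(f"Accuracy:    {accuracy:.1%}")     # 99.0% -- looks impressive
print(f"Sensitivity: {sensitivity:.1%}")  # 0.0%  -- clinically useless

# --- 2) A "perfect" score on a small test set --------------------------
# Zero observed errors in n cases does not mean a zero error rate.
# The "rule of three" puts the approximate 95% upper bound at 3/n.
for n in (30, 100, 1000):
    print(f"0 errors in {n:>5} cases -> true error rate may be ~{3 / n:.1%}")
```

The takeaway: an accuracy number is meaningless without the disease prevalence, the metric actually reported (sensitivity, specificity, PPV), and the size and provenance of the test set. That is exactly what the six key questions are designed to surface.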


As always, I’m open to strong feedback—something along the lines of: “You jerk, you don’t know what you’re talking about. Here’s the empirical proof. Here’s the statistically validated, peer-reviewed research. Here’s the data.” If I’m wrong, I’m man enough to apologize. It’s happened before. I called out a company, and the CEO reached out—not to call me a jerk, though maybe he wanted to—but to provide research that defended their claims. While I wasn’t 100% convinced, I still apologized. We’re all human, and part of the same community, so respect and courtesy are non-negotiable in my book.


That said, if history is any guide, I tend to be spot-on in my analysis. I dedicate a lot of time to investigation, extraction, and fact-checking.

But before we dive into this critical topic, a few housekeeping items...


If you're on Twitter/X and are free tonight, Wednesday, October 2, 2024, at 8 PM ET, I'll be debating Sanat Dixit MD, MBA, FACS on "Will healthcare AI replace physicians?" You can join here: https://x.com/i/spaces/1gqxvNBbmmexB or via my Twitter/X account @AIHealthUncut.

I have to say, this past week after my article dropped was one of the wildest yet. "Selling The New o1 ‘Strawberry’ to Healthcare Is a Crime, No Matter What OpenAI Says" went viral—reprints, restacks, reposts, and fiery debates everywhere. Let me just be clear: I still stand by my core argument that OpenAI misrepresented the o1 model’s abilities in medical diagnostics. That said, I respect all the opinions being thrown around, and there are fine people on both sides. (It sounded funnier in my head, but now I’m not so sure.)


I also want to give a shoutout to those who cited my article and sparked some great discussions, regardless of whether they agreed with me or not.


I’m honored to have been invited to co-author this article with the esteemed expert and star author Devansh for his renowned Substack publication, Artificial Intelligence Made Simple. This publication covers everything you need to know about the hottest AI topics and the complexities of machine learning models.

The article also went viral, prompting Devansh to write a follow-up where he dove even deeper into OpenAI’s apparent sloppiness in applying their newest o1 Strawberry model to medical diagnostics.

In a brilliant article that cites both my work and Devansh’s, James Wang argues that "connectionist reasoning" approaches like LLMs may not be the right path for fields like law and medicine, where being “approximately right” just doesn’t cut it.

Another fascinating perspective comes from Jurgen Gravestein in his piece “The AI Bubble.” He discusses AI expectations versus AI reality, with the OpenAI o1 Strawberry blunder being a prime example, and the post-bubble realism emerging in its wake.

Dr Terence Tan, an AWS “healthcare wrangler,” wrote a thoughtful LinkedIn post summarizing my joint article with Devansh. His TL;DR sparked a lot of provocative discussions.

Yudara Kularathne MD, FAMS(EM), an ER physician and CEO of HeHealth, highlighted the significance of these discussions in a provocative LinkedIn post about AI’s role in healthcare.

David Talby, CTO of John Snow Labs, a health-focused AI company, summarized my article in a LinkedIn post that drew a lot of attention.

Pramodith B., an AI engineer, wrote a compelling LinkedIn post about my article, emphasizing that, just like with humans, no matter how convincing or seemingly rational an AI’s chain of thought (CoT) may be, it can still be wrong.

Thank you, Devansh, James Wang, Dr Terence Tan, Yudara Kularathne MD, FAMS(EM), David Talby, Pramodith B., and many others for supporting and contributing to this crucial area of AI development in medicine.


Alright, enough with the pleasantries. Let’s cut to the chase…


Continue reading at sergeiAI.substack.com...

Robert Lienhard

Global Lead SAP Talent Attraction | Passionate about the human-centric approach in AI and Industry 5.0 | Servant Leadership & Emotional Intelligence Advocate | Convinced Humanist & Libertarian

1 month ago

Sergei, it's commendable that you're addressing such a crucial issue in the health AI industry. Misleading claims about AI accuracy not only risk patient safety but erode trust in the technology. Your investigation into exposing companies that exaggerate their model's performance is vital, especially in a field where accuracy directly impacts lives. Providing regulators with the tools to hold these companies accountable is a significant step toward ensuring integrity in healthcare AI. Your breakdown of why AI accuracy claims should be questioned is invaluable. Many are unaware of how to critically assess these numbers, and your six key questions will empower them to do so. It’s great to see your openness to feedback, which shows a commitment to constructive dialogue and maintaining the high standards our field requires. Thank you for your work in keeping AI in healthcare honest.

Mandy Gao - PMP, ACP

Mental Health | Gaming | Gamification | HealthTech | Diversity Inclusion | Mother | ex Macquarie

1 month ago
Dr Terence Tan

Physician Defector | J-Apac Head of Healthcare & Lifesciences @ AWS | Healthcare wrangler

1 month ago

Thanks for the shoutout, Sergei Polevikov, ABD, MBA, MS, MA, and really everyone who participated in a constructive and really helpful conversation!

Hung (Leo) Chan

Investor and finance professor who is passionate about AI, Machine Learning, and futurism. All posts I made here represent my personal view, not the view of my past, current, or future employer.

1 month ago

The models are only as good as the data they’re trained on. I would be more cautious about hospitals’ skepticism.

Wen Profiri

AI Tooling Medicare: Dementia | Caregiver Support | Health Workers Training

1 month ago

This morning I saw the news of an AI hospital in Beijing claiming its 14 AI doctors and 4 AI nurses are seeing 10,000 patients in a few days. My friend told me about this a few months ago, and I said Andrew Ong has been leading the Agentic AI hospital projects in Singapore for a few years now. Maybe it takes a long time to develop the empirical evidence to prove that at least one agent could work? On a different note, in my own experience of continuously improving the 900+ AI healthcare job roles, I found some user-experience secret sauce in prompting for some tasks. I have applied for 5 NIH grants and 2 Stanford grants so far, none of them about job copiloting. They are about using social media platforms with AI for patient and family caregiver education and community health training. I often see inconsistency in the output no matter how I restrict memory or fine-tune prompts. Unless you are already a senior expert in the specialty domain, you may not notice the difference. But it makes a big difference when dealing with real jobs.
