The Risks of False AI Software in Healthcare: A Call for Rigorous Standards and Validations



1. The Role of AI in Healthcare and the Emergence of “False AI”

I started this article in the first week of October 2024. A week later, the Royal Swedish Academy of Sciences (the body in charge of naming the Nobel laureates) awarded the 2024 Nobel Prize in Physics to the researchers who laid the foundations for “machines that learn,” a key aspect of the development of artificial intelligence. Professors John Hopfield and Geoffrey Hinton were recognized by the Royal Swedish Academy of Sciences for their "discoveries and inventions" that have been "fundamental for machine learning with artificial neural networks"[1].

The arrival of this type of technology, made publicly available in 2022 with the first version of ChatGPT (although the use of predictive software in medicine goes back to the 1960s), has transformed several industries, among them the health sector, including hospital management, drug discovery, and integration into medical devices, among others.

Artificial Intelligence (AI) is rapidly transforming healthcare businesses by providing innovative solutions across diagnostics, treatment recommendations, workflow management, and drug discovery. The technology has made significant strides, with AI-powered systems based on machine learning (ML) and deep learning (DL) offering the potential to analyze large datasets, detect patterns, and deliver precise results in real time, something traditional methods struggle to achieve. However, as with any emerging technology, the sudden proliferation of AI has led to exaggerated claims and the rise of “false AI” software: systems that either falsely claim to use AI or fail to perform as advertised.

We have all seen examples on social networks of the exaggerated promotion of online programs, software, and devices that claim to contain artificial intelligence: from photo manipulators for LinkedIn (yes, they promise a better version of you in seconds) to 4-lead electrocardiographs that, with great certainty, claim to cover most of the pathologies of acute and chronic cardiology.

Recently, I attended a very high-level meeting at the Ministry of Health of Costa Rica, in which a so-called "AI expert" brought medical equipment that had nothing to do with technology for predicting any type of pathology; it was just a transilluminator vein finder. Unfortunately, no one could refute the claim because very few doctors in the audience knew the principles of artificial intelligence. This is why I prepared this document: to show, as I tell my students, that "not everything that glitters in AI is gold."

The risks associated with false AI software in healthcare are profound. Misleading claims not only endanger patient safety and expose medical personnel to potential malpractice lawsuits, but also undermine the credibility of AI in the medical community, slowing the adoption of truly transformative AI solutions.

As mentioned before, in this paper, I will examine some of the key risks posed by false AI software, provide examples of both failed and successful technologies, and propose best practices to avoid the dangers of misleading AI applications.

In LATAM, governments must impose stricter regulatory oversight and rigorous validation requirements on companies that promote products using AI as part of their competitive advantage.


The Risks of False AI Software in Healthcare

At the core of any healthcare service is the ethical imperative of the Hippocratic Oath to “do no harm”[2]. Everything in healthcare must involve minimizing risks, providing full information, and respecting patients' autonomy in decision-making processes.

The incorporation of AI into the healthcare system has been a hot topic and has captured considerable attention due to the recent advancements and implementations of this type of technology. As I have written in previous papers, AI has been used in this industry since the 1970s, when AI applications were first applied to biomedical problems[3].

Today, AI is primarily utilized to increase speed and accuracy in the healthcare realm. Broadly, AI is the ability of a computer or computer-controlled robot to perform tasks commonly associated with the intellectual processes characteristic of humans, such as the ability to reason[4]. AI can perform highly analytical, responsive, and scalable tasks automatically and efficiently. Autonomous artificial intelligence is the advancement of this technology, in which an autonomous system executes various actions to produce an expected outcome without further human intervention[5].

For the purposes of this discussion (and the rest of the papers I have published on my LinkedIn page), we define AI as the ability of a computational system or medical device to process large volumes of healthcare data, uncover hidden insights, assess risks, and improve communication. This includes AI technologies such as machine learning and natural language processing. Machine learning (ML) allows computers to process both labeled (supervised learning) and unlabeled (unsupervised learning) data to discover underlying patterns or predict outcomes without needing specific programming instructions[6]. Among the various AI technologies, machine learning and natural language processing are particularly impactful in the healthcare sector, playing a significant role in improving clinical outcomes and operational efficiency; we will see some examples of this technology later in this article.
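To make the supervised/unsupervised distinction concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The same synthetic “patient” table is used once with outcome labels (supervised learning) and once without them (unsupervised clustering); the data, column meanings, and model choices are illustrative assumptions, not any real product or dataset.

```python
# Minimal sketch: supervised vs. unsupervised learning on the same data.
# Assumes scikit-learn and numpy are installed; all data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic "patients": two lab values per patient (illustrative only).
X = rng.normal(size=(200, 2))
# Synthetic outcome label, e.g. "disease present" (illustrative only).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Supervised learning: the model is shown labeled examples and learns
# to predict the label for new, unseen patients.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("Supervised accuracy on held-out data:", clf.score(X_test, y_test))

# Unsupervised learning: no labels are given; the algorithm looks for
# structure (here, two clusters) in the data on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes found without labels:", np.bincount(clusters))
```

The point is simply that a learning system derives its decision rule from data, rather than having it written in advance by a programmer.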

AI systems, when not properly validated or when falsely advertised, can lead to significant risks to patient safety. One of the major concerns is that AI models may be trained on biased, incomplete, or poor-quality datasets, leading to inaccurate predictions. The consequences of poor data are not merely theoretical; they have manifested in well-publicized failures. For example, Microsoft's AI chatbot Tay became infamous for posting offensive remarks on social media due to the poor-quality data it learned from. Similarly, Amazon had to retract its AI-based recruiting tool because it exhibited bias against female candidates, as it was primarily trained on resumes from a male-dominated applicant pool[7].

In one case, a well-publicized machine learning algorithm that was intended to predict patient outcomes in critical care settings was later found to underperform significantly, particularly for minority populations[8].
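One practical way to surface the kind of subgroup underperformance reported in that study is to evaluate a model's accuracy stratified by population group instead of relying on a single overall figure. The sketch below is a generic, hypothetical illustration on synthetic data; the group labels and the way the disparity is induced are assumptions for demonstration only, not a reproduction of the cited work.

```python
# Sketch: evaluate a model separately for each population subgroup,
# so a high overall accuracy cannot hide poor minority-group performance.
# Synthetic data only; the group names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["group_A", "group_B"], size=n, p=[0.8, 0.2])
X = rng.normal(size=(n, 3))
# The outcome depends on the features differently in the smaller group,
# which a model trained mostly on group_A examples can easily miss.
signal = np.where(group == "group_A", X[:, 0], -X[:, 0])
y = (signal + rng.normal(scale=0.7, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

print("Overall accuracy:", round(accuracy_score(y, pred), 3))
for g in ["group_A", "group_B"]:
    mask = group == g
    print(f"Accuracy for {g}:", round(accuracy_score(y[mask], pred[mask]), 3))
```

Reporting the stratified numbers makes the disparity visible before the tool reaches patients, rather than after.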

This paper presents cases of software designed to help institutions and patients with complex healthcare needs, so that the reader can analyze them and form a critical opinion about the importance of seeking reliable sources on AI topics.

We must all have the basic knowledge to detect false claims of AI in devices or applications where traditional statistical models are simply repackaged and sold as “AI” (like the so-called expert at the meeting I described earlier). These tools, while potentially useful, do not provide the same level of pattern recognition or adaptability that true machine learning algorithms offer, which can create a false sense of accuracy and dependability. This is especially dangerous in areas such as imaging or predictive analytics, where AI's potential impact on real-time clinical decisions is high.

The FDA has been reviewing AI/ML-enabled medical devices for over 25 years; it approved the first medical device incorporating AI/ML technology in 1995 (the year I became a Medical Doctor). To date, the FDA has authorized more than 600 AI/ML-enabled medical devices. Most of these devices are in radiology and clinical image analysis, but there are also authorized devices in gastroenterology, ophthalmology, dermatology, cardiovascular disease, oncology, and neurology[9].


Regulatory Risks

The regulatory landscape for AI in healthcare is still evolving, and this presents an additional risk. Current regulatory frameworks (e.g., FDA, EMA) are designed to evaluate traditional medical devices and software, but AI-driven tools require unique considerations. These include algorithm transparency, performance in different patient populations, and the ability to adapt over time as more data is integrated into the system.

The FDA emphasizes the need for a regulatory framework based on solid principles, best practices, and advanced regulatory tools. This framework should be adaptable for different medical products while ensuring AI is used safely and ethically across healthcare applications. One key risk is the proliferation of AI tools that have not undergone thorough regulatory scrutiny. Some companies rush to market, claiming to use AI without the appropriate oversight, capitalizing on the AI “buzz” to attract investment or adoption[10].

Continuous post-deployment monitoring of advanced AI systems is crucial for maintaining their performance, reliability, and trustworthiness, and it allows early detection of unexpected biases or ethical issues that might emerge when the model interacts with diverse real-world scenarios. Furthermore, continuous monitoring is vital for detecting potential security threats, such as adversarial attacks, and for ensuring the system's resilience against them[11].

While comprehensive AI-specific regulations are still under development in most of LATAM, countries like Brazil, Mexico, Chile, and Colombia are taking significant steps toward creating frameworks for responsible AI use. These initiatives focus on balancing innovation with ethical considerations such as data protection, bias, and transparency. Unfortunately, I have no information on regulations specific to healthcare, patient protection, or other related areas.


Characteristics of “False AI”

The rapid expansion of artificial intelligence within healthcare has led to the rise of “AI-washing”[12], a term used to describe the practice of labeling products or services as AI-powered without a substantive basis. In these cases, vendors may use the AI label to increase marketability or secure investments, without demonstrating that their technology employs true machine learning or AI techniques.

"Fake AI" refers to systems that mimic intelligence using predefined rules or simple models without true learning or adaptability, as seen in some chatbots and virtual assistants. These systems lack the core capabilities of "real" AI, which includes independent learning and the ability to continuously adapt to new environments and data[13].

The debate over what constitutes "real AI" is ongoing, with no universally accepted definition of artificial intelligence. While some argue true AI must learn and adapt, others emphasize intelligent decision-making. Additionally, "AI-washing," where companies overstate the AI capabilities of their products, contributes to confusion and risks inflating expectations, leading to disappointment and loss of trust.
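The contrast between a rule-based system marketed as AI and a model that actually learns from data can be shown in a few lines of Python. The sketch below is deliberately simplified and hypothetical: a hard-coded triage rule next to a classifier whose decision boundary is fitted from synthetic examples; neither represents any real medical product.

```python
# Sketch: a hard-coded rule (no learning) vs. a model fitted from data.
# Both take the same inputs; only the second adapts when the data changes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rule_based_triage(temperature_c: float, heart_rate: float) -> str:
    """'Fake AI': a fixed if/else rule written by a programmer.
    It never changes, no matter how many cases it sees."""
    if temperature_c > 38.0 and heart_rate > 100:
        return "urgent"
    return "routine"

# A learning system: the decision rule is estimated from labeled examples
# (synthetic data, illustrative only) and can be re-fit as new data arrives.
rng = np.random.default_rng(1)
temps = rng.normal(37.2, 1.0, size=300)
hrs = rng.normal(85, 20, size=300)
X = np.column_stack([temps, hrs])
y = ((temps > 38.0) & (hrs > 95) | (hrs > 130)).astype(int)  # synthetic labels

learned_model = DecisionTreeClassifier(max_depth=3).fit(X, y)

case = np.array([[38.4, 98.0]])
print("Rule-based output:", rule_based_triage(38.4, 98.0))
print("Learned-model output:", "urgent" if learned_model.predict(case)[0] else "routine")
```

Only the second system changes its behavior when retrained on new data, which is precisely the capability that “fake AI” lacks.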

Generative AI, like ChatGPT, which creates new content from learned patterns, is considered an advanced form of AI, but focusing solely on generative abilities would be too limiting. Real AI encompasses a broader spectrum, including systems that can adapt, solve complex problems, and improve continuously. A comprehensive understanding of AI should account for its diverse approaches and capabilities.


Babylon Health, a UK-based digital health company, faced significant scrutiny after overstating the capabilities of its AI-powered chatbot, which was promoted as being on par with human doctors in diagnosing patients. Investigations revealed that the chatbot often misdiagnosed common conditions and provided inaccurate medical advice and, despite its claimed sophistication, was not able to accurately diagnose complex medical conditions[14]. The case highlights the risks of AI-washing in the industry, where overhyping AI technologies can lead to patient safety issues and erode trust in digital health innovations.

For future healthcare environments, this case serves as a cautionary tale, reinforcing the need for transparency, validation, and rigorous testing before introducing AI tools into clinical practice. Long-term, such failures could slow the adoption of genuinely transformative AI technologies due to skepticism and loss of confidence among healthcare professionals and patients.[15]


Lack of Peer-reviewed Validation

In healthcare, peer-reviewed validation is fundamental to the success and trustworthiness of any technology. True AI solutions in this sector must undergo the same rigorous standards as other traditional medical research products. This includes comprehensive clinical trials that evaluate the system’s efficacy across a wide range of populations, medical conditions, and clinical environments.

Several opportunities have been identified for AI in areas where return on investment might not otherwise support profitability, such as targeted therapies and rare diseases[16]. In addition, anticipated gains in the efficiency of patient recruitment and protocol design are expected to improve the chances of trial success, while AI-driven patient monitoring and analysis may positively impact the measurement and interpretation of results[17].


The absence of peer-reviewed validation should raise a red flag for any healthcare AI system. Many so-called AI tools marketed as advanced solutions fail to demonstrate validation through published studies or real-world trials. This is problematic, as clinical settings require technologies that have been proven to function safely and effectively under diverse conditions. For example, diagnostic AI systems, particularly those used for imaging or predictive diagnostics, must be able to detect diseases accurately across various populations, disease severities, and imaging modalities. Without these validations, such systems are highly susceptible to failure in practical scenarios, leading to misdiagnoses and compromising patient care.

In contrast, successful AI systems in healthcare, such as the Israeli company AIDOC (www.aidoc.com), which provides solutions for hospital management, finance, clinical care, and operations, serve as exemplary models of peer-reviewed validation. One of its products, which detects intracranial hemorrhages, had to undergo rigorous clinical testing before receiving FDA clearance. Across multiple clinical trials, AIDOC's technology demonstrated high sensitivity and specificity on the crucial endpoints of the studies. These trials were followed by peer-reviewed publications, further solidifying AIDOC's credibility and efficacy in clinical use[18].
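For readers less familiar with those endpoints, sensitivity and specificity can be computed directly from a confusion matrix of model predictions against a reference standard. The counts in the sketch below are invented purely for illustration and do not correspond to AIDOC's published results.

```python
# Sketch: computing sensitivity and specificity from predictions.
# The counts here are invented for illustration; they are not AIDOC's data.
from sklearn.metrics import confusion_matrix

# 1 = hemorrhage present, 0 = absent (hypothetical reference standard)
y_true = [1] * 90 + [0] * 910
# Hypothetical AI reads: catches 84 of the 90 positives, flags 27 false alarms
y_pred = [1] * 84 + [0] * 6 + [1] * 27 + [0] * 883

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate: how many real bleeds are caught
specificity = tn / (tn + fp)   # true negative rate: how many normal scans are cleared

print(f"Sensitivity: {sensitivity:.2%}")  # ~93% in this made-up example
print(f"Specificity: {specificity:.2%}")  # ~97% in this made-up example
```

Peer-reviewed studies report exactly these kinds of metrics, together with confidence intervals and the populations on which they were measured.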

Healthcare providers all over the world need to be assured that the AI tools they rely on are backed by strong statistical methods and have gone through rigorous peer review to confirm their safety and accuracy. Ultimately, AI systems that lack this essential peer-reviewed validation are more likely to fail in real-world healthcare applications, leading to potential harm to patients and a loss of trust in AI technologies. Therefore, rigorous peer-reviewed validation is non-negotiable for the successful integration of AI into healthcare practice.


Black-Box Models with No Explainability

One of the biggest challenges with AI in healthcare is the reliance on “black-box” models (algorithms whose decision-making processes are opaque to users), because their internal workings are undisclosed and only the inputs and outputs are known.


A true AI system, especially in healthcare, should offer a degree of explainability, allowing physicians to understand the rationale behind a diagnosis or recommendation. Medical AI systems have three features:

a) The capacity for self-learning: Deep learning systems can deal with large amounts of data and develop the capacity for self-learning. Their algorithms enable them to produce the desired output based on the input data[19].

b) High predictive accuracy in diagnosis: Medical AI systems require large amounts of data to develop their capacity for self-learning, and this data is separated into training and testing sets. After being trained, medical AI systems show extremely high accuracy in testing; in some tasks they have even surpassed human experts in identifying diseases[20].

c) Unexplainable diagnoses and treatment suggestions: Medical AI systems are “black boxes,” meaning that people do not understand their internal working mechanisms. Medical AI systems can, for example, predict tumor response to a particular drug based on allelic patterns among thousands of genes, or predict lung cancer prognosis by analyzing microscopic images, all without understanding or identifying why or how those patterns matter[21] (see the sketch after this list).
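As a concrete illustration of how a black box can be at least partially probed, the sketch below applies permutation importance, which measures how much a model's performance drops when each input feature is shuffled. It is a generic, hypothetical example on synthetic data and is not tied to any specific medical AI product.

```python
# Sketch: probing an otherwise opaque model with permutation importance.
# Features and data are synthetic; in practice these would be clinical variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 800
age = rng.normal(60, 12, n)
biomarker = rng.normal(1.0, 0.3, n)
noise_var = rng.normal(0, 1, n)          # deliberately uninformative feature
X = np.column_stack([age, biomarker, noise_var])
feature_names = ["age", "biomarker", "noise_var"]
# Synthetic outcome driven mostly by the biomarker (illustrative only).
y = (2.5 * (biomarker - 1.0) + 0.02 * (age - 60) + rng.normal(0, 0.5, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and see how much the test score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(black_box, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>10}: importance {score:.3f}")
```

Techniques like this do not fully open the black box, but they give clinicians some visibility into which inputs drive a model's output.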


Collaborative AI Development

In the complex evolution of AI in healthcare, collaboration between physicians and developers is crucial for creating effective, clinically relevant tools. Physicians bring essential clinical expertise that bridges the gap between theoretical AI models and practical, real-world healthcare applications, and they play a pivotal role throughout the AI development lifecycle.

Doctors bring to the project the medical knowledge necessary to validate the unmet medical need, and systems engineers face the challenge of translating that need into functional algorithms without bias. The great challenge is ensuring that both can understand each other within such a complex project. The two professions tend to speak very specific technical languages, and it is difficult to take doctors out of their clinical world and place them in an office full of engineers.

A shared understanding of disease pathology, patient care processes, and medical workflows ensures that AI tools are designed to meet real clinical needs rather than theoretical problems. This input is crucial in defining appropriate use cases, ensuring that AI systems address clinically significant issues such as early diagnosis, treatment recommendations, or workflow optimization[22].

Moreover, healthcare professionals are essential during the data collection and curation phases. As mentioned before, AI models depend on high-quality, representative datasets to function effectively. Physicians help ensure that the data used for training AI systems is both comprehensive and clinically relevant, reducing the risk of bias or inaccuracies. Their expertise is also vital in the interpretation of AI outputs, helping developers fine-tune algorithms to provide actionable insights rather than vague or ambiguous predictions.

Physicians can improve AI software by providing feedback on clinical validity. Doctors can test AI prototypes in clinical settings, offering feedback on performance, accuracy, and usability. This iterative feedback loop allows developers to refine algorithms to align more closely with real-world healthcare scenarios.

Doctors can also shape decision support systems, offering insights so that the software works properly without overwhelming users with unnecessary alerts or excessive data in the output report.

This contribution ensures that AI applications adhere to ethical standards, particularly regarding patient privacy, data security, and bias mitigation. Physicians help address potential ethical challenges that may arise from AI-driven decisions, ensuring that patient safety remains a priority[23].

Collaboration between physicians and developers results in AI systems that are not only technically advanced but also clinically useful, safe, and effective. Their input shapes AI technologies that fit seamlessly into medical practice, enhancing patient care.


Case Studies of Successful AI Implementation

One of the things I enjoy most about teaching AI in medicine to my medical peers is demonstrating the most interesting cases of medical devices that have changed the lives of doctors and patients. These cases exemplify how AI can be applied responsibly and effectively in clinical environments, providing tangible benefits to both healthcare providers and patients. I must clarify that there are many more interesting cases, and that this article does not have any type of sponsorship.


1. Radiology: TEMPUS Imaging Analysis

Tempus offers a variety of programs, from an AI clinical data assistant to sequencing, biology modeling, and clinical trial enrollment. Tempus was founded in August 2015 by Eric Lefkofsky after his wife was diagnosed with breast cancer; shortly afterwards he created the company to bring artificial intelligence and medicine together in cancer research. He convinced Ryan Fukushima to join as the company's first employee, and together they began assembling a world-class team focused on building the first version of a platform capable of ingesting real-time healthcare data to personalize diagnostics[24].

TEMPUS Radiology (formerly Arterys) is known for integrating AI in imaging, particularly for analyzing cardiovascular MRIs. Its FDA-approved system automates the process of measuring blood flow in cardiac scans, a task that typically takes hours for radiologists. TEMPUS's AI performs this task in just a few minutes with the same level of accuracy as a trained radiologist. The tool's speed and precision have proven valuable, especially in overburdened radiology departments where time is of the essence.

Tempus has transformed drug screening by utilizing advanced AI-driven imaging technologies that provide detailed, non-destructive insights from patient-derived organoids. These non-invasive techniques enable the continuous study of the same organoids across multiple assays, preserving their viability for longitudinal analysis and multiplexed testing, as highlighted in this case study[25].

According to the company, Tempus AI's imaging capabilities have significantly advanced the field of drug screening, offering a non-toxic, scalable, and precise method for evaluating candidate therapies. By leveraging the power of AI, Tempus AI drives innovation, delivering critical insights that impact drug discovery and precision medicine.


2. Drug Discovery: BenevolentAI and Insilico Medicine

AI is also making significant strides in drug discovery, where it is used to sift through massive datasets, identify potential drug candidates, and predict how these compounds might interact with biological systems.

BenevolentAI is expanding collaborations to validate new drugs by using its end-to-end drug discovery offerings.


Figure 1. End-to-end drug discovery offerings[26]

This company has worked on cases with AstraZeneca and Merck KGaA to help them unlock biological insights and tackle complex therapeutic challenges. You can see the specifics of each case on the page https://www.benevolent.com

3. Hospital Optimization: Qventus

The third example I want to show you is Qventus, a company working on enhancing operational efficiency for hospitals. Qventus is a prime example of an AI tool designed to optimize clinical workflows by predicting and preventing bottlenecks in patient care. By analyzing historical data, Qventus predicts when emergency rooms or intensive care units are likely to experience a surge in patient volume and provides recommendations on how to allocate resources efficiently. Some of the clients of this software are Saint Luke's Hospital, Boston Medical Center, Jackson Health System, and UVA Health.


Their products provide hospital solutions for inpatient care, resource optimization, and the client perspective, and their webpage indicates that after using their services, clients' ROI increased between 6X and 27X. We all know that operational inefficiencies create massive problems in healthcare, from unnecessarily extended hospitalizations that result in excess days costing millions of dollars annually. Their inpatient software was created to hardwire discharge planning best practices. It uses AI, ML, and behavioral science to reduce variability in hospital processes. The solution can predict discharge days for each patient, eliminating unnecessary barriers that prevent discharge goals. One of the final steps is the presentation of statistical data so that decision-makers can analyze patient flow and create better, more efficient ways to treat patients in the hospital[27].


Ethical Considerations and AI Governance

As AI becomes more integrated into healthcare, ethical considerations must be at the forefront of its development and deployment. The medical field is bound by strict ethical standards that prioritize patient safety, confidentiality, and equitable access to care. The same standards must apply to AI technologies, which, if not properly governed, could exacerbate existing inequities and pose new ethical dilemmas.

Data Privacy and Security

One of the most pressing ethical concerns in AI is the protection of patient data. AI systems often require large amounts of data to function effectively, and in healthcare this data includes highly sensitive personal health information (PHI). Ensuring that AI systems comply with data privacy regulations, such as HIPAA in the United States and GDPR in Europe, is crucial. Attackers may exploit weaknesses in asset management processes to gain unauthorized access to sensitive AI-related assets, such as models or datasets, leading to data breaches, intellectual property theft, or compromise of system integrity[28].

The use of de-identified or anonymized data is one potential solution to safeguard patient privacy, but it is not without its challenges. Even de-identified data can sometimes be re-identified if enough secondary data is available.
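As a very rough illustration of the idea, the sketch below drops direct identifiers and replaces the patient ID with a salted one-way hash before a tabular extract is shared for model training. The column names and salt are hypothetical, and this is not a complete HIPAA or GDPR de-identification procedure; real workflows must also address dates, geography, free text, and re-identification risk.

```python
# Sketch: naive de-identification of a tabular extract before sharing.
# Column names and the salt are illustrative assumptions; this is NOT a
# complete HIPAA/GDPR de-identification procedure.
import hashlib
import pandas as pd

SALT = "replace-with-a-secret-salt"  # hypothetical; keep out of source control

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

raw = pd.DataFrame({
    "patient_id": ["A001", "A002"],
    "name": ["Jane Doe", "John Roe"],               # direct identifier -> drop
    "date_of_birth": ["1961-02-03", "1974-09-21"],  # keep only derived age
    "lab_value": [1.8, 0.9],
})

deidentified = (
    raw.assign(pseudo_id=raw["patient_id"].map(pseudonymize),
               age=2024 - pd.to_datetime(raw["date_of_birth"]).dt.year)
       .drop(columns=["patient_id", "name", "date_of_birth"])
)
print(deidentified)
```

Even with such steps, quasi-identifiers like age, sex, and rare diagnoses can still allow re-identification, which is why governance and access controls remain necessary.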

More data governance is needed, especially in major LATAM research centers in Mexico, Colombia, Argentina, and Costa Rica, to ensure the integrity, security, and accessibility of data throughout its lifecycle. In the context of clinical research, data governance is essential for ensuring that data is collected, managed, and analyzed in accordance with regulatory requirements, ethical standards, and best practices. Without effective data governance, clinical research data may be compromised by errors, inconsistencies, or breaches, leading to delays, cost overruns, and even patient harm.


Accountability and Transparency

As AI systems become increasingly integrated into healthcare, establishing clear accountability and transparency standards is essential to maintain trust and ensure patient safety. Accountability ensures that all stakeholders—developers, clinicians, healthcare organizations, and regulators—understand and take responsibility for their roles in AI development, deployment, and outcomes. Transparency, meanwhile, facilitates understanding of AI systems by making them interpretable and accessible to those who use them, from physicians to patients.

One of the central issues in AI accountability is clarifying who is responsible for AI-driven outcomes, especially when these systems assist in diagnosis, treatment planning, or clinical decision-making[29]. Responsibilities should be clearly distributed among AI developers, healthcare providers, and the institutions implementing the technology. For example, developers must ensure that AI algorithms are thoroughly tested and validated before deployment, and healthcare providers need to understand the system's capabilities and limitations in order to use it safely.

Institutions, on the other hand, hold the responsibility of providing proper training and ensuring ethical use within clinical workflows. For example, in cases of diagnostic error, it is crucial to determine whether the AI system’s training data or clinical interpretation was at fault[30]. Clear guidelines are necessary to allocate responsibility appropriately, as AI accountability without structure may disincentivize proper usage, stifle innovation, or misplace blame.

The accountability of AI systems does not end with deployment; continuous monitoring is essential for identifying and addressing performance issues that may arise in clinical use. This ongoing assessment helps detect any drift from expected performance, often due to changing patient demographics or evolving clinical practices, and ensures that the model's accuracy remains aligned with clinical standards. To achieve this, healthcare organizations and developers need to collaborate on mechanisms for performance tracking, error reporting, and updating algorithms when necessary.
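A minimal form of such monitoring is to track a performance metric over successive batches of real-world cases and raise an alert when a rolling average falls below a pre-agreed threshold. The sketch below is a hypothetical illustration; the threshold, window size, and alerting mechanism are assumptions that would need to be defined with the clinical team.

```python
# Sketch: post-deployment performance monitoring with a simple alert rule.
# Threshold and window size are illustrative assumptions, not clinical guidance.
from collections import deque
from statistics import mean

ALERT_THRESHOLD = 0.85   # hypothetical minimum acceptable rolling accuracy
WINDOW = 5               # number of recent batches to average

recent_scores = deque(maxlen=WINDOW)

def record_batch(predictions, ground_truth):
    """Store the accuracy of one batch of cases and check for drift."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    recent_scores.append(accuracy)
    rolling = mean(recent_scores)
    if rolling < ALERT_THRESHOLD:
        print(f"ALERT: rolling accuracy {rolling:.2%} below threshold "
              f"{ALERT_THRESHOLD:.0%}; trigger review of model and data.")
    else:
        print(f"OK: rolling accuracy {rolling:.2%}")

# Hypothetical weekly batches (1 = positive label, 0 = negative, for brevity)
record_batch([1, 1, 1, 0, 1, 1, 1, 1, 1, 1], [1] * 10)   # 90% -> OK
record_batch([1, 0, 1, 0, 1, 1, 0, 1, 1, 0], [1] * 10)   # 60% -> rolling drops, alert
```

In practice the metric, the comparison against delayed ground truth, and the escalation path would all be agreed with clinicians and documented as part of the deployment plan.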

According to an article in the California Management Review, there should be an established process to make sure that AI satisfies certain standards. The authors offer a very interesting analogy: just as we require cars to pass a safety inspection before they can be driven on public streets, we need AI companies to create filters or layers of security steps to prevent harmful results. This way, everyone involved knows exactly where things stand concerning regulations and what must be addressed further[31].

Transparency in data handling also involves clearly explaining to patients when and how AI is being used in their care[32]. For example, if an AI tool is involved in their diagnosis or treatment planning, patients should be informed and assured of the steps taken to protect their privacy. This builds trust in AI technologies and aligns with ethical obligations to keep patients informed about the systems impacting their care. For this purpose, an informed consent document should be created in which the patient agrees that AI elements may be used in the treatment of their condition. Developers and healthcare institutions can achieve greater accountability by producing thorough documentation on model limitations, including areas where the system may have lower accuracy or increased risk of error.


The concept of transparency must extend to patients and healthcare providers knowing about the limitations. A fundamental principle for users is that no AI tool can replace a clinician's judgment; AI systems are meant to augment human expertise rather than replace it. A transparent discussion with hospital administrators about limitations ensures that clinicians understand when to rely on AI outputs and when to apply caution, especially in cases where algorithms may underperform, such as with non-representative patient populations or rare medical conditions.

This approach promotes realistic expectations about the AI’s role and prevents overreliance on the technology. By transparently setting boundaries around AI applications, developers and healthcare providers can collaboratively foster trust, safety, and efficacy in AI-driven healthcare.


Conclusion: The Need for Rigorous Standards and Validation

As AI continues to evolve and integrate into healthcare, the risks associated with false or misleading AI systems must be addressed with urgency. Ensuring that AI tools are subject to rigorous validation, transparent development processes, and continuous monitoring is critical for patient safety and trust in AI technologies.

Regulatory bodies, such as the FDA and EMA, play a key role in setting standards for AI in healthcare. However, these organizations must evolve to keep pace with the unique challenges posed by AI-driven tools, such as algorithm transparency, data bias, and the continuous learning nature of AI systems.

Accountability and transparency are fundamental to the ethical and effective use of AI in healthcare. By distributing responsibility across stakeholders, enhancing explainability, implementing continuous monitoring, and protecting patient privacy, healthcare AI can meet the high standards necessary for safe clinical application. Ultimately, a structured approach to accountability and transparency not only protects patients but also strengthens the credibility and reliability of AI technologies in healthcare, ensuring that these tools serve as true allies in improving patient outcomes and healthcare efficiency.

In the end, the healthcare community must foster a culture of skepticism and rigorous evaluation when adopting new AI technologies. By prioritizing safety, ethics, and transparency, AI can achieve its full potential as a transformative force in healthcare, delivering more accurate diagnoses, better treatments, and improved patient outcomes.


[1] The Nobel Prize webpage: https://www.nobelprize.org/prizes/physics/2024/press-release/

[2] "Greek Medicine – The Hippocratic Oath". www.nlm.nih.gov . National Library of Medicine – NIH. Retrieved 29 July 2020

[3] Xsolis. (2023, November 13). The evolution of AI in healthcare. Xsolis. https://www.xsolis.com/blog/the-evolution-of-ai-in-healthcare/


[4] Copeland, B. (2024, October 14). Artificial intelligence (AI) | Definition, Examples, Types, Applications, Companies, & Facts. Encyclopedia Britannica. https://www.britannica.com/technology/artificial-intelligence

[5] Autonomous Artificial Intelligence Guide: The future of AI. (n.d.). https://www.algotive.ai/blog/autonomous-artificial-intelligence-guide-the-future-of-ai

[6] Delua, J. (2024, August 23). Supervised vs Unsupervised Learning. IBM. https://www.ibm.com/think/topics/supervised-vs-unsupervised-learning

[7] Dymling, S. (2024, April 4). GL_blog: The risks of poor data quality in AI systems. twoday. Retrieved October 10, 2024, from https://www.twoday.com/blog/the-risks-of-poor-data-quality-in-ai-systems

[8] Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.

[9] AdvaMed. Artificial Intelligence in Medical Technology: Myths vs. Facts. Available at: https://www.advamed.org/wp-content/uploads/2024/02/AI-Myths-vs.-Facts.pdf

[10] U.S. Department of Health and Human Services. Food and Drug Administration [FDA]. (2024, March 15). Artificial intelligence & medical products. FDA. Retrieved October 15, 2024, from https://www.fda.gov/media/177030/download?attachment

[11] Yampolskiy, R.V. On monitorability of AI. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00420-x

[12] "AI washing explained: Everything you need to know". TechTarget. 29 February 2024. Retrieved 5 June 2024.

[13] Understanding Artificial Intelligence: Real vs. Fake AI – Qymatix Predictive Sales Software. (n.d.). https://qymatix.de/en/artificial-intelligence-real-vs-fake-ai/#:~:text=A%20prominent%20example%20of%20%E2%80%9Cfake,independently%20or%20solve%20complex%20problems .

[14] Hassan, S., MD. (2023, September 24). Analyzing the downfall: Babylon Health AI Chatbot’s journey to bankruptcy. https://www.dhirubhai.net/pulse/analyzing-downfall-babylon-health-ai-chatbots-journey-hassan-md/

[15] BBC News. (2018, June 27). GP at hand: 'Chatbot gave wrong diagnosis'. BBC News. https://www.bbc.com/news/technology-44635134

[16] Visibelli, A., Roncaglia, B., Spiga, O., & Santucci, A. (2023). The impact of artificial intelligence in the odyssey of rare Diseases. Biomedicines, 11(3), 887. https://doi.org/10.3390/biomedicines11030887

[17] al, S. (2021). A paradigm shift in research: exploring the intersection of artificial intelligence and research methodology. International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences, 11(3). https://doi.org/10.37082/ijirmps.v11.i3.230125

[18] Aidoc. (2024, July 10). Healthcare AI: Aidoc Always-on AI. https://www.aidoc.com/healthcare-ai/


[19] Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019 Jun;6(2):94-98. doi: 10.7861/futurehosp.6-2-94. PMID: 31363513; PMCID: PMC6616181.

[20] R Miotto, L Li, JT. Dudley. Deep learning to predict patient future diseases from the electronic health records. Proceedings of 38th European Conference on Information Retrieval Research, Padua; Italy, ECIR (2016), 10.1007/978-3-319-30671-1_66

[21] DS Watson, J Krutzinna, IN Bruce, et al. Clinical applications of machine learning algorithms: beyond the black box BMJ, 364 (2019), p. l886, 10.1136/bmj.l886

[22] Stebbing, J., Phelan, A., Griffin, I., Tucker, C., Oechsle, O., Smith, D., & Richardson, P. (2020). COVID-19: combining antiviral and anti-inflammatory treatments. The Lancet Infectious Diseases, 20(4), 400-402.

[23] Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.

[24] Tempus. (2023, July 12). Our History. Tempus. https://www.tempus.com/about-us/our-history/

[25] Mastela, C. (2024, October 11). Revolutionizing drug screening with Tempus AI’s advanced imaging capabilities. Tempus. https://www.tempus.com/resources/content/case-studies/revolutionizing-drug-screening-with-tempus-ais-advanced-imaging-capabilities/?aliId=eyJpIjoieEE3RGtaQzV3bGk3T0M3bCIsInQiOiJ5VUZrY0NKNkwwQVJyXC83RHpLMTNzQT09In0%253D

[26] End-to-End drug discovery. (n.d.). BenevolentAI (AMS: AI). https://www.benevolent.com/benevolent-platform/end-end-drug-discovery/

[27] Qventus, Inc. | Automate Your Patient Flow. (n.d.). Qventus, Inc. https://qventus.com/

[28] Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005. https://doi.org/10.1016/j.jrt.2020.100005

[29] Lechterman, Theodore M. (Forthcoming). The concept of accountability in AI ethics and governance. In J. Bullock, Y.C. Chen, J. Himmelreich, V. Hudson, A. Korinek, M. Young, and B. Zhang (Eds.), The Oxford handbook of AI governance. Oxford University Press.

[30] Challen R, Denny J, Pitt M, et al. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28(3):231-237.

[31] Collina, L., Sayyadi, M., & Provitera, M. (2023, November 6). Critical issues about A.I. accountability answered. California Management Review. https://cmr.berkeley.edu/2023/11/critical-issues-about-a-i-accountability-answered/

[32] Renal and Urology News. (2024, May 7). Patients should be informed how artificial intelligence is used in their care. https://www.renalandurologynews.com/features/patients-should-be-informed-how-artificial-intelligence-is-used-in-their-care/
