The Promise and Peril of AI: Charting a Course Through Healthcare's Wild West

The Hong Kong University of Science and Technology (HKUST) has unveiled four AI models aimed at supporting care for 30 different types of cancers and diseases, utilizing the university's advanced AI supercomputing facility.

The four new AI-driven models include MOME, a breast cancer diagnostic tool designed to distinguish malignant from benign breast lumps using MRI scans. The university claims the AI achieves accuracy comparable to radiologists with five or more years of experience. It is intended to offer a less invasive alternative to biopsy and can also predict a patient’s response to neoadjuvant chemotherapy.

The second new tool is the mSTAR Pathology Assistant Tool, designed to assist pathologists by streamlining or automating up to 40 diagnostic and prognostic tasks, thereby reducing their workload. The system works by directly modeling whole slide images and augmenting them with multimodal knowledge, i.e. combining different types of data into a single, more comprehensive representation.
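
To make the multimodal idea concrete, here is a minimal, purely illustrative Python sketch of "late fusion": tile embeddings from a whole slide image are pooled and concatenated with a knowledge-text embedding before a classifier head. All names, shapes, and the random data are hypothetical; this is not mSTAR's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins -- not mSTAR's real data or architecture:
# 100 tile embeddings from one whole-slide image, plus one text-knowledge vector.
tile_embeddings = rng.normal(size=(100, 256))  # one row per slide tile
knowledge_vector = rng.normal(size=64)         # e.g. an embedded pathology report

# Aggregate the slide into a single vector (mean pooling is the simplest choice).
slide_vector = tile_embeddings.mean(axis=0)    # shape (256,)

# Multimodal late fusion: concatenate the two modalities into one feature vector.
fused = np.concatenate([slide_vector, knowledge_vector])  # shape (320,)

# A placeholder linear head standing in for a trained diagnostic classifier.
weights = rng.normal(size=fused.shape[0])
logit = fused @ weights
probability = 1.0 / (1.0 + np.exp(-logit))     # sigmoid for a binary task
print(f"Toy probability of the positive class: {probability:.3f}")
```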

AI is revolutionizing healthcare. It not only analyzes and acts on big data with unprecedented speed and accuracy, but also leverages machine learning to continuously learn, adapt, and enhance its performance.

The rapid pace of AI development has led many healthcare systems and organizations to consider incorporating it into their work. The UK's National Health Service (NHS), for example, is exploring the use of AI to improve workflow and diagnosis.

Sir John Bell, a senior government advisor on life sciences and president of the Ellison Institute of Technology in Oxford, stated that allowing AI access to all data within safe and secure research environments would improve the representativeness, accuracy, and equality of AI tools. This, he argued, would benefit all segments of society, reduce the financial and economic burden of running a world-leading National Health Service, and ultimately lead to a healthier nation.

The challenge lies in regulating AI for safe deployment. AI remains something of a wild-west technology, inspiring both hope and fear: it could make our lives easier, or, much as nuclear energy was once perceived, it could potentially lead to our demise.

One potential pathway to regulating healthcare AI models is the Software as a Medical Device (SaMD) route. The FDA's efforts in this area produced its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, published in January 2021.

Key points from the Action Plan:

  • Total Product Lifecycle Approach: The FDA recognizes the unique characteristics of AI/ML-based SaMD, which can learn and adapt over time. This approach emphasizes the importance of ongoing monitoring and evaluation throughout the device's lifecycle.
  • Premarket Review: The FDA will continue to use premarket review processes to assess the safety and effectiveness of AI/ML-based SaMD. However, the agency acknowledges the need for a more flexible approach to accommodate the evolving nature of these technologies.
  • Real-World Performance Monitoring: The FDA encourages the use of real-world data to monitor the performance of AI/ML-based SaMD. This data can be used to identify potential issues and inform future regulatory decisions (a minimal monitoring sketch follows this list).
  • Transparency and Communication: The FDA emphasizes the importance of transparency and communication between manufacturers, regulators, and healthcare providers. This includes clear labeling, user instructions, and regular updates on the performance of AI/ML-based SaMD.
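
As a toy illustration of real-world performance monitoring, the sketch below tracks rolling accuracy over adjudicated cases and raises a flag when it drifts below a declared baseline. The thresholds and window size are invented for the example; the Action Plan does not prescribe any such numbers.

```python
from collections import deque

# All numbers below are invented for illustration; the Action Plan sets none.
BASELINE_ACCURACY = 0.92  # accuracy declared at premarket review
TOLERANCE = 0.05          # allowed drop before escalation
WINDOW = 500              # number of recent adjudicated cases to track

recent_outcomes = deque(maxlen=WINDOW)  # 1 = model agreed with ground truth

def record_case(model_correct: bool) -> None:
    """Log one adjudicated case from real-world clinical use."""
    recent_outcomes.append(1 if model_correct else 0)

def performance_alert() -> bool:
    """Flag when rolling real-world accuracy drifts below the declared baseline."""
    if len(recent_outcomes) < WINDOW:
        return False  # not enough evidence yet
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return accuracy < BASELINE_ACCURACY - TOLERANCE
```

In practice such a signal would feed the manufacturer's quality system and, where required, a report back to the regulator.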

A number of AI models already outperform humans in breast cancer detection and in reducing false-positive diagnoses.

AI models are being tested in dermatology, radiology, surgery, disease diagnosis, pharmacy, and even psychiatry, where chatbots are being developed to automatically diagnose conditions such as anxiety and depression.

AI is also likely to drive emerging fields of healthcare, such as personalized medicine, where it can be used to create tailored treatments based on an individual patient's DNA.

The challenge with regulating AI/Machine Learning lies in its capacity for evolution. Machine learning algorithms can be designed to learn from data, adapt to new information, and improve their performance over time. Without careful oversight, this could lead to unintended consequences as AI systems become increasingly autonomous.

Locked versus adaptive AI

It's also important to highlight the distinction between locked and adaptive AI. Locked AI operates on a fixed set of rules and algorithms, making its behavior predictable. It has limited learning capacity and requires manual updates. Most systems we use daily, such as iPhone software, are examples of locked AI. To improve an iPhone's capabilities, it needs software updates.

Adaptive AI systems, on the other hand, can learn and evolve over time, adapting to new information and circumstances. They are flexible, dynamic, and capable of continuous learning, enabling them to self-improve. Adaptive AI systems are well-suited for real-world scenarios.
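
The difference is easy to see in code. In this minimal scikit-learn sketch, the "locked" model is trained once and frozen, while the "adaptive" model keeps updating itself via partial_fit as new data arrives. The data and labeling rule are synthetic placeholders, not any real clinical model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X_initial = rng.normal(size=(200, 5))
y_initial = (X_initial[:, 0] > 0).astype(int)  # toy labeling rule

# "Locked": trained once, then frozen. Changing its behavior requires a
# deliberate re-release, much like shipping a phone software update.
locked = SGDClassifier(loss="log_loss", random_state=0)
locked.fit(X_initial, y_initial)

# "Adaptive": keeps learning from each new batch it encounters in the field.
adaptive = SGDClassifier(loss="log_loss", random_state=0)
adaptive.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))

# New data arrives after deployment.
X_new = rng.normal(size=(50, 5))
y_new = (X_new[:, 0] > 0).astype(int)

adaptive.partial_fit(X_new, y_new)  # the adaptive model updates itself;
# the locked model stays exactly as released until someone retrains it.
```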

Predetermined change control plan

Regulators are also exploring regulation based on a predetermined change control plan. Such a plan sets out, in advance, the rules and guidelines for how an AI system is allowed to develop and behave. The basic idea is that as long as the AI continues to evolve in the manner pre-specified by the manufacturer, it remains compliant; only if it deviates from that path does it need re-authorization.
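
A predetermined change control plan is a regulatory document, not software, but its logic can be sketched: the manufacturer declares up front which kinds of changes are allowed and which performance floors every update must respect. The plan contents and thresholds below are entirely hypothetical.

```python
# Entirely hypothetical plan contents -- real change control plans are
# regulatory documents, not code; this only illustrates the compliance logic.
CHANGE_CONTROL_PLAN = {
    "min_sensitivity": 0.90,  # an updated model may never fall below this
    "min_specificity": 0.85,
    "allowed_changes": {"retrain_on_new_data", "recalibrate_thresholds"},
}

def update_is_compliant(change_type: str,
                        sensitivity: float,
                        specificity: float) -> bool:
    """True if a proposed model update stays inside the pre-declared envelope.

    Updates outside the envelope would need fresh regulatory authorization.
    """
    return (
        change_type in CHANGE_CONTROL_PLAN["allowed_changes"]
        and sensitivity >= CHANGE_CONTROL_PLAN["min_sensitivity"]
        and specificity >= CHANGE_CONTROL_PLAN["min_specificity"]
    )

print(update_is_compliant("retrain_on_new_data", 0.93, 0.88))        # True
print(update_is_compliant("change_model_architecture", 0.95, 0.90))  # False
```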

The European Commission’s (EC) proposed Artificial Intelligence Act

Key Provisions of the AI Act

The AI Act categorizes AI systems into different risk levels and imposes specific requirements based on their potential impact (a classification sketch follows the list):

  1. Unacceptable Risk: AI systems that are considered a clear threat to people's safety or fundamental rights are banned. This includes applications like social scoring systems similar to those used in China.
  2. High-Risk AI: AI systems that pose significant risks to people's safety or fundamental rights are subject to strict regulations. Examples include AI used in critical infrastructure, education, and employment. These systems must undergo rigorous risk assessments, be designed to be robust and secure, and adhere to transparency and accountability principles.
  3. Limited-Risk AI: AI systems with a lower risk profile, such as chatbots, are subject to less stringent requirements. However, they must still comply with transparency obligations, ensuring users are aware they are interacting with an AI system.
  4. Minimal-Risk AI: AI systems with minimal risk, such as AI-enabled video games or spam filters, are largely exempt from regulation.
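
As a rough illustration of the tiering (not the Act's actual legal test, which turns on intended purpose and detailed annexes), a classifier of use cases into these four levels might look like this:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative mapping only; the Act's real rules depend on the system's
# intended purpose, not on a simple lookup table.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "video game opponent": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)  # default cautiously
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_TIERS:
    print(obligations_for(case))
```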

The black box challenge

The "black box challenge" in AI refers to the difficulty in understanding how AI systems, particularly complex ones like deep neural networks, arrive at their decisions. These systems often operate like a black box, taking in input data and producing output, but the internal processes and reasoning behind the output remain opaque.

This lack of transparency can be problematic for several reasons:

  • Trust and Accountability: If we don't understand how an AI system makes decisions, it's difficult to trust its outputs, especially in critical applications like healthcare or autonomous vehicles.
  • Debugging and Improvement: If an AI system makes a mistake, it's challenging to identify the root cause and make necessary improvements.
  • Ethical Considerations: Black box models can perpetuate biases present in the training data, leading to unfair or discriminatory outcomes.

To address the black box challenge, researchers are working on techniques like:

  • Explainable AI (XAI): Developing methods to make AI models more interpretable, such as visualizing the decision-making process or providing simpler explanations.
  • Model Simplification: Creating simpler models that are easier to understand, although this might come at the cost of accuracy.
  • Post-hoc Interpretation: Analyzing a trained model after the fact to understand which input features contribute to its output (an example using permutation importance follows this list).
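
As an example of the post-hoc approach, the sketch below uses scikit-learn's permutation_importance: each feature is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on it. The dataset is synthetic, built so that only the first two features matter.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = ((X[:, 0] + X[:, 1]) > 0).astype(int)  # only features 0 and 1 matter

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc interpretation: shuffle each feature and measure the accuracy drop.
# Features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```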

Many challenges are associated with AI. Some have suggested that many of them can be addressed by regulating training data, similar to the approach used in clinical trials. However, clinical data itself can introduce biases (a simple representation check is sketched below).
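
For instance, one simple, illustrative audit of training data is to check whether each demographic subgroup is adequately represented, since under-representation is a common source of bias. The records and the 30% floor below are invented for the example.

```python
from collections import Counter

# Invented records with a demographic attribute attached to each case.
records = [
    {"label": 1, "group": "A"}, {"label": 0, "group": "A"},
    {"label": 1, "group": "A"}, {"label": 0, "group": "B"},
    {"label": 1, "group": "B"}, {"label": 0, "group": "A"},
]

MIN_SHARE = 0.30  # illustrative floor for each subgroup's representation

counts = Counter(r["group"] for r in records)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    share = n / total
    flag = "" if share >= MIN_SHARE else "  <-- under-represented"
    print(f"group {group}: {share:.0%} of training data{flag}")
```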

