Healthcare Innovation - Evaluation & Deployment

Evaluating Healthcare AI solutions

Recently, there has been a tremendous increase in the number of AI applications in healthcare, roughly a fourfold increase over the last decade. Healthcare providers have begun adopting AI solutions at scale, which makes it imperative to establish methodologies for evaluating them. It is important to understand the evaluation frameworks that help distinguish good AI applications from bad ones.

Good vs. Bad AI Models

It has been observed that most AI-based models are not deployable as-is in a clinical setting, as they fail to produce a useful outcome for healthcare stakeholders. We know about model evaluation metrics such as precision, recall, and accuracy. These metrics do an excellent job of measuring a model’s ability to classify or predict, but does that make the model a good AI solution from a healthcare perspective?

We have to think beyond model evaluation metrics, because these metrics do not capture the impact an AI solution produces in a healthcare setting. An AI model has clinical utility only through its ability to mitigate the situation it predicts. Thus, it is important that a model’s output relates to a specific actionable and to the stakeholder responsible for carrying it out.

For example, most healthcare providers already have a rough estimate of the probability that a patient will be readmitted, so a model that predicts patient readmission post-discharge may not be useful if it conveys nothing the care team does not already know. On the contrary, if the model provides insights into actionable steps the relevant stakeholder could take during the in-patient stay, then it is a genuinely useful application of AI.

Impact Evaluation of AI solutions

Moving beyond model metrics, the key questions that need to be addressed while evaluating the clinical utility of an AI solution are as follows:

What is the quality & quantity of the dataset available to train the model for the AI solution? 

An AI solution is only as good as the data used to train its model. If the dataset is biased or has data integrity issues, the same will be reflected in the model as well.

Is the outcome of the AI solution connected to an action item that is feasible to be implemented in the healthcare setting? 

The outcome of the AI model, if not mapped to an action item, will not be of any clinical utility. For example, if a solution predicts the occurrence of a particular type of cancer for which current oncology practice offers no intervention, then the solution has no practical utility.

What is the reaction time offered by the AI solution to the stakeholders for the corrective action? 

The reaction time available to healthcare providers determines the efficacy of the solution. For example, if a solution predicts the occurrence of cardiac arrest 2 minutes before the event, it doesn't give sufficient time to act on the prediction, whereas a prediction horizon of 2 hours allows the patient's life to be saved by pre-emptive action.

What are the resources needed to take the action?

The cost of implementing the action associated with the AI solution's outcome determines the feasibility of the solution. If a setting lacks the manpower, equipment, or processes to take the corrective action, the purpose of the AI solution is defeated.

What is the incentive of the mitigating action?

The mitigating action should be of direct utility to the stakeholder and must result in a favourable impact, such as saving patients’ lives, improving the patient experience, bringing operational efficiency, reducing the cost of healthcare delivery, or reducing the burden on healthcare providers.

What are the regulatory aspects related to data protection, medical-grade solution efficacy, etc.?

The adoption of an AI solution in a healthcare setting depends on whether the solution conforms to the relevant regulatory requirements. Additionally, the solution has to comply with liability constraints, data security, and patient information security requirements. Further, while black-box models are also deployed in healthcare settings, they face resistance from healthcare providers if it cannot be explained how the solution arrives at a particular outcome.
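The six questions above can be captured as a simple screening checklist. The sketch below is illustrative only: the field names, the pass/fail criteria, and the 60-minute reaction-time threshold are my own assumptions, not part of any published framework.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Screening checklist for the six impact-evaluation questions.
    All fields and thresholds are illustrative assumptions."""
    data_quality_ok: bool        # Q1: dataset free of known bias/integrity issues
    action_feasible: bool        # Q2: outcome maps to an implementable action
    reaction_time_min: float     # Q3: lead time (minutes) the prediction gives
    resources_available: bool    # Q4: manpower/equipment/processes in place
    stakeholder_incentive: bool  # Q5: action yields a favourable impact
    regulatory_compliant: bool   # Q6: meets data-protection & medical-grade rules

    def gaps(self, min_reaction_min: float = 60.0) -> list:
        """Return the failed criteria; an empty list means the screen passes."""
        checks = {
            "data quality": self.data_quality_ok,
            "actionable outcome": self.action_feasible,
            "reaction time": self.reaction_time_min >= min_reaction_min,
            "resources": self.resources_available,
            "stakeholder incentive": self.stakeholder_incentive,
            "regulatory compliance": self.regulatory_compliant,
        }
        return [name for name, ok in checks.items() if not ok]

# Example: a cardiac-arrest predictor with only a 2-minute lead time
assessment = ImpactAssessment(True, True, 2.0, True, True, True)
print(assessment.gaps())  # ['reaction time']
```

A checklist like this is only a first-pass screen; each failed criterion still needs a qualitative review with the relevant stakeholders.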

Clinical Evaluation of AI solutions

The International Medical Device Regulators Forum (IMDRF) has developed a framework for the clinical evaluation of AI solutions which is followed by global regulators, including the US Food and Drug Administration (US FDA). The framework allows assessing the risk & impact of an AI solution, defined as Software as a Medical Device (SaMD).

The IMDRF defines Software as a Medical Device (SaMD) as

software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.

It is important to note that SaMD is not part of a hardware device, and the purpose of the SaMD must be to treat, screen, diagnose, monitor, mitigate, prevent and/or predict disease. AI solutions in healthcare are treated as SaMD, and the corresponding evaluation frameworks apply. The established framework for clinical evaluation has three parts:

[Figure: the three parts of the SaMD clinical evaluation framework]

Regulatory Environment for AI in Healthcare

Complying with the regulations is a time-consuming & tedious process, yet it is important for clinically ensuring the efficacy & safety of the AI solution. FDA’s regulatory framework starts with a market application, which consists of a definition statement and the risk category of the AI solution. The definition statement must clearly specify the purpose of the AI solution, the intended target population, and the intended users of the solution. Further, the solution provider needs to establish the risk category as per the table given below.

IMDRF's SaMD Risk Categories
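The IMDRF risk categorisation combines the state of the healthcare situation with the significance of the information the SaMD provides. The lookup below encodes the published IMDRF matrix (Category I is the lowest risk, IV the highest); the function and key names are my own.

```python
# IMDRF SaMD risk categories, indexed by
# (state of healthcare situation, significance of information provided).
RISK_CATEGORY = {
    ("critical",    "treat or diagnose"):           "IV",
    ("critical",    "drive clinical management"):   "III",
    ("critical",    "inform clinical management"):  "II",
    ("serious",     "treat or diagnose"):           "III",
    ("serious",     "drive clinical management"):   "II",
    ("serious",     "inform clinical management"):  "I",
    ("non-serious", "treat or diagnose"):           "II",
    ("non-serious", "drive clinical management"):   "I",
    ("non-serious", "inform clinical management"):  "I",
}

def samd_risk_category(situation: str, significance: str) -> str:
    """Look up the IMDRF risk category for a SaMD."""
    return RISK_CATEGORY[(situation.lower(), significance.lower())]

# Example: software that drives clinical management in a critical situation
print(samd_risk_category("critical", "drive clinical management"))  # III
```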

Depending on the risk category, the regulations mandate a pre-market notification (the 510(k)), a De Novo notification, or a Premarket Approval (PMA).

  • 510(k): If an AI solution is similar to an existing cleared solution, called the predicate, in terms of the intended use case & underlying technology, then the solution provider may submit a pre-market notification, or 510(k). As more & more solutions get approved, the 510(k) will become an easy & efficient way to obtain the regulator’s approval.
  • De novo notification: If there's no predicate, then the solution provider may submit a De Novo notification. De Novo notifications are limited to Risk Category I and Category II solutions. The De Novo notification should include clinical data, bench performance testing data and a description of the benefits of the AI solution.
  • Premarket Approval (PMA): For risk category III & IV solutions, a Premarket Approval (PMA) is required. PMAs are the most stringent regulatory applications and include rigorous technical studies, non-clinical laboratory studies, and clinical investigations.
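The choice among the three pathways described above can be summarised as a small decision rule. This is a hypothetical sketch of the logic as stated in this article, not regulatory advice; real submissions involve many more considerations.

```python
def fda_pathway(risk_category: str, has_predicate: bool) -> str:
    """Suggest the FDA submission type for a SaMD (illustrative only).

    risk_category: IMDRF category "I".."IV".
    has_predicate: a substantially equivalent cleared solution exists
    with the same intended use & underlying technology.
    """
    if risk_category in ("III", "IV"):
        return "PMA"        # highest-risk solutions need full premarket approval
    if has_predicate:
        return "510(k)"     # show substantial equivalence to the predicate
    return "De Novo"        # novel Category I/II solution with no predicate

print(fda_pathway("II", has_predicate=True))   # 510(k)
print(fda_pathway("I", has_predicate=False))   # De Novo
print(fda_pathway("IV", has_predicate=False))  # PMA
```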

The evaluation & regulatory framework discussed in this article may be employed by healthcare stakeholders to create custom criteria for evaluating deep-tech AI solutions, according to the deployment setting, the target stakeholders, and the intended utility of the solution.

Healthcare Innovation Challenge

At NASSCOM CoE’s Healthcare vertical, Lifesciences & Healthcare Innovation Forum (LHIF), we work with various Corporates, Hospitals & Pharma Companies to enable the implementation of Co-Innovation Projects requiring the co-development & deployment of AI solutions.

We are organizing the Healthcare Innovation Challenge (HIC), which is focussed on creating a competitive edge & operational excellence for Hospitals by enabling collaborative & frugal innovation. The Healthcare Innovation Challenge is meant for Hospitals: they nominate Use Cases relevant to them, which is followed by the curation, evaluation & deployment of technology-led innovative solutions that address the nominated Use Cases.

HIC allows the partnering Hospitals to:

  • Discover the best HealthTech solutions in India for the Use Cases nominated by the Hospitals
  • Deploy & Own the solutions at a low cost
  • Get implementation support for deploying the Winning Solution for each Use Case
  • Get Branding & Media coverage for the Hospitals

HIC will be launched at the 9th edition of the LHIF Virtual Conference on 18th December. On that day, the application will go live for startups to register for the use cases nominated by the partnering Hospitals. The timeline for the challenge is given below:

[Figure: HIC challenge timeline]

The challenge will provide media coverage and publicity for the participating hospitals, association partners, and the startups selected through it.

The indicative use cases for this edition of HIC are given below, though Hospitals are encouraged to nominate the Use Cases most relevant to them.

HIC Use cases for Hospitals

Currently, we are inviting private hospitals to nominate their use cases and join HIC as a Hospital Partner. For more details on the Healthcare Innovation Challenge, please reach out to [email protected].
