Clinical Evidence Generation Framework for Machine Learning Models

by James Green

As we build models and roll them out into production at our partner institutions, we've had to develop a template - a clinical evidence generation framework and methodology that makes building models a repeatable and predictable process. It has been an amazing learning experience, and one we'd like to share.

The first big concept is that predictive models don't improve outcomes; analytically driven interventions do. And they always have.

To improve outcome {X},

we need intervention {Y},

and intervention {Y} may need model {Z}.

{X}

Let's go through each of these variables one by one. First, {X}, or Problem Formulation:

  • What is the problem that needs to be solved?
  • What are the objective measures or KPIs associated with the problem?

To figure this out, we need to focus on selecting the highest-priority metrics and indicators to be measured: things like impact, success, risk, or harm. Each of these needs to be monitorable for continuous improvement and learning over time. Here are four steps that will help you with Problem Formulation:

  1. Describe the problem in terms of one or more of the following: measurable outcomes, quality or operational metrics, patient and/or clinician experiences, or efficiency of operational processes
  2. Identify any new value-generating opportunities, like saving money, generating more revenue, offering a new business service, or simply learning.
  3. Identify regulatory and compliance standards as well as community/professional best practices. This may include reporting obligations that will be impacted and need to be addressed.
  4. Identify collaboration opportunities such as data sharing, networking, research/grants and of course publication opportunities.

These four Problem Formulation steps may include any of the following components:

  • Clinical Outcomes, such as mortality due to sepsis or pre-operative medical optimization
  • Operational Metrics such as Length of Stay
  • Patient Experience, such as post-operative complications or malpractice risk
  • Provider Experience: reducing burnout is always good
  • Your financial and operational team will love you if you can show a Return on Investment. This may include PACU utilization, OR sequencing, or some other optimization
  • Adjacent to ROI are simple Savings and efficiencies. We built a model that predicts the risk of a patient canceling on the same day as their appointment that saved $500,000 in the first six months of use.
  • You may even be able to generate new capabilities or service lines as a result of your model.
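The Problem Formulation components above can be captured in a simple checklist structure. Here is a minimal Python sketch; the class name, field names, and the example values are our own illustrations, not part of the framework itself:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemFormulation:
    """One {X}: a problem statement with the metrics that make it measurable.
    Field names are illustrative, mapped to the four formulation steps."""
    description: str
    outcome_metrics: list = field(default_factory=list)        # step 1: outcomes, quality/ops metrics
    value_opportunities: list = field(default_factory=list)    # step 2: savings, revenue, learning
    regulatory_standards: list = field(default_factory=list)   # step 3: compliance, reporting
    collaborations: list = field(default_factory=list)         # step 4: data sharing, publications

    def is_measurable(self) -> bool:
        # The framework requires at least one objective metric or KPI.
        return len(self.outcome_metrics) > 0

# Example modeled loosely on the same-day cancellation case mentioned above.
cancellation = ProblemFormulation(
    description="Reduce same-day appointment cancellations",
    outcome_metrics=["same-day cancellation rate"],
    value_opportunities=["recovered appointment revenue"],
)
```

A structure like this makes it easy to reject a proposed project early: if `is_measurable()` is false, there is no KPI to monitor, and the formulation is not yet complete.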

{Y}

That is {X} - Problem Formulation. Once we’ve figured that out, we can analyze workflows and think through {Y}: Solution Formulation. This includes things like:

  • Process change and interventions
  • Communication and information sharing (decision support)
  • Delivery, monitoring, and continued improvement

To conceptualize a solution and its proposed interventions or workflow changes, we need to describe the change in behavior, workflows, or processes as concretely as possible. And we need to ensure a logical and operational linkage between every proposed intervention and at least one of the outcomes/metrics being measured in {X}, our Problem Formulation - and vice versa.

Here are four categories that the solution formulation may include:

  1. Interdisciplinary interventions: Deploying team science and implementation science approaches to implement new workflows, processes, care paths, and/or protocols
  2. Decision Support tools: This must include the information, content, communications, evidence, training, and data that is needed to support the interventions. You can use the Five Rights of Clinical Decision Support to describe information needs: (1) Right information, (2) Right time, (3) Right location, (4) Right person, (5) Right format
  3. Consistent Delivery: How to ensure that the interventions are uniformly and consistently delivered enterprise-wide
  4. Prevention of Harm: This may be the most important. There are many ways that a model could introduce additional risk or harm. This includes implicit bias, discrimination, exclusion or simple clinical error.
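The linkage rule described above - every intervention tied to at least one measured outcome, and every outcome tied to at least one intervention - is mechanically checkable. A minimal sketch (function and variable names are our own, assuming links are recorded as intervention/outcome pairs):

```python
def check_linkage(outcomes, interventions, links):
    """Verify the {X}<->{Y} linkage rule.

    outcomes:      iterable of outcome/metric names from Problem Formulation {X}
    interventions: iterable of intervention names from Solution Formulation {Y}
    links:         set of (intervention, outcome) pairs
    Returns a dict naming any interventions or outcomes left unlinked.
    """
    linked_interventions = {i for i, _ in links}
    linked_outcomes = {o for _, o in links}
    return {
        "unlinked_interventions": set(interventions) - linked_interventions,
        "unlinked_outcomes": set(outcomes) - linked_outcomes,
    }

# Illustrative example: a screening protocol not yet tied to any measured outcome.
report = check_linkage(
    outcomes={"sepsis mortality"},
    interventions={"sepsis care path", "nurse-driven screening"},
    links={("sepsis care path", "sepsis mortality")},
)
```

Running a check like this before build-out surfaces orphaned interventions (effort with no measurable payoff) and orphaned metrics (numbers nobody's workflow actually moves).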

{Z}

Once we have both our Problem {X} and Solution {Y} formulated, then and only then can we allow ourselves to ask what kind of machine learning model might be helpful - that is {Z}. The model could fall into any of these categories:

  • Prediction problems
  • Classification problems
  • Timeliness problems
  • Integration to workflow

We can use the following guiding questions to focus on whether a model is required. If the answer to any of these is “yes” then there is a high likelihood that some kind of algorithmic model will help improve the outcome.

  1. Do we need to predict future events or outcomes?
  2. Do we need to classify patients or events to classes such as high/low risk?
  3. Do we need to provide a personalized or context appropriate recommendation?
  4. Does the prediction need to be made at a particular time?
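The four guiding questions reduce to a simple screening check. A sketch (the question list and function name are ours, paraphrasing the questions above):

```python
GUIDING_QUESTIONS = [
    "Do we need to predict future events or outcomes?",
    "Do we need to classify patients or events (e.g. high/low risk)?",
    "Do we need a personalized or context-appropriate recommendation?",
    "Does the prediction need to be made at a particular time?",
]

def model_likely_helpful(answers):
    """answers: booleans, one per guiding question, in order.
    Per the framework, a single 'yes' suggests an algorithmic
    model is likely to help improve the outcome."""
    return any(answers)

# e.g. a classification need alone is enough to justify exploring a model
needs_model = model_likely_helpful([False, True, False, False])
```

Note the converse: four noes means the intervention {Y} probably stands on its own, and no model {Z} should be built just for the sake of it.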

With this framework we have been able to figure out which models to build and with whom to build them. Of course, once this is done, you still need to address model validation, trustworthiness, the communication plan, and regulatory compliance. To keep things short, we will cover these topics in Part 2 of this blog.

Meanwhile, if you’d like to learn more about Cognome’s currently available models and the TUNER BI platform visit our models page https://cognome.com/ai/ml-models.

Manuel Wahle

Development Architect

2 weeks

It was about time that someone came up with a framework that rationalizes ML in healthcare. This helps to understand where and how ML models can help, and to assess the value of deployed models. It kind of puts healthcare in the center, instead of pushing the deployment of solutions “just because”.
