Clinical Decision Support in Medical Imaging - all Take Home Messages summarized. More collaborators are welcomed!
Chepelev et al Journal of Digital Imaging 2021. Open Access download here: https://rdcu.be/cfQI8

Thank you for the kind comments regarding my explanation of Figures 1 and 2. This will be the last post for now, and it will include an explanation of Figure 3 of the paper that you can access here: https://rdcu.be/cfQI8

If you would like to collaborate on this or other big data projects in the future, please reach out to me via LinkedIn, or if you would rather email me directly, please use this email: [email protected]

As before, here are the Take Home Messages so far:

Take Home Message #1: 57% of advanced imaging requisitions are appropriate!  Great news and room for opportunity. 

Take Home Message #2: Almost 15% are gray – that means they did not map to appropriate use criteria (AUC). Way better than the Demonstration Project.

Take Home Message #3: The X-axis is not time but rather the number of learning interactions. EMR research that implements this new methodology is very promising. (If you have time-based data and you are still “stuck” with your radiology or non-radiology data after reading all of these posts and re-reading the paper, email me and I will try to help; a small sketch of this re-indexing also follows these messages.)

Take Home Message #4: CDS does work and the benefit of using it and sticking with it is better appropriateness scores!

Take Home Message #5: CDS really does improve scores – when you follow the EMR experience (again, do not think of time since implementation) of the people who ordered CT, MRI, US, and Nuclear Medicine studies, there are unquestionably more appropriate studies being performed in the United States as a result of CDS use.
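Since the “interactions, not time” point is the one readers get stuck on most often, here is a minimal sketch of that re-indexing in pandas. The column names (provider_id, order_time, score) and the toy data are my own illustrative assumptions, not the paper's actual pipeline or schema:

```python
# Minimal sketch: re-index time-stamped requisitions by each provider's
# nth "learning interaction". Column names and data are hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "provider_id": ["A", "A", "B", "A", "B"],
    "order_time": pd.to_datetime(
        ["2018-01-05", "2018-03-12", "2018-02-01", "2019-07-30", "2018-02-15"]
    ),
    "score": ["yellow", "green", "green", "green", "red"],
})

# Sort within each provider by time, then number their orders 1..n.
# The interaction index, not the calendar date, becomes the x-axis.
orders = orders.sort_values(["provider_id", "order_time"])
orders["interaction"] = orders.groupby("provider_id").cumcount() + 1

# Green rate at each interaction index, pooled across providers.
green_rate = (
    orders.assign(is_green=orders["score"].eq("green"))
          .groupby("interaction")["is_green"]
          .mean()
)
print(green_rate)
```

The key step is the group-wise cumulative count: two providers whose 10th orders are years apart on the calendar still line up at x = 10.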

Now for Figure 3.

Figure 3 also unequivocally proves that CDS improves the green rate (appropriateness scores for high-cost imaging) using the AUC as determined by the American College of Radiology. However, Figure 3 uses another new method that is substantially different from earlier ones. This assessment methodology is complementary to Figure 2 - it is new and will likely take more manuscripts until it is intuitive. I hope it makes sense to you after you read this post. You can continue to email me if you do not want to publicly ask a question.

Figure 3 (see below) has 95% confidence intervals, and for the green rate, it shows (very high) statistical significance. However, the x-axis has changed. Please remember that it is not time! Figure 3 includes all 234,035 United States ordering providers who submitted fewer than 201 requisitions each. What Leo Chepelev and the Harvard School of Public Health expert statisticians did in Figure 3 was to place each of these providers into one (and only one) of 20 bins, where the bins represent the number of times that they used advanced imaging over the entire observation period. Everyone is counted only once, and everyone has a level. A provider who only ordered 9 studies (i.e., a beginner) would be in bin 1, but a provider who ordered between 191 and 200 studies would be in bin 20.
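For readers who think in code, here is a minimal sketch of that binning, assuming you have already computed a per-provider requisition total (the variable names and toy counts are illustrative only, not the authors' code):

```python
# Minimal sketch of the Figure 3 binning: 20 bins of width 10,
# bin 1 = 1-10 requisitions, ..., bin 20 = 191-200. Toy data only.
import pandas as pd

totals = pd.DataFrame({
    "provider_id": ["A", "B", "C", "D"],
    "n_requisitions": [9, 47, 120, 200],
})

# pd.cut with right-closed edges 0, 10, ..., 200 places each provider
# in exactly one of 20 bins; a provider with 9 orders lands in bin 1.
totals["bin"] = pd.cut(
    totals["n_requisitions"],
    bins=range(0, 201, 10),
    labels=range(1, 21),
)
print(totals)
```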

Carrying on with the analogy of going to the gym, each of the 234,035 United States ordering providers is ranked by the total number of times that they went to the gym, so each person has a single representation in exactly one bin. The y-axis is the same: the green, red, and yellow rates.

[Annotated Figure 3: green, yellow, and red rates across the 20 provider bins]

Above is the annotated Figure 3, showing the highly significant increase in the green rate – it increases most dramatically over the first 2 bins, but the gains in appropriateness continue across the remaining bins. Note that in this analysis, the decrease in the red rate was not statistically significant (its 95% confidence interval crosses 0). Considering that the “green plus red” difference was on the order of 9%, the population of providers who submitted more requisitions was also submitting significantly more requisitions scored as appropriate.
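To make the “confidence interval crosses 0” criterion concrete, here is a toy illustration using the standard normal approximation for the difference of two proportions. The counts are made up; the study's actual statistics were of course done by the expert statisticians, not by this snippet:

```python
# Toy illustration of the significance criterion described above:
# a change is called significant only if its 95% CI excludes 0.
# Counts below are invented, not the study's data.
import math

def rate_diff_ci(k1, n1, k2, n2, z=1.96):
    """95% CI (normal approximation) for the change p2 - p1
    between two binomial samples (k successes out of n trials)."""
    p1, p2 = k1 / n1, k2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p2 - p1
    return diff - z * se, diff + z * se

lo, hi = rate_diff_ci(k1=450, n1=1000, k2=520, n2=1000)
significant = not (lo <= 0 <= hi)  # CI crossing 0 => not significant
print(f"95% CI for the change: ({lo:.3f}, {hi:.3f}); significant: {significant}")
```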

Taken alone, the results in Figure 3 could have been interpreted as completely attributable to differences in the types of people who submit more or fewer requisitions over the observed portion of their career using CareSelect. However, taken together with the results in Figure 2, we have robust statistical support for the fact that increased interactions with CDSM result in appropriateness gains within a fixed group of providers (as in Figure 2), as well as when comparing different groups of providers with different levels of CDSM experience (as in Figure 3).

To summarize and carry on with the analogy of the gym – Figure 1 looks at the shape of everyone who went to the gym after a “New Year’s resolution”: we looked at them during a particular session specified on the x-axis and saw how they were doing during those specific sessions. Figure 2 looks at a selected population of individuals who went to the gym at least 200 times and studies, with statistical rigor, the improvement of this constant group of individuals as they progressed. That gives the equipment (CareSelect from Change, using all AUC from the American College of Radiology over all instances for 3 years) a fair (i.e., statistically valid) assessment. If we simply assumed that getting a gym membership itself had somehow improved physical strength, we would have been incorrect – and yet, this unfortunately has been done in multiple studies assessing CDSM up to this point.

Last take home message:

Take Home Message #6: Looking at the provider-based data, either in a fixed group as in Figure 2 or across groups as in Figure 3, CDS really improves scores. Figure 1 provides an observational overview, and like imaging CDS papers before this one, Figure 1 considered alone is insufficient as proof of CDS effectiveness. Figures 2 and 3 prove that using CDS for high-cost imaging using CareSelect improves appropriateness scores as determined by the American College of Radiology.
