DIA is over for another year - what did I learn?
I learnt that the 'L' train is the best way to get to the conference center from Chicago O'Hare airport. It costs only $5 versus $40 for a taxi, and you can watch the cars sitting in a traffic jam on the I-90 for much of the journey! But not many people seem to know that.
I was at DIA this year in Chicago representing the Metrics Champion Consortium. We had a poster describing how we developed the six basic TMF metrics. Find out more here. I was also presenting some survey data at a Risk-Based Monitoring (RBM) session. There are so many streams and sessions you could attend that it makes your head spin. So this year, I focused on data quality and integrity as well as Risk-Based Monitoring. At the MCC, we have work groups in both of these areas, so more background is always useful.
For data quality and integrity, the definition of critical data and processes (which can be protocol dependent) emerged as a strong theme, arising of course from ICH E6 (R2). Once you have defined these, you can take a risk-based approach to quality and focus your efforts there, accepting that errors will occur elsewhere. There will always be errors, and one of the speakers talked about the human factors behind them. Are they caused by deviations, mistakes, lapses or working under pressure? Or could they even be intentional sometimes?
There is a cross-over to RBM, which requires Centralized Monitoring of data: via Key Risk Indicators and/or review of all the data for trends, patterns and so on. There were some nice examples of what Centralized Monitoring can detect. In one, a site decided to incentivise staff to reduce the number of repeated tests on samples. Unfortunately, repeated tests were sometimes needed due to the nature of the test, so some staff got round it by re-using a sample that they knew would not fail the test. They got their bonus, but the data was wrong. It was detected because results from different patients were too similar. It is shocking that people would do this, but it shows how incentives have to be thought through carefully. It also shows the value of Centralized Monitoring in picking up such anomalies.
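For the technically minded, the principle behind this kind of check is quite simple. Here is a minimal sketch in Python; the patient IDs, values and tolerance are all invented for illustration, and real centralized monitoring uses far more sophisticated statistical methods than this.

    from itertools import combinations

    # Hypothetical data: patient ID -> series of results from one lab test.
    site_results = {
        "P001": [4.1, 4.3, 4.2, 4.4],
        "P002": [5.0, 5.6, 5.2, 5.9],
        "P003": [4.1, 4.3, 4.2, 4.4],  # near-identical to P001: a red flag
    }

    def too_similar(a, b, tol=0.05):
        # Two series match if every paired value agrees within the tolerance.
        return len(a) == len(b) and all(abs(x - y) <= tol for x, y in zip(a, b))

    # Compare every pair of patients and flag suspicious matches for review.
    for (pid_a, res_a), (pid_b, res_b) in combinations(site_results.items(), 2):
        if too_similar(res_a, res_b):
            print(f"Review: {pid_a} and {pid_b} have near-identical results")

Even a toy check like this would flag the re-used sample in the example above, because genuinely independent patients rarely produce matching result series.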
A term used very often in the presentations was “root cause analysis”, and I spoke with a number of the presenters about this. It's easy to say that root cause analysis has been carried out, but do people really know how to do it? When the result so often seems to be “re-training”, we can be sure that the real root cause has not been established. Anyone who knows me will know that I think DIGR® is the answer. Contact me if you want to know more.
Thank you to Linda Sullivan at the Metrics Champion Consortium for giving me the opportunity to attend!