ADAS-Cog, ADCS-ADL, CDR, and MMSE Rater Performance in TRAILBLAZER-ALZ-2
Because many outcome measures used in Alzheimer's disease clinical trials are long and complex, errors in scale administration and scoring are common. Assessments such as the ADAS-Cog and CDR require extensive training and experience to administer accurately.
However, even with high-quality preparation, raters will still make errors. This reality supports the implementation of central monitoring programs that review and correct rater performance, provide feedback and recalibration, and improve data accuracy.
Lessons from the TRAILBLAZER-ALZ-2 Rater Performance Central Monitoring Program
Data shared in an AD/PD conference poster co-authored by Cogstate highlight lessons from a rater performance monitoring program used in the TRAILBLAZER-ALZ 2 Randomized Clinical Trial of Donanemab in Early Symptomatic Alzheimer Disease.
Rater performance was analyzed across four outcome measures that were part of the study and are frequently used in AD clinical trials: the ADAS-Cog 13, ADCS-ADL, CDR, and MMSE.
Review of assessments showed deviations from standard administration guidelines in 40–60% of administrations. Errors were most frequent on the ADAS-Cog 13, the longest and most complex scale in the battery.
These findings underscore the importance of centrally monitoring rater performance to ensure data integrity and consistency throughout a study, particularly in trials that rely on lengthy, complex scales.