Rigorous study-wide training and centralized rater monitoring help to minimize subjectivity
The failure rate of central nervous system (CNS) clinical trials is high, with fewer than 27 percent of candidates successfully transitioning out of phase 2.1 The major causes of CNS trial failure are the placebo response, poor medication adherence, and reliance on subjective endpoints such as pain or psychiatric rating scales. CNS studies rely heavily on ratings accuracy to demonstrate the efficacy and safety of investigational drugs, yet rater subjectivity and consistency are often overlooked.
Challenges to rater consistency
Standardizing rater assessments is essential for reducing ratings variability across sites and studies. However, there are numerous challenges to rater consistency:
- Variability in skill, training, and consistency among raters
- Country-to-country differences in common rating scales and diagnostic practices
- Differences among scales that are commonly used in clinical practice and those used in clinical trials
- Diverse cultural and language needs that may impact accurate administration of rating scales
In addition, there is a misconception among many sponsors that raters do not require study-specific training. Moreover, site investigators may not be accustomed to standardizing assessments in a clinical trial context.
3 Strategies for improving rater consistency and minimizing subjectivity
CNS trials often involve multiple complex scales and outcome measures, and protocols may even require separate raters for safety and efficacy assessments. Signal detection depends on rating consistency and accuracy among individual raters and across study sites.
1. Standardized, study-wide rater training
There is currently no accepted standard for selecting and training raters to administer scales in CNS trials, and raters may vary widely in their prior training and certification. Without standardized training, raters may use different assessment or interview methodologies across clinical trial participants, study visits, or scales. Developing and delivering a structured rater training program at the outset of a study, with periodic retraining throughout the duration of the trial, can help to address this variability.
This study-specific, protocol-customized program should include a didactic component on the disease process, assessment procedures, and scoring conventions and an applied or practical component. Incorporating the use of structured interviews, where all raters ask the same questions in the same manner across all sites and during every study assessment, helps ensure both intra- and inter-rater reliability. Structured interviews also help maintain rater neutrality and minimize inadvertent expectation setting. As part of the training program, it may also be useful to establish minimum rater qualifications, based on experience with the administration of the rating scale(s) required for the study. Rigorous training helps increase study-wide consistency, leading to accurate, reproducible data.
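Inter-rater reliability of categorical scale scores is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. As an illustrative sketch (not part of any specific rater training program), the calculation can be done in a few lines of pure Python:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each scored the same set of participants.

    1.0 = perfect agreement; 0.0 = agreement expected by chance alone.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of participants on whom the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's score frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Example: two raters score six participants on a 0/1 severity item
print(cohens_kappa([0, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 1]))
```

In practice, weighted kappa or the intraclass correlation coefficient is preferred for ordinal or continuous scales; the principle of correcting raw agreement for chance is the same.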
It may also be useful to develop training courses that help patients improve their ability to accurately and consistently self-report pain. An added benefit of patient training is that it may mitigate the placebo response.
2. Centralized rater programs
Sponsors may also consider utilizing a centralized rater program, where assessments are performed remotely via phone or video conference by a cohort of experienced, calibrated raters. With remote assessments, raters are subject to fewer biases as they are not privy to details of the study or the patient’s clinical history. Centralized rater programs also help reduce site and patient burden, which may help to accelerate study startup and enrollment.
Another advantage of centralized rater programs is increased clinical trial access for both sites and patients. The use of remote raters enables sites that might not otherwise have qualified rater resources to participate in CNS studies, expanding the pool of eligible patients and enhancing clinical trial diversity and equity.
3. Centralized rater monitoring
Centralized rater monitoring or rater surveillance programs can help to minimize subjectivity by providing oversight of key assessments. Monitors can perform independent or confirmatory reviews of assessments to ensure that scoring and administration conventions were followed, and appropriate interview techniques were used.
Moreover, being able to visualize clinical assessment data across an entire study helps monitors and sponsors detect patterns or potential issues in rater consistency and accuracy. For instance, if a site is generating unusually high or low values, the issue can be investigated, and additional training can be provided. Centralized rater monitoring also includes scheduled assessments of rater consistency and reliability to help mitigate rater drift, which can adversely impact data quality, particularly in prolonged studies. If drift is detected, immediate refresher training can be deployed.
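The kind of site-level screen described above can be as simple as flagging sites whose mean scale scores sit unusually far from the study-wide average. The sketch below is illustrative only (the function name, data shape, and z-score threshold are assumptions, not a description of any particular monitoring platform):

```python
from statistics import mean, stdev

def flag_outlier_sites(site_scores, z_threshold=2.0):
    """Flag sites whose mean scale score deviates from the mean of
    all site means by more than z_threshold standard deviations.

    site_scores: dict mapping site ID -> list of scale scores.
    Returns a sorted list of flagged site IDs (a simple screen that
    a monitor would follow up with review and retraining, not an
    automatic judgment of rater error).
    """
    site_means = {site: mean(scores) for site, scores in site_scores.items()}
    grand_mean = mean(site_means.values())
    spread = stdev(site_means.values())  # requires at least two sites
    return sorted(
        site for site, m in site_means.items()
        if abs(m - grand_mean) / spread > z_threshold
    )

# Hypothetical study: seven sites scoring near 20, one near 40
scores = {
    "Site-01": [18, 20], "Site-02": [20, 20], "Site-03": [21, 21],
    "Site-04": [19, 21], "Site-05": [19, 19], "Site-06": [21, 21],
    "Site-07": [20, 20], "Site-08": [39, 41],
}
print(flag_outlier_sites(scores))  # → ['Site-08']
```

Real rater surveillance programs layer richer checks on top of this idea, such as visit-over-visit trends per rater to catch gradual drift rather than only cross-sectional outliers.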
Key Takeaway
Embracing the development and implementation of rigorous study-wide training for all raters helps to increase consistency and minimize subjectivity. For studies where remote assessments are feasible, the use of centralized raters can further reduce bias. Centralized rater monitoring helps detect, mitigate, and remediate inter- or intra-rater drift during the study, ensuring that rating assessments can be relied upon as primary or secondary endpoints.
Precision for Medicine has vast experience in managing rated studies, including both pediatric and adult CNS clinical trials. We have been involved in over 60 CNS clinical trial programs and more than 600 CNS commercialization projects. In addition to our familiarity with the common scales used in CNS and psychiatry, we have a network of vendors specializing in rater training and monitoring.
Reference:
1. BIO, Informa Pharma Intelligence, QLS Advisors. Clinical Development Success Rates and Contributing Factors 2011–2020. February 2021. Available at: https://go.bio.org/rs/490-EHZ-999/images/ClinicalDevelopmentSuccessRates2011_2020.pdf.