Using Latent Variable Modeling and Multiple Imputation to Calibrate Rater Bias: Application to a Diagnosis of Posttraumatic Stress Disorder
Robert D Gibbons, University of Illinois at Chicago 
Bonnie L Green, Georgetown University Medical Center 
*Juned Siddique, Northwestern University 

Keywords: Bayesian, censoring, latent variable, multiple imputation, ordinal probit, PTSD

We describe an approach that uses latent variable modeling and multiple imputation to calibrate rater bias when one group of raters tends to be more lenient than another. We apply our model to diagnoses of posttraumatic stress disorder (PTSD) from a depression study in which nurse practitioners were twice as likely as clinical psychologists to diagnose PTSD, even though participants were randomly assigned to either a nurse or a psychologist. Our method assumes there exists an unobserved moderate PTSD category: nurses assign a positive diagnosis to participants in this category, leading to high observed rates of PTSD, whereas psychologists assign a negative diagnosis, leading to low observed rates. We present a Bayesian random effects censored ordinal probit model that allows us to calibrate the PTSD diagnoses across rater types by multiply imputing the moderate PTSD category. Our model appears to balance PTSD rates across nurses and psychologists and provides a good fit to the data. It also preserves between-rater variability. After calibrating the diagnoses of PTSD across rater types, we perform an analysis examining the effects of comorbid PTSD on changes in depression scores over time. We compare our results to an analysis that uses the original diagnoses and show that calibrating the PTSD diagnoses can lead to different inferences.
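
To make the censoring idea concrete, the sketch below (Python) illustrates one data-augmentation step for a censored ordinal probit with a rater random effect: the observed binary diagnosis only brackets the latent three-level category, and the latent severity is drawn from a truncated normal consistent with that bracket. This is a minimal illustration, not the authors' implementation; the thresholds, effect values, and the final calibration rule are assumptions made purely for demonstration.

```python
# Minimal illustrative sketch (NOT the authors' implementation) of one
# data-augmentation step for a censored ordinal probit with a rater
# random effect. Thresholds, effect values, and the calibration rule
# below are assumptions for illustration only.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def censoring_interval(diagnosis, rater_type, cuts):
    """Map an observed binary diagnosis to the latent interval it is
    consistent with, given the rater type.
    cuts = (c1, c2): thresholds separating negative / moderate / positive.
    Nurses call 'moderate' positive; psychologists call it negative."""
    c1, c2 = cuts
    if rater_type == "nurse":
        # nurse positive -> latent in (c1, inf): moderate or positive
        return (c1, np.inf) if diagnosis == 1 else (-np.inf, c1)
    else:  # psychologist
        # psychologist negative -> latent in (-inf, c2): negative or moderate
        return (c2, np.inf) if diagnosis == 1 else (-np.inf, c2)

def sample_latent_severity(eta, lo, hi):
    """Draw latent severity Y* ~ N(eta, 1) truncated to (lo, hi)."""
    a, b = lo - eta, hi - eta          # bounds standardized to scale = 1
    return truncnorm.rvs(a, b, loc=eta, scale=1.0, random_state=rng)

def impute_category(y_star, cuts):
    """Assign the three-level latent category from the imputed Y*."""
    c1, c2 = cuts
    return 0 if y_star < c1 else (1 if y_star < c2 else 2)

# --- toy usage (all numbers assumed) -----------------------------------
cuts = (0.0, 1.0)                      # assumed category thresholds
eta = 0.3 + rng.normal(0, 0.5)         # assumed fixed effect + rater effect
lo, hi = censoring_interval(diagnosis=1, rater_type="nurse", cuts=cuts)
y_star = sample_latent_severity(eta, lo, hi)
category = impute_category(y_star, cuts)       # 0 = neg, 1 = moderate, 2 = pos
calibrated_dx = int(category == 2)             # one simple calibration rule:
                                               # count only 'positive' as PTSD
print(category, calibrated_dx)
```

In a full multiple-imputation analysis, draws like this would be repeated across posterior samples of the model parameters to produce several completed data sets, each carrying a calibrated diagnosis; that machinery is beyond this sketch.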