Abstract:
|
We describe a novel methodological approach to assessing the inter-rater reliability of ratings of the amount of shared decision making in a patient-physician clinical encounter. Two raters each assess recordings of patient-physician encounters across three clinical sites using the OPTION5 shared-decision-making instrument. The desired output is the inter-rater intra-class correlation coefficient (ICC) of the OPTION5 scores, accounting for heterogeneity between sites and heteroscedasticity of ratings with respect to the true amount of shared decision making. We describe a three-level hierarchical model with random effects for patient-physician encounter and clinical site, covariates for rater and other predictors, and a variance-mean function, estimated by a Bayesian method. We show that the ICC varies widely depending on whether the encounters being distinguished are restricted to the same site and on the amount of shared decision making in the encounters. Because current applied practice in shared decision making often ignores these subtleties, we argue for the introduction of standards regarding the definition of the ICC and other measures of inter-rater reliability in this field.
|