Online Program

Wednesday, September 27
Wed, Sep 27, 1:15 PM - 2:30 PM
Thurgood Marshall East
Parallel Session: Statistical Issues in Agreement Studies

Web Tools for Agreement Statistics (300474)

*Lawrence Lin, JBS Consulting Services Inc. 

Keywords: CP, TDI, CCC, Accuracy and Precision Coefficients

Four user-friendly, validated web tools, free for public use, have been developed using R Shiny. These web tools are based entirely on material presented in Lin, L.I., Hedayat, A.S., and Wu, W., Statistical Tools for Measuring Agreement, Springer, NY (2012). Two of the web tools cover the material presented in Chapter 2 of the book for continuous measurements. One handles the case in which the error structure is assumed constant or absolute differences between the paired measurements are evaluated; the other handles the case in which the error structure is assumed proportional or percent changes between the paired measurements are evaluated. Both tools present an agreement plot along with agreement statistics (estimates and confidence limits).

The agreement statistics include the coverage probability (CP), the total deviation index (TDI), the accuracy and precision coefficients, and their aggregate, the concordance correlation coefficient (CCC). The coverage probability is the proportion of paired observations whose absolute differences or proportional changes fall within the allowable deviation. The total deviation index is the quantile of the absolute differences or proportional changes corresponding to the allowable coverage probability. The precision coefficient is the Pearson correlation coefficient. The accuracy coefficient measures the closeness of the means and variances of the paired observations. The concordance correlation coefficient, the product of the accuracy and precision coefficients, measures the closeness of the observations to the identity (concordance) line.

The third web tool covers the material presented in Chapters 3 and 5 of the book for the case of at least two raters, each evaluating multiple measurements. It handles both the constant and proportional error cases for continuous measurements, and assumes constant error for ordinal and binary measurements. It presents the same agreement statistics.
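Although the talk itself presents no formulas, the statistics defined above have standard sample versions (following Lin's 1989 CCC paper and the book the tools are based on). The sketch below, in Python rather than the tools' R Shiny, illustrates them for the constant-error (absolute-difference) case; the allowable deviation `delta` and target coverage `pi0` are illustrative choices, not values from the abstract:

```python
import numpy as np

def agreement_stats(x, y, delta=1.0, pi0=0.9):
    """Sample agreement statistics for paired continuous measurements.

    `delta` is a hypothetical allowable absolute difference (for CP);
    `pi0` is a hypothetical target coverage probability (for TDI).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = x - y
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()                     # biased (1/n) variances
    sxy = ((x - mx) * (y - my)).mean()              # biased covariance
    precision = sxy / np.sqrt(sx2 * sy2)            # Pearson correlation
    ccc = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)    # concordance correlation
    accuracy = ccc / precision                      # closeness of means/variances
    cp = np.mean(np.abs(d) <= delta)                # coverage probability
    tdi = np.quantile(np.abs(d), pi0)               # total deviation index
    return {"CCC": ccc, "precision": precision, "accuracy": accuracy,
            "CP": cp, "TDI": tdi}
```

For perfectly agreeing data the CCC, precision, accuracy, and CP are all 1 and the TDI is 0; a constant shift between raters leaves precision at 1 but pulls the accuracy coefficient, and hence the CCC, below 1. Confidence limits, the proportional-error (percent-change) case, and the ordinal/binary extensions handled by the web tools are omitted here.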
The fourth web tool covers the material presented in Chapter 6 of the book, again for at least two raters each evaluating multiple measurements. It compares the between-rater deviation relative to the within-rater deviation, or to the deviation within a reference rater, and also compares the within-test-rater deviation relative to the within-reference-rater deviation. Various examples will be used to demonstrate the web tools. No statistical formulas will be presented.