
JSM 2003 Abstract #302311
Activity Number: 299
Type: Contributed
Date/Time: Tuesday, August 5, 2003, 2:00 PM to 3:50 PM
Sponsor: Biometrics Section
Title: Agreement between Two Ratings with Different Ordinal Scales
Author(s): Sundar Natarajan*+, Stuart Lipsitz, and Neil S. Klar
Companies: New York University, Medical University of South Carolina, and Cancer Care Ontario
Address: 423 East 23rd St., New York, NY 10010-5013
Keywords: Kappa; cross-validation; 2x2 tables
Abstract:

Agreement studies, in which different observers rate the same subjects on an ordinal scale, provide important information. The weighted kappa coefficient is a popular measure of agreement for ordinal ratings. In some studies, however, the raters use scales with different numbers of categories. For example, a patient quality-of-life questionnaire may ask "How do you feel today?" with possible answers ranging from 1 (worst) to 7 (best). At the same visit, the doctor reports his view of the patient's health status as very poor, poor, fair, good, or very good. The weighted kappa coefficient is not applicable here because the two scales have different numbers of categories. We will discuss kappa coefficients to measure agreement when one rating has R categories and the other has S. Dichotomizing the two ratings at all possible cutpoints produces R(R-1)S(S-1)/4 possible 2 x 2 tables. For each of these 2 x 2 tables, we will estimate the kappa coefficient for dichotomous ratings. The largest estimated kappa coefficient identifies the cutpoints for the two ratings at which agreement is best. Cross-validation will be used to evaluate the method.


  • The address information is for the authors who have a + after their name.
  • Authors who are presenting talks have a * after their name.

