
Abstract Details

Activity Number: 492
Type: Contributed
Date/Time: Wednesday, August 3, 2016, 8:30 AM to 10:20 AM
Sponsor: Survey Research Methods Section
Abstract #319596
Title: Assessing the Reliability of Conversational Interviewing
Author(s): William Mockovak*
Companies: Bureau of Labor Statistics
Keywords: Conversational interviewing; interviewer reliability; interviewer consistency; calibration training; interviewer performance
Abstract:

Although conversational interviewing is not widely used, its advocates argue that it reduces respondent burden and yields higher-quality data when the survey information being requested is complex or highly sensitive, or when the topic elicits stress. However, because it does not rely on a scripted interview, conversational interviewing also raises concerns about consistency and reliability. This study explored an approach for assessing the reliability of response coding in a conversational interview by asking 86 interviewers to observe and code a video of an interview conducted conversationally. The responses coded by the interviewers were then compared to gold-standard answers. To assess the impact of experience, the video observations were conducted on two occasions: once immediately after initial training, and again four months into the data-collection period. Measured as percent agreement with the gold-standard answers, interviewer consistency was high after initial training, and performance improved significantly after four months of data-collection experience. Other reliability measures that correct for chance agreement, such as the intraclass correlation coefficient (ICC) and Cronbach's alpha, were also explored, and their use as possible tools for assessing interviewer consistency is discussed.
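The two simplest measures named in the abstract can be sketched in a few lines. The data below are hypothetical (not the study's actual codes): percent agreement compares each interviewer's codes to a gold standard, and Cronbach's alpha is computed from a raters-by-items matrix using the standard formula; it is one of several chance-corrected alternatives the abstract mentions.

```python
import numpy as np

def percent_agreement(coded, gold):
    """Share of items an interviewer coded identically to the gold standard."""
    coded, gold = np.asarray(coded), np.asarray(gold)
    return float(np.mean(coded == gold))

def cronbach_alpha(ratings):
    """Cronbach's alpha for a raters-x-items matrix of numeric codes:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of rater totals)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                            # number of items
    item_vars = ratings.var(axis=0, ddof=1).sum()   # per-item variance across raters
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of each rater's total
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical example: 4 interviewers coding the same 5 items (1 = yes, 0 = no)
gold = [1, 0, 1, 1, 0]
ratings = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 0, 1, 1, 0],
])
for i, row in enumerate(ratings):
    print(f"interviewer {i}: agreement = {percent_agreement(row, gold):.2f}")
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```

Percent agreement is easy to interpret but takes no account of agreement expected by chance, which is why the study also examined ICC and alpha.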


Authors who are presenting talks have a * after their name.


Copyright © American Statistical Association