
Abstract Details

Activity Number: 561
Type: Contributed
Date/Time: Wednesday, August 3, 2016, 10:30 AM to 11:15 AM
Sponsor: Survey Research Methods Section
Abstract #321629
Title: Assessing the Reliability of Conversational Interviewing
Author(s): William Mockovak*
Companies: Bureau of Labor Statistics
Keywords: conversational interviewing; interviewer reliability; interviewer consistency
Abstract:

Although conversational interviewing is not widely used, its advocates argue that it reduces respondent burden and yields higher-quality data when the survey information being requested is complex or highly sensitive, or when the topic elicits stress. However, because it does not rely on a scripted interview, conversational interviewing also raises concerns about its consistency and reliability. This study explored an approach for assessing the reliability of response coding in a conversational interview by asking 86 interviewers to observe and code a video of an interview conducted conversationally. The responses coded by the interviewers were then compared with gold-standard answers. To assess the impact of experience, the video observations were conducted on two occasions: once immediately after initial training and again four months into the data collection period. Results after initial training showed high levels of interviewer consistency, measured as percent agreement with the gold-standard answers, and performance improved significantly after four months of data collection experience. Other reliability measures that correct for chance agreement, such as the intraclass correlation coefficient (ICC) and Cronbach's alpha, were also explored, and their use as possible tools for assessing interviewer consistency is discussed.
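The measures named in the abstract can all be computed from a matrix of interviewer codes. The sketch below (Python with NumPy, not the authors' code) illustrates one way to do so on hypothetical data: percent agreement against a gold-standard key, Cronbach's alpha with interviewers treated as the "items," and a Shrout-Fleiss ICC(2,1) obtained from the usual two-way ANOVA mean squares. The array names and simulated data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_items, n_interviewers = 20, 86

# Hypothetical gold-standard codes (categories 1-4) for each item in the video.
gold = rng.integers(1, 5, size=n_items)

# Simulated interviewer codes: mostly the gold code, occasionally a random one.
slip = rng.random((n_items, n_interviewers)) < 0.2
random_codes = rng.integers(1, 5, size=(n_items, n_interviewers))
codes = np.where(slip, random_codes, gold[:, None])

# 1. Percent agreement with the gold standard, per interviewer and overall.
pct_agree = (codes == gold[:, None]).mean(axis=0)
print("mean percent agreement:", round(float(pct_agree.mean()), 3))

# 2. Cronbach's alpha, treating interviewers as "items" and coded questions as cases.
k = codes.shape[1]
item_vars = codes.var(axis=0, ddof=1)
total_var = codes.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print("Cronbach's alpha:", round(float(alpha), 3))

# 3. ICC(2,1): two-way random effects, absolute agreement, single rater
#    (Shrout & Fleiss), built from the ANOVA mean squares.
n = codes.shape[0]
grand = codes.mean()
ss_rows = k * ((codes.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((codes.mean(axis=0) - grand) ** 2).sum()
ss_total = ((codes - grand) ** 2).sum()
ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
icc_2_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print("ICC(2,1):", round(float(icc_2_1), 3))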


Authors who are presenting talks have a * after their name.

