Aggregating Survey Questions of Hierarchical Topics: Assessing the Trade-Off Between Accuracy and Burden Through Cognitive Interviewing (303367)
Brandon Kopp, Bureau of Labor Statistics
Jennifer Crafts, Westat
*Rachel Tesler, Westat
Laura Erhard, Bureau of Labor Statistics
Erica Yu, Bureau of Labor Statistics
Keywords: cognitive testing, question pretesting, respondent burden, interviewer burden
The Bureau of Labor Statistics (BLS) is redesigning the Consumer Expenditure Survey (CE), which provides data on the buying habits of America's consumers. One of the CE components, the in-person recall interview, requires respondents to report their household's expenditures for 38 high-level expenditure categories for the preceding three-month reference period. These high-level categories (e.g., furniture) are currently broken down into over 100 questions covering more specific items (e.g., sofas). Asking a "global" expenditure question (e.g., the cost of living room furniture or the cost of all household furniture) requires less time and is less burdensome for both the interviewer and the respondent than asking a longer series of questions about the cost of individual items within a category. Global questions can increase measurement error, however, if respondents do not understand what should or should not be included in the category, or if the question wording is too vague to prompt their memory.
A qualitative study testing 116 global questions covering all 38 expenditure categories was completed in 2015. The main objective was to identify the level of aggregation and the question wording needed to capture accurate expenditure data. Westat conducted a total of 85 cognitive interviews in nine iterative rounds to test and refine the questions. Several retrospective probing techniques were used to assess (1) respondents' comprehension of the questions' aggregation level and (2) the accuracy of their reporting (i.e., whether reported items were both applicable to the question and correct for the reference period). This presentation will cover the study's methods and results. The focus will be on lessons learned in balancing specificity against generality in question wording, with the goal of achieving accurate expenditure reporting while minimizing the number of questions.