
Abstract Details

Activity Number: 416 - Nonresponse Errors and Fixes
Type: Contributed
Date/Time: Tuesday, July 30, 2019, 2:00 PM to 3:50 PM
Sponsor: Survey Research Methods Section
Abstract #304175
Title: A Comparison of Selective Versus Automatic Editing for Estimating Totals
Author(s): Chin-Fang Weng* and Joanna Fane Lineback
Companies: U.S. Census Bureau and U.S. Census Bureau
Keywords: response error; selective editing; automatic editing; data quality; imputation
Abstract:

Inevitably, surveys are subject to response errors. Survey organizations may handle these errors in one of two ways: through either a selective or an automatic editing process. Selective editing flags implausible, influential values. Once these values have been identified, respondents are recontacted and the values may be replaced. This method produces accurate data but can be costly and time-consuming. Automatic editing, on the other hand, uses a mathematical model to identify and replace possible response errors. In this case, editing and imputation inaccuracies are typically not accounted for at the final stage of estimation; thus, the actual bias and variance of the final estimated total may not be accurately reflected in the survey's official data quality index. This study, through simulation, investigates the impact of selective versus automatic editing on the quality of estimated totals.
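
To make the contrast concrete, here is a minimal simulation sketch of the two approaches. It assumes a weighted local score of the form weight * |reported - anticipated| for selective editing (a common choice in the selective-editing literature) and a simple ratio edit with imputation for automatic editing; these choices, along with all variable names, parameters, and the data-generating setup, are illustrative assumptions and not the authors' actual procedures.

    import numpy as np

    rng = np.random.default_rng(12345)

    # Hypothetical establishment survey: true values, anticipated values
    # (e.g., prior-period reports), and current reports with gross errors.
    n = 200
    true_vals = rng.lognormal(mean=4.0, sigma=0.5, size=n)
    anticipated = true_vals * rng.normal(1.0, 0.05, size=n)  # near the truth
    reported = true_vals.copy()
    reported[:5] *= 1000.0          # inject unit errors (thousands vs. units)
    weights = rng.uniform(1.0, 10.0, size=n)                 # sampling weights

    def selective_edit(reported, anticipated, weights, truth, budget=10):
        """Recontact the `budget` records with the largest local scores.

        Score: weight * |reported - anticipated|; recontact is simulated
        as recovering the true value for the flagged records.
        """
        score = weights * np.abs(reported - anticipated)
        flagged = np.argsort(score)[::-1][:budget]
        edited = reported.copy()
        edited[flagged] = truth[flagged]  # successful respondent follow-up
        return edited

    def automatic_edit(reported, anticipated, low=0.5, high=2.0):
        """Ratio edit: impute where reported/anticipated leaves [low, high]."""
        ratio = reported / anticipated
        fails = (ratio < low) | (ratio > high)
        edited = reported.copy()
        edited[fails] = anticipated[fails]  # model-based imputation
        return edited

    true_total = np.sum(weights * true_vals)
    sel = selective_edit(reported, anticipated, weights, true_vals)
    auto = automatic_edit(reported, anticipated)
    for name, edited in [("unedited", reported), ("selective", sel),
                         ("automatic", auto)]:
        est = np.sum(weights * edited)
        print(f"{name:9s} total: {est:12.0f}  "
              f"(relative bias {est / true_total - 1:+.2%})")

In this toy setup the selective edit recovers the truth for the most influential errors, while the automatic edit removes the gross errors but leaves a residual imputation error in the estimated total, mirroring the trade-off the abstract describes.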


Authors who are presenting talks have a * after their name.
