Abstract:
|
Inevitably, surveys are subject to response errors. These errors may be handled in one of two ways: through either a selective or an automatic editing process. Selective editing detects unlikely, influential values. Once these values have been identified, respondents are contacted and the values may be replaced. This method produces accurate data but can be costly and time consuming. Automatic editing, on the other hand, uses a mathematical model to identify and replace possible response errors. In this case, editing and imputation inaccuracies are typically not accounted for at the final stage of estimation; thus, the actual bias and variance of the final estimated total may not be accurately reflected in the survey’s official data quality index. This study, through simulation, investigates the impact of selective versus automatic editing methods on the quality of estimated totals.
|