Designing Questions to Measure Sensitive Behaviors among Disadvantaged Youths in ACASI
*Jennifer Dykema, University of Wisconsin Survey Center
Keywords: disadvantaged youth, questionnaire design, sensitive behaviors, ACASI
Audio computer-assisted self-interviewing (ACASI) usually yields higher reports of sensitive behaviors and allows respondents with low literacy to answer questions privately. However, little research has examined how differences in question wording affect the way respondents interact with ACASI interfaces.
We implemented an experiment in which respondents answered sensitive questions about delinquency, victimization, contact with the criminal justice system, and sexual partners using one of four questioning strategies. The “closed, low frequency” treatment (modeled after Add Health) used the categories 0, 1-2, 3-4, or 5 or more times. The “closed, high frequency” treatment used the categories 0, 1, 2, 3-4, 5-6, 7-8, 9-10, or 11 or more times. In the “open, standard” treatment, respondents entered a value for the total number of times, while in the “open, rate-based” treatment, they indicated number of times per day, week, month, or year. The ACASI application included both audio and text.
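To compare reports across the four treatments, responses must be placed on a common scale. The sketch below is a hypothetical recoding, not the study's own analysis code: closed categories are assigned midpoints (with an arbitrary placeholder for the open-ended top category), and rate-based reports are annualized with standard period multipliers.

```python
# Illustrative harmonization of responses onto a single count scale.
# All values and category labels here are hypothetical recodings for
# demonstration; the study's actual coding scheme may differ.

# Midpoints for the "closed, low frequency" (Add Health style) categories;
# "5 or more" is given an arbitrary placeholder value of 5.
LOW_FREQ_MIDPOINTS = {"0": 0, "1-2": 1.5, "3-4": 3.5, "5 or more": 5}

# Multipliers to annualize "open, rate-based" reports (times per period).
PERIODS_PER_YEAR = {"day": 365, "week": 52, "month": 12, "year": 1}

def annualize_rate(times: float, period: str) -> float:
    """Convert a times-per-period report to an estimated annual count."""
    return times * PERIODS_PER_YEAR[period]

# Example: a report of "3 times per week" becomes 3 * 52 = 156 per year.
print(annualize_rate(3, "week"))
```

A design note: the choice of midpoint and top-category placeholder matters for the closed formats, since any mean-level comparison across treatments is sensitive to how the open-ended category is scored.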
Data come from the Midwest Young Adult Study, a longitudinal CAPI study conducted in the Midwest, in which interviews have been carried out on a near-annual basis since 2002 with eligible youths (those in foster care at age 17). These youths report high levels of engagement in sensitive behaviors and have low literacy levels. Current data are from Wave 5 (2010-2011).
The analysis examines the impact of the questioning strategies on the proportion of respondents reporting each behavior and on the mean level of engagement. We predict that reporting will decrease in the following order: open, rate-based; closed, high frequency; open, standard; and closed, low frequency (Tourangeau & Smith 1996). Although higher reports of threatening behaviors may be interpreted as more accurate, we also explore other methods for evaluating data quality, including internal consistency and interactions among reports, questioning strategy, literacy, response times, and respondent-technology interaction.
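The two outcome summaries described above can be sketched as follows. The data here are toy values invented purely for illustration (not study results), and the treatment labels are shorthand for the four strategies.

```python
# Hypothetical sketch of the planned comparison: the proportion reporting
# any engagement and the mean reported count, by questioning strategy.
# Counts below are fabricated toy values, not actual study data.
from statistics import mean

reports = {
    "closed_low_freq": [0, 1.5, 0, 3.5],   # midpoint-coded closed reports
    "open_rate_based": [0, 52, 12, 4],     # annualized rate-based reports
}

def summarize(counts):
    """Return (proportion reporting any engagement, mean reported count)."""
    prop_any = mean(1 if c > 0 else 0 for c in counts)
    return prop_any, mean(counts)

for treatment, counts in reports.items():
    prop_any, mean_count = summarize(counts)
    print(f"{treatment}: proportion any = {prop_any}, mean = {mean_count}")
```

In practice, the hypothesized ordering of treatments would be tested with significance tests across the four experimental groups rather than read off descriptive summaries alone.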