Friday, November 11
Pretesting Methods
Fri, Nov 11, 4:00 PM - 5:25 PM
Regency Ballroom-Monroe
Web Probing: Considerations, Uses, and Practices

The Practice of Cognitive Interviewing Through Web Probing (303152)

*Stephanie L Fowler, National Cancer Institute 
Gordon Willis, National Institutes of Health 

Keywords: cognitive interviewing, pretesting, questionnaire evaluation, embedded probe, web probe, thematic coding

As the world of survey research and its related technology evolves, the methods we use to support questionnaire development and evaluation must keep pace. Over the past several decades, questionnaire development, evaluation, and testing have relied largely on cognitive interviewing, usually conducted in a laboratory-type environment by a specially trained interviewer. However, because cognitive interviewing is a dynamic method of data acquisition, recent developments have involved several major shifts in approach. For example, there is an increasing emphasis on larger sample sizes, especially as surveys involve multiple subgroups and face the attendant requirement of achieving cross-cultural comparability. Further, technological developments, especially the establishment of internet-based systems and social-media applications that facilitate access to a wide range of prospective respondents, have opened new methodological strategies for data collection.

In conjunction, these novel challenges and opportunities have led survey researchers to consider fundamentally new approaches to questionnaire pretesting, and in particular the development of web probing as a component in our toolbox of techniques. In essence, web probing is not new, but rather a modernization of the decades-old procedure generally labeled Embedded Probing (Converse and Presser, 1986). Embedded probes are used to collect information concerning the survey response process within a context that is normally thought of as the “field environment.” Embedded probes can be read by a field interviewer, for an interviewer-administered survey, or included within a self-administered survey instrument. For example, Murphy, Edgar, and Keating (2014) evaluated the item “Since the first of May, have you or any other member of your household purchased any swimsuits or warm-up or ski suits?” by then administering the open-ended probe: “What types of items did you think of when you read this question?”
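To make the mechanics concrete, the sketch below shows one way a target item and its embedded probe might be represented in the definition of a self-administered web instrument. This is a minimal illustration only; the dictionary structure and field names are hypothetical and are not drawn from any particular survey platform.

```python
# A minimal, hypothetical representation of a target survey item with
# an embedded open-ended probe, as it might appear in the definition
# of a self-administered web instrument. All field names are
# illustrative assumptions, not a real platform's schema.
survey_items = [
    {
        "id": "CE_APPAREL_1",
        "type": "yes_no",
        "text": ("Since the first of May, have you or any other member "
                 "of your household purchased any swimsuits or warm-up "
                 "or ski suits?"),
    },
    {
        "id": "CE_APPAREL_1_PROBE",
        "type": "open_text",
        # The embedded probe, displayed immediately after the target item.
        "text": "What types of items did you think of when you read this question?",
        "probe_for": "CE_APPAREL_1",
    },
]
```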

Given its requirement of a field-based data collection system, embedded probing has been used infrequently in survey pretesting, as it has proved more practical to conduct small cycles of cognitive interviews early in the questionnaire development process, obviating the need to develop a full system for survey fielding. However, for purposes of field-based question evaluation, as opposed to pretesting, Dorothee Behr and colleagues in Germany (e.g., Behr, Braun, Kaczmirek, & Bandilla, 2014) have been instrumental in resurrecting this procedure (see also Baena & Padilla, 2014). Further, with the advent of the Internet, and especially given means for timely access to respondents through respondent panels as well as online labor marketplaces like Amazon Mechanical Turk (mTurk), the boundary between pretesting and field data collection has increasingly blurred. Platforms such as mTurk, along with web-based question administration systems like Qualtrics, SurveyGizmo, and SurveyMonkey, can be engaged quickly and often very inexpensively, which allows for the incorporation of field-like survey data collection prior to full-scale implementation. As such, several authors (e.g., Fowler, Willis, Moser, Ferrer, & Berrigan, 2015; Murphy et al., 2014) have investigated the utility of incorporating embedded probing, in the form of web probing, into pretests that involve web-survey administration.
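As an illustration of how quickly such a pretest can be fielded, the sketch below posts a web-probe survey to MTurk using boto3, the standard Python interface to the MTurk API. This is a minimal sketch, not the authors' pipeline; the survey URL, reward amount, respondent target, and other parameter values are placeholder assumptions.

```python
# A minimal sketch of fielding a web-probe pretest as an MTurk HIT via
# boto3. All parameter values and the survey URL are hypothetical.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint for testing; remove for live data collection.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# ExternalQuestion points workers at a survey hosted elsewhere
# (e.g., a Qualtrics instrument containing the embedded probes).
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/pretest-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="Answer a short survey about household purchases",
    Description="A 10-minute questionnaire pretest with follow-up questions.",
    Keywords="survey, questionnaire, pretest",
    Reward="1.00",                     # USD, passed as a string
    MaxAssignments=100,                # target number of respondents
    LifetimeInSeconds=7 * 24 * 3600,   # HIT available for one week
    AssignmentDurationInSeconds=3600,  # one hour to complete
    Question=external_question,
)
print("HIT ID:", response["HIT"]["HITId"])
```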

However, several challenges to relying on web probing as a pretesting or evaluation method have been identified. The current study was designed to empirically investigate the following methodological issues concerning web probing:

1) The utility of probes, relative to standard cognitive interviews, using metrics such as gross productivity (i.e., the length of the text string entered in response to a web probe), Grounded Theory-based thematic coding, and comparison of overall themes with the results of standard cognitive testing (see the analysis sketch following this list).

2) The effects of probe placement: As with standard cognitive testing, results may vary depending on whether probes are concurrent or retrospective (Willis, 2005). As such, relying on the metrics described above, we will report the results of an experiment that assigned respondents to either immediate embedded probing (i.e., probes immediately following the targeted question) or fully retrospective probing (i.e., debriefing).

3) Inclusion of web probes within qualitatively different web-based platforms: We will assess the effects on probe productivity and rated usefulness of administering web probes using mTurk, as well as several internet nonprobability panels that incorporate varying approaches to recruitment and respondent contact.

4) Finally, we will synthesize the results of these studies to produce a set of integrated conclusions concerning the practice of web probing. Specifically, we will (a) summarize the strengths, limitations, and best uses of this procedure; (b) recommend specific techniques as a step toward developing best practices; and (c) suggest avenues for further research on web probing techniques.
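As a concrete illustration of the metrics in point 1 and the experimental contrast in point 2, the sketch below computes gross productivity and tallies thematic codes separately by probing condition. It is a minimal sketch under assumed data: the field names, theme labels, and records are hypothetical, and the thematic codes are assumed to have already been assigned by human coders.

```python
from collections import Counter
from statistics import mean

# Hypothetical probe responses; "condition" marks the experimental arm
# (concurrent vs. retrospective placement) and "themes" holds codes
# assigned by coders during Grounded Theory-based thematic analysis.
responses = [
    {"condition": "concurrent", "text": "Swimsuits and ski pants.",
     "themes": ["apparel_scope"]},
    {"condition": "concurrent", "text": "I only thought of swimsuits.",
     "themes": ["apparel_scope", "narrow_interpretation"]},
    {"condition": "retrospective", "text": "Not sure what counts as a warm-up suit.",
     "themes": ["term_confusion"]},
]

def summarize(records):
    """Gross productivity (mean character count) and theme tallies."""
    lengths = [len(r["text"]) for r in records]
    themes = Counter(t for r in records for t in r["themes"])
    return {"n": len(records),
            "mean_chars": mean(lengths) if lengths else 0.0,
            "themes": themes}

# Compare the two probe-placement conditions on the same metrics.
for condition in ("concurrent", "retrospective"):
    subset = [r for r in responses if r["condition"] == condition]
    print(condition, summarize(subset))
```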