Wednesday, November 9
Wed, Nov 9, 8:00 AM - 5:30 PM
Promenade Lower
Registration
QDET2 Hours
Wed, Nov 9, 9:00 AM - 12:30 PM
Hibiscus A
Short Course 1 - An Introduction to Question Design and Evaluation 101
Short Course
Instructor(s): Jack Fowler, University of Massachusetts Boston
(Beginner/Intermediate)
This course will provide an introduction to the standards for questions to measure objective facts and subjective states, as well as how to evaluate questions to ascertain whether the standards are met. Different methods of evaluation will be reviewed, including standard question appraisals, focus groups, cognitive interviewing, pretests, behavior coding, and split ballot tests.
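One of the evaluation methods named above, the split ballot test, randomly assigns respondents to alternative versions of a question and compares the resulting answer distributions. A minimal sketch of that comparison, using made-up counts for a hypothetical three-category item (purely illustrative, not course material):

```python
# Hypothetical split ballot comparison: do two question wordings produce
# different response distributions? (Counts below are fabricated.)
from scipy.stats import chi2_contingency

counts = [
    [220, 95, 185],  # Form A: Agree / Neutral / Disagree
    [180, 90, 230],  # Form B: Agree / Neutral / Disagree
]

chi2, p_value, dof, _ = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# A small p-value suggests the alternative wordings yield different
# distributions, flagging the question for closer review.
```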
Wed, Nov 9, 9:00 AM - 12:30 PM
Hibiscus B
Short Course 2 - Current Developments in Cognitive Interviewing of Survey Questions
Short Course
Instructor(s): Gordon Willis, National Institutes of Health
(Beginner/Intermediate) Questionnaire developers are increasingly faced with the challenges of testing survey questions in an environment that demands quick turn-around and cost savings, while at the same time accommodating multiple cultural/language groups, survey administration modes and devices, and so on. This short course will address these evolving trends and cover the following themes:
a) The (modern) world of cognitive testing
b) Efficient ways to incorporate cognitive interviewing into pretesting and evaluation
c) Achieving cross-cultural comparability
d) Approaches to the analysis of cognitive interviewing data
Wed, Nov 9, 9:00 AM - 12:30 PM
Jasmine
Short Course 3 - Smartphones, Smart Questionnaires? The Challenges of Delivering Surveys via Mobile Device
Short Course
Instructor(s): Michael Link, Abt SRBI
(Intermediate/Advanced) The smartphone revolution has dramatically changed how people communicate, obtain information, and go about their daily lives, but how has survey administration changed as a result? This course provides an assessment of our current understanding of mobile-based survey administration, some developing best practices, and how other smartphone tools and apps might be used in place of traditional survey questionnaires. In particular, the course will cover the following key themes:
1. Findings and gaps in research on mobile-delivered survey questionnaires
2. Differences in questionnaires delivered on smartphones via text, web, and apps
3. Current state best practices in this area
4. How smartphone tools and applications (such as GPS, scanning, and data collection apps) are changing how we think about data collection via mobile devices
Wed, Nov 9, 2:00 PM - 5:30 PM
Hibiscus A
Short Course 4 - Writing and Pretesting Cross-Cultural Questionnaires
Short Course
Instructor(s): Ana Villar, City University London
(Beginner/Intermediate) This course will focus on the latest developments in questionnaire design and pretesting in cross-cultural surveys. It will start with an overview of existing models applied successfully in ongoing cross-national projects and a discussion of the role of cross-cultural input in the process of designing, pretesting, and evaluating questions. We will then review strategies to plan and manage cross-cultural question design efforts. Using examples of actual questions from cross-cultural surveys, we will then consider the pretesting techniques available to researchers embarking on question design for cross-cultural surveys. Participants are encouraged to bring questions and materials designed for cross-cultural contexts for discussion.
Wed, Nov 9, 2:00 PM - 5:30 PM
Jasmine
Short Course 5 - Quantitative Methods for Testing Questions
Short Course
Instructor(s): Daniel Oberski, Tilburg University
(Intermediate/Advanced) Measurement error is a key aspect of total survey error and can severely bias survey analyses of substantive interest such as means, proportions, correlations, and regression coefficients. To prevent such biases, it is essential to evaluate the extent to which respondents' answers to survey questions are affected by measurement error. This short course presents a broad overview of quantitative approaches to doing so. These include classical psychometric concepts such as test-retest reliability, criterion validity, and internal consistency reliability, as well as modern techniques such as multitrait-multimethod experiments and SQP (the Survey Quality Predictor), quasi-simplex models, correspondence analysis, and latent class analysis. Using existing survey data, we will go over the basic ideas behind these techniques and discuss their relative benefits and drawbacks.
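For orientation on the classical concepts listed above, the simplest of them, test-retest reliability, can be estimated as the correlation between two repeated measurements of the same respondents. A minimal sketch with simulated data (illustrative only, not drawn from the course):

```python
# Simulated test-retest example under a classical true-score model:
# observed answer = latent true score + independent measurement error.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
true_score = rng.normal(size=500)                      # fabricated latent values
wave1 = true_score + rng.normal(scale=0.6, size=500)   # answer at wave 1
wave2 = true_score + rng.normal(scale=0.6, size=500)   # answer at wave 2

r, _ = pearsonr(wave1, wave2)
print(f"test-retest reliability estimate: r = {r:.2f}")
# r approximates the share of observed variance that is true-score
# variance rather than measurement error (about 0.74 in this setup).
```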
Wed, Nov 9, 2:00 PM - 5:30 PM
Hibiscus B
Short Course 6 - Usability Testing for Survey Research: How to and Best Practices
Short Course
Instructor(s): Emily Geisen, RTI International; Jen Romano-Bergstrom, Facebook
(Beginner/Intermediate) Usability testing in survey research allows in-depth evaluation of how respondents and interviewers interact with self-administered questionnaires. For example, a respondent may understand the question and response options, but may be unable to select their answer accurately on a small screen. Although there is a growing body of literature on best practices for web surveys and mobile devices, not all design guidelines work equally well for all surveys. In addition, new capabilities of computerized surveys are constantly emerging. It is critical for researchers to evaluate, test, and modify computerized surveys as part of the survey pretesting process. Like other pretesting methods, the primary goal of usability testing of surveys is to improve data quality and reduce respondent burden.
In this course, we will:
a) Describe what usability testing is and why it is needed in survey research
b) Discuss how to apply usability testing to survey research by building on the survey literature and best practices
c) Describe the basic methods for conducting usability testing, such as developing usability testing scenarios and tasks
d) Provide real-life examples of applying these methods to surveys
e) Discuss how to incorporate iterative usability testing into the survey pretesting process in a cost-effective and timely manner