American Association for Public Opinion Research
The leading association of public opinion and survey research professionals

Mixed Mode Surveys 101

This webinar kit is available for purchase.

Member Price:  $220.00
Nonmember Price:  $295.00

Student pricing available

Why Do a Mixed-Mode Survey?
About This Course:
Mixed-mode designs are likely to produce higher-quality results than single-mode surveys in today’s survey environment. Don Dillman, who has been designing and implementing mixed-mode surveys for nearly 30 years, will discuss the reasons mixed-mode surveys have become the design of choice for many studies. He will explain why the benefits of mixed-mode designs often have less to do with people’s preferences for responding by a particular mode than with the power of multiple contact modes to overcome the coverage and response rate limitations associated with single-mode surveys. In this webinar, which draws heavily on his research, he will discuss the strengths and weaknesses of various mixed-mode designs, emphasizing the evolution toward a “web-push” approach that starts with mail contact. He will discuss the reasons such methods are seeing increased use in countries throughout the world as an alternative to single-mode telephone and in-person surveys. Also discussed is the need for unified question construction across survey modes, which has become especially important as the use of smartphones continues to increase. Participants will gain insight into best practices for creating effective mixed-mode designs.

Learning Objectives:
  • To learn the ways that survey data quality can be improved through the use of mixed-mode survey designs.
  • To understand the individual contributions of contact mode, response mode, and other aspects of mixed-mode data collection to survey data quality.
  • To provide participants with examples of how specific mixed-mode designs, such as web-push methods, can be created and implemented.

Single and Multi-Mode Surveys Using Address-Based Sampling
About This Course:

The course will present an overview of address-based sampling (ABS) for survey design within its historical context.  Emphasis will be given to the typical and specialized challenges encountered in ABS surveys in real-world situations. 

Learning Objectives:
  • Discuss the issues related to the design and implementation of ABS studies, including studies targeting specific areas and populations.

A "How To" Course on AAPOR Response Rate Calculations and Practical Examples from the Field
About This Course:

Recently, the Standard Definitions Committee revised the AAPOR Response Rate Calculator to accommodate many different types of surveys, including dual-frame RDD telephone (DFRDD), address-based sample studies, opt-in panels, and others.  This follows a number of revisions to the AAPOR Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys report that have occurred over the past three years.  This webinar will walk through the particulars of calculating AAPOR response rates for each kind of study and provide practical examples of each.  Furthermore, we will review the calculations upon which response rates are built, including not just overall response but cooperation, refusal, and contact rates.  We will explore why the Standard Definitions Committee chose to provide a new formula for DFRDD surveys and surveys with required screeners, and again review examples from recent public studies.  The course will also cover considerations in mapping study-specific outcome dispositions to official AAPOR outcome dispositions and discuss different approaches to estimating “e” for different types of surveys.

Learning Objectives:
  • To gain a thorough understanding of the AAPOR calculations for response, refusal, contact, and cooperation rates
  • To understand how to use the AAPOR response rate sheets, when to use each one, and how they differ
  • To learn how “e” affects response rates, how and when to consider using different estimates of “e”, and, in particular, the newer AAPOR calculations for DFRDD and screening studies
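To make the role of “e” concrete, here is a minimal sketch of the general form of two AAPOR response rates from the Standard Definitions report: RR1 treats every unknown-eligibility case as eligible, while RR3 discounts those cases by e, the estimated proportion of them that are actually eligible. The disposition counts below are purely illustrative, not drawn from any real study.

```python
def rr1(i, p, r, nc, o, uh, uo):
    """AAPOR Response Rate 1: complete interviews divided by all known-eligible
    cases plus all unknown-eligibility cases (treated as eligible)."""
    return i / ((i + p) + (r + nc + o) + (uh + uo))

def rr3(i, p, r, nc, o, uh, uo, e):
    """AAPOR Response Rate 3: like RR1, but unknown-eligibility cases are
    weighted by e, the estimated proportion of them that are eligible."""
    return i / ((i + p) + (r + nc + o) + e * (uh + uo))

# Illustrative (hypothetical) final disposition counts:
# I=600 completes, P=50 partials, R=200 refusals, NC=150 non-contacts,
# O=25 other, UH=300 unknown if household, UO=100 unknown other
print(round(rr1(600, 50, 200, 150, 25, 300, 100), 3))         # 0.421
print(round(rr3(600, 50, 200, 150, 25, 300, 100, e=0.4), 3))  # 0.506
```

Note that with e = 1, RR3 reduces to RR1, which is why the choice of e matters so much for studies with many unresolved cases.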

Planning and Implementing Responsive Designs
About This Course:

Surveys are frequently designed with a great deal of uncertainty about key parameters. Responsive designs are a strategy for dealing with this uncertainty. These designs identify potential risks related to costs or errors, develop indicators for tracking these risks, and then plan design changes for controlling these costs or errors. These responsive design options are triggered if the indicators cross pre-specified thresholds. This presentation starts from a definition of the basic principles of responsive and adaptive designs and then provides concrete examples of the implementation of these designs. These examples are drawn from a variety of settings, including face-to-face, telephone, and mixed-mode surveys.

Learning Objectives:
  • Describe the basic principles of responsive and adaptive survey designs.
  • Identify situations where responsive designs may be appropriate.
  • List several responsive design options that may be used in a variety of settings.
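The trigger mechanism described above can be sketched in a few lines: track an indicator (here, a running response rate) and fire a pre-planned design option when it crosses its pre-specified threshold. The option names, thresholds, and fieldwork totals below are hypothetical illustrations, not part of any particular study design.

```python
def check_triggers(indicator_value, triggers):
    """Return the names of pre-planned design options whose thresholds
    have been crossed.

    triggers: list of (option_name, threshold) pairs; an option fires
    when the indicator falls below its threshold.
    """
    return [name for name, threshold in triggers if indicator_value < threshold]

# Pre-specified (illustrative) design options and their thresholds:
triggers = [
    ("switch_nonrespondents_to_phone", 0.30),
    ("add_incentive", 0.20),
]

completes, sampled = 240, 1000       # hypothetical fieldwork totals to date
response_rate = completes / sampled  # 0.24
print(check_triggers(response_rate, triggers))  # fires only the 0.30 option
```

The key responsive-design idea this captures is that the options and thresholds are planned before fieldwork begins, so mid-survey changes are principled rather than ad hoc.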

Good Questionnaire Design: Best Practices in the Mobile Era
About This Course:

How long should my scales be? Should they be fully- or end-labeled? With the increasing use of smartphones for online surveys, questionnaire designers are being challenged to measure ideas with greater simplicity while maintaining high validity. In this webinar, we will discuss the nature of measurement and optimization for an array of devices, including number of scale points, semantic labeling, alternatives to traditional grids, and ways to reduce the number of items used in a survey.

Learning Objectives:
  • Identify a proper scale for the measurement task.
  • Evaluate the question fit for mobile devices.
  • Describe item reduction techniques to reduce respondent burden.

The Questionnaire Design Pitfalls of Multiple Modes
About This Course:

The increasing cost of fieldwork and declining survey budgets are pushing survey practitioners to look for cheaper ways of collecting survey data. For example, this could mean encouraging a worthwhile portion of respondents to complete questionnaires by web rather than by more traditional modes such as postal questionnaires, face-to-face, and telephone interviews. However, mixing modes of data collection can reduce data comparability because people may answer questions differently depending on the mode. In this webinar, we will provide a conceptual framework for understanding the causes of these measurement differences and present a typology of questions based on the factors that cause measurement differences by mode. Drawing on our own studies in the UK and on the wider literature, we will highlight some of the questionnaire design pitfalls of using multiple modes and make recommendations for improving the portability of these question types across modes. We will close the webinar with a discussion of the methods that can be applied to assess any remaining measurement differences.

Learning Objectives:
  • Understand the theoretical and practical differences in how respondents react to different modes of data collection.
  • Have greater awareness of specific question attributes that make certain questions less portable across modes.
  • Have greater knowledge and confidence in executing their own mixed-mode questionnaires.