American Association for Public Opinion Research (AAPOR)
The leading association of public opinion and survey research professionals

Survey Research 101

This webinar kit is available for purchase.

Member Price: $220.00
Nonmember Price: $295.00

Student pricing available

Questionnaire Design
About This Course:
This webinar addresses a specific questionnaire format that asks respondents to report whether (or the extent to which) they agree or disagree with a statement. Specifically, the course will review the major problems with using agree-disagree questions and scales composed of agree-disagree questions, discuss the ways that these questions may introduce error into survey measures, consider when it may be appropriate to use agree-disagree questions, and discuss ways to revise agree-disagree questions to use other formats. Examples using existing scales and survey questions will be given.

Learning Objectives:
  • Understand the major problems with agree-disagree questions.
  • Understand when it is appropriate to use agree-disagree questions and when it is more appropriate to consider using a different format.
  • Be able to revise agree-disagree questions into a different format to avoid the problems with these types of questions.


Non-probability Sampling for Finite Population Inference
About This Course:

Although selecting a probability sample has been the standard for decades for making inferences from a sample to a finite population, there are increasing incentives to use data obtained without a defined sampling mechanism, i.e., non-probability samples. In a world of “big data”, large amounts of data are readily available through methods that are faster and need fewer resources relative to most probability-based designs. There are now many ways of collecting data without a pre-specified sampling design—volunteer web panels, tele-voting, expert selection, respondent-driven network sampling, and others—none of which involves a probability sample.
 
Design-based inference, in which population values are estimated through the random sampling procedure specified by the sampler, cannot be used for non-probability samples. One alternative is quasi-randomization, in which pseudo-inclusion probabilities (referred to as propensity scores) are estimated from covariates available for both sample and nonsample units. Another estimation approach is superpopulation modeling, in which analytic variables collected on the sample units are used in a model to predict values for the nonsample units. Variances of estimators can be computed using replication methods or approaches derived from models. The course includes several simulation studies to illustrate the properties of these approaches and discusses the pros and cons of each.
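To make the quasi-randomization idea concrete, here is a minimal sketch (not course material): pseudo-inclusion probabilities are estimated within covariate cells from a non-probability sample and known population counts, and their inverses serve as pseudo-weights. All names and numbers below are invented for illustration.

```python
# Sketch of cell-based quasi-randomization for a non-probability sample.
# Pseudo-inclusion probability per cell = (sample count) / (population count);
# pseudo-weight = its inverse. All figures are hypothetical.
from collections import Counter

# Known population counts by age group (e.g., from a census); N = 1000
population = {"18-34": 400, "35-54": 350, "55+": 250}

# A volunteer (non-probability) sample: (covariate cell, analysis variable y)
sample = [
    ("18-34", 3.1), ("18-34", 2.8), ("18-34", 3.4), ("18-34", 2.9),
    ("35-54", 4.0), ("35-54", 4.2),
    ("55+", 5.1),
]

# Estimated propensity per cell: n_c / N_c
n_by_cell = Counter(cell for cell, _ in sample)
propensity = {c: n_by_cell[c] / population[c] for c in population}

# Pseudo-weight = 1 / estimated propensity; weighted (Hajek-style) mean of y
weights = [1.0 / propensity[c] for c, _ in sample]
y = [v for _, v in sample]
pseudo_weighted_mean = sum(w * v for w, v in zip(weights, y)) / sum(weights)
unweighted_mean = sum(y) / len(y)

print(round(unweighted_mean, 3), round(pseudo_weighted_mean, 3))  # 3.643 3.93
```

In practice the propensities are usually estimated with a logistic model on richer covariates, often against a reference probability sample rather than known cell counts; the cell-based version shown here is the simplest special case.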

Learning Objectives:
  • Understand the different types of non-probability samples currently in use
  • Understand how non-probability samples can be affected by coverage errors, nonresponse, and measurement errors
  • Understand what methods of estimation can be used for non-probability samples and the arguments used to justify them


Design and Weighting for Dual Frame Surveys
About This Course:

The course will describe the reasons for considering dual frame surveys and the conditions under which the design is efficient. Dual frame designs with screening and overlapping units will be defined and the benefits and problems associated with each type of design will be discussed. Approaches to weighting dual frame surveys will be outlined, with an emphasis on the types of information needed to produce the weights and the types of errors (sampling and nonsampling errors) that are typically encountered in practice.
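As a hedged illustration of the compositing idea for overlapping dual frame designs, the sketch below applies a Hartley-style composite weight: a unit in the overlap domain has its base weight multiplied by a mixing factor lam when it was sampled from frame A and by (1 - lam) when sampled from frame B, so overlap units are not double counted. The data and the choice lam = 0.5 are invented for illustration.

```python
# Hypothetical sketch of Hartley-style composite weighting for an
# overlapping dual frame design (e.g., landline frame A and cell frame B).

def composite_weight(frame, domain, w, lam=0.5):
    """Down-weight overlap-domain units by the compositing factor."""
    if domain == "overlap":
        return w * (lam if frame == "A" else (1 - lam))
    return w  # single-frame domains keep their base weight

# (sampling frame, coverage domain, base weight, analysis variable y)
cases = [
    ("A", "a_only", 100.0, 2.0),
    ("A", "overlap", 100.0, 3.0),  # drawn from A, also covered by B
    ("B", "b_only", 80.0, 4.0),
    ("B", "overlap", 80.0, 5.0),   # drawn from B, also covered by A
]

# Dual frame total estimate: sum of composite weight times y
total = sum(composite_weight(f, d, w) * y for f, d, w, y in cases)
print(total)  # 870.0
```

Choosing lam is itself a design decision (it can be fixed, or tuned to minimize variance), which is one of the weighting issues the course addresses.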

Learning Objectives:
  • Describe the advantages and disadvantages of dual frame surveys.
  • Identify principles for designing dual frame surveys.
  • List methods for weighting dual frame surveys.


The Usage of Incentives in Survey Research
About This Course:

In developing this webinar, the instructor draws on his training as a research psychologist and his four decades of thinking about the prudent use of incentives in survey research. This includes his seven years as Nielsen’s chief methodologist, during which he conceptualized, interpreted, and applied the findings from an extensive series of large national factorial experiments on different aspects of survey incentives. He has also continued to experiment with incentives during his recent years as an independent consultant. The webinar will focus on a framework that survey researchers should use to carefully determine how to choose, deploy, and evaluate the incentives they will use in their surveys. Topics will include: (a) Possible goals that incentives are meant to achieve (e.g., improving response rates, improving data quality, reducing nonresponse bias, reducing total survey costs); (b) Which respondents will be chosen to receive incentives; (c) The types of incentives that can be used (e.g., contingent and/or noncontingent; cash and/or noncash; fixed and/or differential); (d) Ethical considerations in choosing the incentives that will be deployed; (e) Cost implications of the chosen incentives; and (f) How to evaluate the impact of the chosen incentives.

Learning Objectives:
  • Decide whether or not to use incentives in a given survey project.
  • Understand what theory suggests about the possible effects of incentives.
  • Identify the incentives likely to be most cost-effective in light of the chosen goals.
  • Decide whether the uses of incentives are ethical.
  • Determine the “true” cost of incentives.
  • Evaluate the effects of survey incentives.
  • Use the Incentive Template, which is provided as part of the webinar, to structure the myriad explicit decisions that should be made about using incentives in a specific survey research project.


Improving Surveys with Paradata: Making Use of Process Information
About This Course:

This is an introductory course that covers paradata in three parts. The first part addresses the role of paradata within the Total Survey Error framework. For each step in the survey production process, the potential of paradata to estimate or reduce the corresponding error will be discussed. The second part will showcase individual surveys in which paradata have been used. Research examples will be discussed, including but not limited to the use of paradata to monitor fieldwork activity, guide intervention decisions, and perform post-hoc analyses. The course will close with a discussion of challenges and current research problems in the use of paradata.

Learning Objectives:
  • Identify paradata along the production process.
  • Summarize challenges when collecting and using paradata.
  • Develop ideas for the use of paradata in the participants' own surveys.


A "How To" Course on AAPOR Response Rate Calculations and Practical Examples from the Field
About This Course:

Recently, the Standard Definitions Committee revised the AAPOR Response Rate Calculator to accommodate many different types of surveys, including dual frame RDD telephone (DFRDD), address-based sample studies, opt-in panels, and others. This follows a number of revisions to the AAPOR Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys report that have occurred over the past three years. This webinar will walk through the particulars of calculating AAPOR response rates for each type of study and provide practical examples of each. We will also review the calculations upon which response rates are built, including not just overall response rates but also cooperation, refusal, and contact rates. We will explore why the Standard Definitions Committee chose to provide a new formula for DFRDD surveys and surveys with required screeners, again reviewing examples from recent public studies. The course will also cover considerations in mapping study-specific outcome dispositions to official AAPOR outcome dispositions and discuss different approaches to estimating “e” for different types of surveys.
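For orientation, two of the most commonly cited rates from the public AAPOR Standard Definitions report can be sketched as below. The disposition counts are invented, and the formulas should be checked against the current edition of the report; RR3 is where the eligibility estimate “e” discussed above enters.

```python
# Sketch of AAPOR response rates RR1 and RR3. Symbols follow the report:
# I = complete interviews, P = partials, R = refusals, NC = non-contacts,
# O = other, UH = unknown if household, UO = unknown other,
# e = estimated proportion of unknown-eligibility cases that are eligible.

def rr1(I, P, R, NC, O, UH, UO):
    """RR1: completes over all potentially eligible cases."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))

def rr3(I, P, R, NC, O, UH, UO, e):
    """RR3: unknown-eligibility cases discounted by e."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

# Hypothetical final dispositions for a small study
counts = dict(I=500, P=50, R=200, NC=100, O=25, UH=100, UO=25)
print(round(rr1(**counts), 4))         # 0.5
print(round(rr3(**counts, e=0.4), 4))  # 0.5405
```

Note how a smaller “e” shrinks the denominator and raises RR3, which is why the report requires that the basis for estimating e be documented.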

Learning Objectives:
  • Understand in detail the AAPOR calculations for response, refusal, contact, and cooperation rates
  • Understand how to use the AAPOR response rate sheets, when to use each one, and how they differ
  • Learn how “e” affects response rates, when to consider different estimates of e, and how to apply the newer AAPOR calculation for DFRDD and screening studies