American Association for Public Opinion Research

Short Courses

The 2021 Conference will include eight short courses to enhance your learning experience. These in-depth, half-day courses are taught by well-known experts in the survey research field and cover topics that affect our ever-changing industry. The 2021 Short Courses include:

Course 1: Doing Reproducible Research: Best Practices and Practical Tools for the Social Sciences
Course 2: Designing Survey Experiments when Treatment Effects are Heterogeneous
Course 3: Transitioning from Interviewer-Administered Surveys to "Push to Web" with a Focus on Questionnaire Design and Mode Effects
Course 4: Biosocial Data Collection and Analysis 
Course 5: Deepening and Enriching Qualitative Data Collection and Analysis with Creative Methods
Course 6: Identifying and Correcting Errors in Big Data
Course 7: Using (Non-)Probability Sample Surveys for Public Opinion Research
Course 8: Tidy Survey Analysis in R using the srvyr Package
 
Short Course 1

Doing Reproducible Research: Best Practices and Practical Tools for the Social Sciences


Presenter: Alex Cernat
Time: Monday, May 3, 10:00 AM - 1:30 PM Eastern Time


Recent years have seen an increase in the amount and complexity of data available in the social sciences. At the same time, the social sciences are facing a reproducibility crisis as previous findings often fail to replicate. Both of these trends highlight the need for improving reproducibility and collaboration practices. This is especially important as reproducible research practices are rarely covered in traditional academic training.

In this course, we will cover the main concepts used in reproducible research as well as best practices in the field. After a general introduction, we will cover some of the tools that researchers can use to help them in this process. More precisely, you will learn how GitHub and RStudio projects can facilitate reproducibility in the popular R software. Additionally, you will get hands-on experience creating reproducible documents with knitr. Lastly, you will learn how all these tools can be used together to create a reproducible research workflow.

 
Short Course 2

Designing Survey Experiments when Treatment Effects are Heterogeneous


Presenter: Elizabeth Tipton
Time: Monday, May 3, 2:00 PM - 5:30 PM Eastern Time


Survey experiments have the potential to provide treatment effect estimates that are both causal and generalizable to a clearly defined target population. These treatment effects, however, are averages, which can obscure important heterogeneity. That is, it is possible for the average effect to be very, very small and yet for there to exist one or more subgroups for whom the effect is actually quite large. Typical methods for the design of survey experiments, however, focus only on this average, leaving questions of heterogeneity for post-hoc analyses.

But what if, instead, survey experiments anticipated this heterogeneity and were planned to study it? In this short course, I will provide the background necessary to do just this. This will include the generation of potential theoretical mechanisms for heterogeneity, the identification and prioritization of hypotheses regarding this heterogeneity, and the development of study designs that allow these hypotheses to be tested. We will discuss various statistical concerns – including issues of causality with moderators and of statistical power – and examine how studies can be designed to incorporate them. The course will include case studies and group discussion; example analyses will be provided in R, but prior knowledge is not required.

 
Short Course 3

Transitioning from Interviewer-Administered Surveys to "Push to Web" with a Focus on Questionnaire Design and Mode Effects


Presenter: Pam Campanelli
Time: Tuesday, May 4, 10:00 AM - 1:30 PM Eastern Time


Many researchers have been moving away from interviewer-administered surveys, mainly because of cost and, more recently, because of COVID-19 restrictions on face-to-face data collection. This course explores mode differences between interviewer-administered surveys and web surveys, with a focus on questionnaire design. Over 60 key points will be explored under the themes of interviewer requirements and presence; web survey requirements and options; and mode differences between interviewer-administered and web surveys, ranging from obvious aspects such as questionnaire length, fieldwork length, cost, and response rate to less obvious issues such as measurement error due to question type and format, visual design concerns, unsuspected issues with HTML formats, and how software that claims to cater for smartphones may still create problems. The course ends with what "push to web" is and why it is useful. Throughout, the focus is on working toward best practice across modes. This course will be highly interactive and is designed to mirror in-person training (including breakout group activities). It is not a webinar.

 
Short Course 4

Biosocial Data Collection and Analysis


Presenters: Jessica Faul and Colter Mitchell
Time: Tuesday, May 4, 2:00 PM - 5:30 PM Eastern Time


Over the last decade there has been a rapid increase in the collection and availability of biological data gathered as part of larger investigations into the joint effects of social and biological factors on health and behavior. However, the vast majority of the data collected to date come from convenience samples in clinics, labs, and hospitals. Further, as more population-based studies have started collecting biological samples, protocols for moving collection from the lab to the field have not been fully examined. Finally, appropriate statistical techniques more common in the social sciences are rarely used in biosocial work.
The purpose of this course is to familiarize survey methodologists with the collection, availability, and analysis of current biosocial data. Hands-on experience with collection, paired with lectures, will provide introductory knowledge of the field. A key goal is to spur insight into, and possible examination of, the total survey error surrounding biological data collection and analysis. Although some limited survey methodological work has been conducted (and will be addressed), the majority of the time will focus on an overview of the entire collection-to-analysis process and existing gaps.


Short Course 5

Deepening and Enriching Qualitative Data Collection and Analysis with Creative Methods


Presenter: Nicole Brown
Time: Wednesday, May 5, 10:00 AM - 1:30 PM Eastern Time


The aim of this interactive workshop is to explore creativity within research, to identify opportunities to use creative methods within the research process, and to consider analysis in qualitative research with a specific focus on how to treat and deal with data that is not textual but comes out of the use of creative methods (drawings, paintings, pick-a-card, LEGO models, etc.).

We will discuss what creativity is, why we should be creative in research, and how we can introduce creativity and creative methods into our existing paradigms and methods. In breakout groups, delegates experience and actively experiment with "diamond 9" and "pick a card" activities, representations through objects as examples of photo elicitation, and the process of building models and creating representations. These activities and methods have been found particularly helpful in yielding rich qualitative data and thus provide deeper insight into research participants' experiences. Using the real data from the activities, we then explore how analysis of "messy data" can be approached. We consider the principles and process of analysis within qualitative research. We discuss the following questions: Is analysis ever an objective process? Is there a difference between analyzing data from linear texts and visual/sensory data, such as that from building LEGO models, song lists, photographs, videos, and the like? How can visual/sensory data be analyzed?

 
Short Course 6

Identifying and Correcting Errors in Big Data


Presenter: Ashley Amaya
Time: Wednesday, May 5, 2:00 PM - 5:30 PM Eastern Time


While Big Data offers a potentially less expensive, less burdensome, and more timely alternative to survey data for producing a variety of statistics, it is not without error. But the construction of, access to, and overall structure of Big Data make it difficult to know where to start looking for errors, and even more difficult to account or correct for them. In this course, we will walk through the Total Error Framework, an extension of the Total Survey Error framework that can be applied to all types of Big Data and can serve as a template for researchers to investigate error in Big Data. We will walk through several examples of error, map them onto the framework, and provide exercises for participants to come up with their own examples. Finally, we will cover some best practices for determining whether Big Data is a ‘good’ choice for a given research objective, how to correct or avoid errors in Big Data, and how to document the strengths and weaknesses of your Big Data source.

 
Short Course 7

Using (Non-)Probability Sample Surveys for Public Opinion Research


Presenter: Carina Cornesse
Time: Thursday, May 6, 10:00 AM - 1:30 PM Eastern Time


For many decades, public opinion researchers have almost exclusively relied on probability sample surveys when aiming to draw inferences to the general population. However, probability sample surveys are expensive and data collection is often slow. With the rise of the internet in the 21st century, it therefore became popular to conduct fast and cheap surveys via online panels, which usually rely on web-recruited nonprobability samples. In academic circles, this has reignited an old debate about whether and under which conditions data from nonprobability sample surveys can produce accurate population estimates. This debate is ongoing and concerns many areas of public opinion research, most prominently the field of election polling. This short course presents the arguments raised in the debate and summarizes the empirical evidence that has been accumulated so far. The short course thus focuses on providing the necessary context that public opinion researchers and survey practitioners need to participate in the debate. Moreover, the short course provides hands-on advice on the conditions under which nonprobability samples may be suitable to answer a particular research question (i.e., “fit-for-purpose” designs) and when it may be necessary to rely on probability samples instead.

 
Short Course 8

Tidy Survey Analysis in R using the srvyr Package


Presenters: Stephanie Zimmer and Rebecca Powell
Time: Thursday, May 6, 2:00 PM - 5:30 PM Eastern Time


This course will provide an in-depth introduction to survey analysis in R. We will primarily discuss the R packages `srvyr` and `survey`, which allow for analysis of complex survey data using Taylor series linearization or replicate weights for variance estimation. This will be an interactive class with time for hands-on practice using public use files of common survey data. We will introduce how to specify the sampling design and how to do basic analyses, including estimating means, proportions, and totals, as well as t-tests and regressions. This class is appropriate for R users who know the basics of the `tidyverse`, including the `mutate`, `group_by`, and `summarize` functions and the pipe (`%>%`). We will provide code for all examples in the course, including exercises to do on your own and their solutions.
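The workflow the course covers — declaring a sampling design, then estimating with tidyverse-style verbs — can be sketched in a few lines. This is a minimal illustration, assuming the `srvyr` and `survey` packages are installed; it uses the `apistrat` data set (a stratified sample of California schools) that ships with `survey`, whose columns `stype`, `pw`, and `fpc` hold the design variables.

```r
# Minimal srvyr sketch, assuming srvyr and survey are installed.
# apistrat (a stratified sample of California schools) ships with survey.
library(survey)
library(srvyr)

data(api)

# Declare the stratified design: stratum, weight, and finite
# population correction variables are columns of apistrat.
strat_design <- apistrat %>%
  as_survey_design(strata = stype, weights = pw, fpc = fpc)

# Design-based estimates with familiar tidyverse verbs: the mean
# 2000 Academic Performance Index and its standard error, by school type.
strat_design %>%
  group_by(stype) %>%
  summarize(api00_mean = survey_mean(api00))
```

`srvyr` wraps the `survey` package, so the same design object can also be passed to `survey` functions directly when a needed analysis has no tidy equivalent.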



Sustaining Sponsors

Westat

Platinum Sponsors

ReconMR

Gold Sponsors

Abt Associates
D3 Systems, Inc.
Marketing Systems Group

Silver Sponsors

EdChoice
Ipsos Public Affairs, LLC
Ironwood Insights Group, LLC
Voxco

Notice to Federal Employees

The Annual AAPOR Conference conforms to the OPM definition of a “developmental assignment.” It is intended for educational purposes; over three-quarters of the time schedule is devoted to planned, organized exchange of information between presenters and audience, thereby qualifying as a training activity under section 4101 of title 5, United States Code. The AAPOR Conference is a collaboration in the scientific community whose objectives are to provide a training opportunity to attendees; teach the latest methodology and approaches to survey research best practices; make each attendee a better survey researcher; and maintain and improve professional survey competency.