American Association for Public Opinion Research
The leading association of public opinion and survey research professionals

Short Courses

The 2020 Conference will include eight short courses to enhance your learning experience. These in-depth, half-day courses are taught by well-known experts in the survey research field and cover topics that affect our ever-changing industry. The 2020 Short Courses include:

Course 1: Quick Lessons for Communicating and Visualizing Quantitative Information
Course 2: Text Messaging for Conducting Survey Interviews
Course 3: Cognitive Interviewing and Psychometrics for Improving Validity of Survey Questions
Course 4: Employing Text Analytics for Survey Data
Course 5: Biosocial Data Collection and Analysis
Course 6: Doing Reproducible Research
Course 7: A Practical Introduction to Using Voter Registration Databases for Survey Research: From Sampling to Hybrid Estimation 
Course 8: Essential Tools for Working in R for Public Opinion and Survey Researchers
Short Course Details
Course 1
Title: Quick Lessons for Communicating and Visualizing Quantitative Information  
Date: Wednesday, May 13, 2:30 p.m. – 6:00 p.m.
Communicating quantitative research is hard: somehow we need to quickly fit a mass of complex, abstract information into other people’s brains using just words, colors, and shapes, all while keeping our audience awake and interested. In fact, it’s rather remarkable that we ever manage to achieve this at all, and not terribly surprising that many of our data communication attempts are destined just to sow more confusion or frustration.

But it can be better. Happily, we can borrow principles from other fields (cognitive psychology, psychophysics of perception, science communication, and linguistics, for example) to take a very human-centered approach to quantitative communication. We can use tips and strategies to make our communications experience smoother and our products more understandable and engaging. In this hands-on workshop we will reverse-engineer examples of quantitative communication to see how they work (or why they don’t), practice applying those principles in real situations, and learn how to troubleshoot and improve our own written and graphical communications. This is a course on communication, not grammar; non-native English users and those with recurring nightmares from freshman English are welcome.

Regina Nuzzo, Ph.D., is the Senior Advisor for Statistics Communication at the American Statistical Association. She has a doctorate in statistics from Stanford University and graduate science writing training from the University of California Santa Cruz. Her writing has appeared in the Los Angeles Times, New York Times, Scientific American, Science News, New Scientist, and Nature, among others.

Course 2
Title: Text Messaging for Conducting Survey Interviews
Date: Wednesday, May 13, 2:30 p.m. – 6:00 p.m. 
Course Overview:
Text messaging is an emerging option for survey researchers. This short course presents recent findings and emerging practices on inviting participants to complete a questionnaire (via text message or another mode) and on asking survey questions and collecting answers via text message. Text messaging has distinctive qualities that set it apart from other survey modes and that offer particular advantages for respondents and researchers, but also new challenges. The short course first focuses on experimental evidence on data quality and the nature of the interaction in text messaging interviews, as well as on the efficiency of texting: the number of attempts required to contact sample members, the amount of time required to complete the sample, the possibility of conducting multiple text interviews simultaneously, and the benefits of automated vs. human-administered texting. The course then focuses on practical aspects of implementing text messaging in the survey process, including designing for respondents whose mobile phones are not smartphones or whose network connections are not ideal, whether to allow free-text responses or only single-character responses, and how many questions can realistically be asked via text message. Finally, we discuss regulation and privacy concerns (e.g., compliance with the GDPR and the US Telephone Consumer Protection Act [TCPA]).

Frederick Conrad, Andrew Hupp, and Michael Schober
have conducted joint research on new survey modes such as text message interviews and video-mediated interviews.

Conrad is a Research Professor at the University of Michigan’s Institute for Social Research, where he directs the Program in Survey Methodology. His research generally concerns the reduction of survey measurement error.

Hupp is a Survey Specialist Senior in the Survey Research Center at the University of Michigan. His research interests include designing and implementing systems for mixed-mode data collection, incorporating newer modes, and understanding the impact these modes have on data quality.

Schober is Professor of Psychology and Vice Provost for Research at The New School in New York City. His survey methodology research examines interviewer-respondent interaction, respondent comprehension, and how existing communication modes not yet widely used for survey data collection might affect data quality.

Course 3
Title: Cognitive Interviewing and Psychometrics for Improving Validity of Survey Questions
Date: Wednesday, May 13, 2:30 p.m. – 6:00 p.m. 
Course Overview:
There is wide consensus on how elusive validating survey questions can be for national and international survey projects. New approaches to this difficult problem advocate integrating qualitative methods such as Cognitive Interviewing (CI) with quantitative results such as those provided by psychometrics. The aim of the short course is to present a practical, comprehensive approach to integrating CI and psychometrics in mixed-methods pretesting of survey questions. Course attendees will learn how to plan a mixed-methods CI and psychometric study: designs, materials (interview protocols, templates for analyses, software, etc.). The course will also cover how to integrate and report qualitative findings from CI with quantitative results obtained by psychometrics. Practical examples of mixed-methods differential item functioning (DIF) studies will be analyzed using databases from national and international surveys and findings of CI pretesting studies. Finally, the course will discuss the general structure for building validity arguments for the intended interpretations of survey questions, how to communicate results to survey managers, clients, and journal readers, and the implications of mixed-methods pretesting results for survey errors.

Jose-Luis Padilla
is a Professor in the Department of Methodology of Behavioral Sciences at the University of Granada (Spain). His current research focuses on cognitive interviewing, web probing, psychometrics, validity, and mixed methods. He has co-authored articles and book chapters on cognitive interviewing and psychometrics, and currently serves as co-editor of Methodology, the official journal of the European Association of Methodology. The Spanish statistical office has commissioned him to conduct survey questionnaire adaptations and pretesting. He is also a member of the QUEST group and served on the organizing and publication committees of the QDET2 conference.

Course 4
Title: Employing Text Analytics for Survey Data
Date: Thursday, May 14,  8:00 a.m. – 11:30 a.m. 
Course Overview:
In this short course, participants will learn how to produce informative analyses from the unstructured text of survey responses. We begin with a primer on the theory of content analysis, and how computers scale content analysis using the techniques of natural language processing. Then, we review common tools for the analysis of text data, including bag-of-words analysis, n-grams, readability, “keyness” or likelihood-ratio statistics, word clouds, topic modeling, and how to relate these text-based derivatives to other survey quantities of interest through regression. Finally, we hold a “hands-on” colloquium where attendees have the opportunity to apply these techniques to answer questions of substantive interest in politics: we analyze responses from a nationally representative sample of voters polled in the lead-up to the 2016 election, which has been used in the past to study the phenomenon of affective polarization. The course will be taught in R, at a beginner-to-moderate level.
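To give a feel for the bag-of-words and n-gram techniques named in the overview: both reduce open-ended responses to countable units. The course itself is taught in R; the sketch below is a purely illustrative Python version, and the toy responses are hypothetical, not course data.

```python
from collections import Counter
import re

def tokenize(text):
    """Lowercase a response and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def ngrams(tokens, n):
    """Return the n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Toy open-ended survey responses (made up for illustration)
responses = [
    "The economy is my top concern",
    "Health care costs worry me the most",
    "My top concern is health care",
]

tokens = [tokenize(r) for r in responses]

# Bag-of-words: word frequencies pooled across all responses
bag = Counter(w for toks in tokens for w in toks)

# Bigrams: adjacent word pairs, often more informative than single words
bigrams = Counter(b for toks in tokens for b in ngrams(toks, 2))

print(bag.most_common(3))
print(bigrams[("health", "care")])
```

Counts like these are the "text-based derivatives" that can then enter a regression alongside other survey variables, for example as indicator or frequency covariates.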

Joe Sutherland
is an expert in the application of text analytics to the social sciences and has used his experience to lead several groundbreaking data science initiatives for a diverse clientele of Fortune 500 clients, including Coca-Cola, GlaxoSmithKline, UPMC, JPMorgan Chase, Goldman Sachs, and Marriott. His career has spanned numerous technical and operational roles at venues including The White House, Columbia, and Princeton. His academic research is published in top peer-reviewed outlets and received a citation from the National Science Foundation in 2017. Presently, he is a Research Fellow at Johns Hopkins.

Course 5
Title: Biosocial Data Collection and Analysis
Date: Thursday, May 14,  8:00 a.m. – 11:30 a.m.
Course Overview:
Over the last decade, there has been a rapid increase in the collection and availability of biological data gathered as part of larger investigations into biosocial influences on health and behavior. However, the vast majority of the biological data collected to date come from convenience samples in clinics and hospitals. Further, as more population-based studies have started collecting biological samples, protocols for moving collections from the lab to the field have not been examined for their effects on potential bias. Finally, several statistical techniques common in the social sciences (e.g., statistical weights, multiple imputation) are rarely used in biosocial work.
The purpose of this course is to familiarize survey methodologists with the collection, availability, and analysis of biological data in health and social studies. Hands-on experience with collection techniques, paired with lectures, will provide an introductory knowledge of the field. A key goal is to spur insight into, and possible examination of, the total survey error surrounding biological data collection and analysis. Discussion of the limited survey methodological work in this area will be integrated into an overview of the entire collection-to-analysis process, and existing gaps that survey methodologists can address will be highlighted.

Colter Mitchell
is a Research Assistant Professor in the Population, Neurodevelopment, and Genetics program at the Survey Research Center, Institute for Social Research, University of Michigan; co-Director of the ISR Biospecimen Lab; co-director of the Summer Course Genomics for Social Sciences; and Associate Director of the Biomethods Collaborative. He is the PI of the biological data collections of the Fragile Families and Child Wellbeing Study, and works or consults on biological data collection in several other large population-based studies. His methodological work emphasizes data collection practices in child and under-represented minority populations and improving sociogenomic data analytic pipelines.

Jessica Faul is an Associate Research Scientist in the Survey Research Center, Institute for Social Research, University of Michigan, co-Director of the ISR Biospecimen lab, co-director of the Summer Course Genomics for Social Sciences, and a Co-PI of the Health and Retirement Study. She is the Co-PI responsible for all HRS biological data collection. Dr. Faul consults on all other HRS-sister study biospecimen collection and analysis, in addition to her work consulting with scores of studies through the ISR Biospecimen lab. A major focus of her methodological research involves experimental examination of field-based biospecimen collection in aging populations.


Course 6
Title: Doing Reproducible Research
Date: Thursday, May 14,  8:00 a.m. – 11:30 a.m.
Course Overview:
Recent years have seen an increase in the amount and complexity of data available in the social sciences. At the same time, the social sciences are facing a reproducibility crisis, as previous findings often fail to replicate. Both of these trends highlight the need for improving reproducibility and collaboration practices. This is especially important as reproducible research practices are rarely covered in traditional academic training.
In this course, we will cover the main concepts used in reproducible research as well as best practices in the field. After a general introduction, we will cover some of the tools researchers can use in this process. More precisely, you will learn how GitHub and RStudio projects can facilitate reproducibility in the popular R software. Additionally, you will get hands-on experience creating reproducible documents using knitr. Lastly, you will learn how all these tools can be used together to create a reproducible research workflow.

Alexandru Cernat
is an Assistant Professor in Social Statistics at the University of Manchester and received his PhD in survey methodology from the University of Essex. His main research areas are the estimation of measurement error and longitudinal data collection. He has also published on a number of other topics in survey methodology, such as the design of mixed-mode surveys, the collection of new forms of data, interviewer effects, and web survey design. You can read more about him at www.alexcernat.com.

Course 7
Title: A Practical Introduction to Using Voter Registration Databases for Survey Research: From Sampling to Hybrid Estimation 
Date: Sunday, May 17, 9:30 a.m. – 1:00 p.m. 
Course Overview:
Advances in technology, computation, and the availability of administrative data have revolutionized US elections. Voter registration databases, which contain data on party affiliation, vote history, and race (among other fields) alongside contact information, have allowed for innovations in voter contact as well as opinion research. Registration-based frames have become increasingly popular in election polling because they facilitate the analysis of non-response, likely-voter modeling, and more. In fact, an increasing number of surveys conducted in 2018 in sub-national geographies such as congressional districts were sourced from voter files. Using data from a commercial voter file vendor, we will give students a thorough introduction to polling with voter file data, drawing on rigorous research conducted over the years by the presenters and by other academics and practitioners, along with practical recommendations and experience on how to use modern computing resources to work with these data.

Masahiko Aida
is a Principal Survey Scientist at Civis Analytics. He has over a decade of experience as an innovator in political polling and voter and customer targeting, overseeing survey research methodology and new developments. He has enabled clients to use survey research with voter lists in US campaigns at many levels, as well as overseas.

Jonathan Robinson is a Lead Research Scientist at Catalist, where he specializes in using the tools of computational social science to help solve practical problems for progressive political organizations, which include the applications of voter files for survey research and other aspects of strategic decision making. 


Course 8
Title: Essential Tools for Working in R for Public Opinion and Survey Researchers 
Date: Sunday, May 17, 9:30 a.m. – 1:00 p.m. 
Course Overview:
This short course will provide an up-to-date overview of essential tools for working in R for public opinion and survey researchers. The target audience includes AAPOR conference attendees with R experience as well as attendees who have never used R. The short course will be presented in a very hands-on fashion, and registered participants will receive electronic instructions for installing and starting the RStudio software on their personal laptops prior to the short course. Topics covered include using the tidyverse for data management and cleaning, using R Markdown to document code and results for collaborators, computing and using survey weights, survey data analysis using the survey package, and state-of-the-art tools for data visualization. Participants will also be provided with electronic versions of annotated, working code and data sets prior to the short course, enabling them to follow along without making errors typing code during the short course.

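As a taste of the survey-weighting topic mentioned in the overview: design-based tools such as R's survey package estimate a population mean as a weighted (Hájek) ratio, sum(w·y) / sum(w). The snippet below is an illustrative sketch only, written in Python rather than the R used in the course, with made-up values and weights.

```python
def weighted_mean(values, weights):
    """Hajek estimator of a population mean: sum(w*y) / sum(w)."""
    if len(values) != len(weights):
        raise ValueError("values and weights must be the same length")
    total = sum(w * y for y, w in zip(values, weights))
    return total / sum(weights)

# Hypothetical respondents: approval indicator (1 = approve) with base
# weights that up-weight members of an under-sampled group
approve = [1, 0, 1, 1, 0]
weights = [1.0, 1.0, 2.0, 0.5, 1.5]

unweighted = sum(approve) / len(approve)    # simple mean: 0.60
weighted = weighted_mean(approve, weights)  # shifts once weights vary

print(unweighted, weighted)
```

When every weight is equal the two estimates coincide; unequal weights move the estimate toward the responses of up-weighted respondents, which is exactly why weighting matters for survey estimation.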
Brady T. West
is a Research Associate Professor in the Survey Research Center at the Institute for Social Research on the University of Michigan-Ann Arbor (U-M) campus. His current research interests include the implications of measurement error in auxiliary variables and survey paradata for survey estimation, survey nonresponse, interviewer effects, and multilevel regression models for clustered and longitudinal data. He has co-authored books comparing statistical software packages in terms of their mixed-effects modeling procedures (Linear Mixed Models: A Practical Guide using Statistical Software, Second Edition, 2014) and on survey data analysis (Applied Survey Data Analysis, Second Edition, 2017).


Sustaining Sponsors

NORC at the University of Chicago
Ipsos Public Affairs, LLC

Platinum Sponsors

Headway in Research

Gold Sponsors

Marketing Systems Group
Abt Associates
The Logit Group

Silver Sponsors

Data Independence
Lucid Holding LLC

Bronze Sponsors

Adapt, Inc.
Reconnaissance Market Research
Social Data Science Center

Conference Supporters

Precision Opinion


StataCorp LLC
ASDE Survey Sampler
Roper Center
Michigan Program in Survey Methodology
Oxford University Press

Notice to Federal Employees

The Annual AAPOR Conference conforms to the OPM definition of a “developmental assignment.” It is intended for educational purposes; over three-quarters of the scheduled time is devoted to planned, organized exchange of information between presenters and audience, thereby qualifying as a training activity under section 4101 of title 5, United States Code. The AAPOR Conference is a collaboration in the scientific community whose objectives are to provide a training opportunity to attendees; teach the latest methodology and approaches to survey research best practices; make each attendee a better survey researcher; and maintain and improve professional survey competency.