AAPOR 74th Annual Conference Program

View the Conference Program in downloadable PDF format here.

Short Courses

The 2019 Conference will include eight short courses to enhance your learning experience. These in-depth, half-day courses are taught by well-known experts in the survey research field and cover topics that affect our ever-changing industry. The 2019 Short Courses include:

Course 1: Fundamentals! Learning the Basics of Qualitative Data Analysis 
Course 2: Advances in Address-Based Sampling
Course 3: Augmenting Surveys With Data From Smartphone Sensors and Apps: Best Practices
Course 4: The World-Wide Challenge of Developing Effective Web-Push Survey Methods 
Course 5: Adaptive Survey Design
Course 6: Interactive Survey Data: Creating Captivating Data Visualizations in R Shiny
Course 7: Cognition, Communication, and Self-Report Across Cultures 
Course 8: Let's Learn about (Machine) Learning! An Introduction to Machine Learning for Survey Researchers
 
Short Course Details
Course 1
Title: Fundamentals! Learning the Basics of Qualitative Data Analysis 
Date: Wednesday, May 15, 2:30 p.m. – 6:00 p.m.
Course Overview:
This course is for AAPOR attendees who have little training in qualitative research, but who want to learn how qualitative data can be analyzed and made meaningful to policymakers. The course will combine lecture and interactive formats to cover the following: (1) Why Do Qualitative Research? Participants will be introduced to the assumptions underlying qualitative research, including when qualitative methods make sense, the kinds of data that are collected, and why qualitative research cannot be assessed by quantitative benchmarks; (2) Where Do I Start? Qualitative studies often produce a volume of data that can overwhelm analysts. In this hands-on part of the course, participants will learn how to reduce that volume into something more manageable and will learn about data coding. (3) What Does It Mean? Next, participants will have an opportunity to analyze a small dataset. The instructor will discuss the differences between descriptive (what) and explanatory (how) analysis. (4) Now What? The final part of the course will review strategies for reporting the analytic results, including whether or not to use qualitative analysis software. Participants will learn how to identify meaningful findings and how to use evidence to support their conclusions.

Instructor:
Cynthia Robins, PhD, is a cultural anthropologist who has conducted applied qualitative research at Westat for almost two decades. Over the years, she has trained colleagues in both industry and academia on qualitative data collection techniques (e.g., focus groups, in-depth interviews, participant observation), analysis, and reporting. She also provides training and consultation at Westat on the role of software in qualitative analysis. Dr. Robins’ recent publications include a chapter on qualitative methods in the Springer Link Health Services Research series (Methods volume) and an article in Qualitative Inquiry on using NVivo in large-scale studies.

Course Objectives:

  • Participants will learn when a qualitative study is appropriate and what it can and cannot tell the researchers.
  • Attendees will learn the basics of how to do qualitative analysis, from reducing the data to analyzing its meaning.
  • Participants will learn strategies for reporting their results to their audiences, including the decision of whether or not to use qualitative analysis software.

Who Should Attend:
This course is geared towards early- to mid-career professionals who may be working on qualitative studies, but who have had little or no training in qualitative research. The course will have particular relevance for those who may be tasked with writing up study results, but who have never previously analyzed qualitative data. 

Course 2
Title: Advances in Address-Based Sampling
Date: Wednesday, May 15, 2:30 p.m. – 6:00 p.m. 
Course Overview:
Over the past decade, address-based sampling (ABS) has gained popularity as a collection of methodologies that may be used for constructing household sampling frames or for administering surveys. When ABS first emerged, little was known about the quality of ABS frames or about the best data collection methods to use with ABS. Additionally, over that same time period, gains in Internet penetration and in the use of smartphones have had important effects on the administration of general population surveys. This course covers advances in ABS that have occurred since its early applications. While the focus of the course will be on current best practices, a historical perspective will be given to illuminate the evolution of methods. The advances covered in the course include advances in the construction of sampling frames, in the use of appended data for sampling or for nonresponse adjustment, and in data collection methods.

Instructor:
Dr. Jill DeMatteis is an Associate Director of Westat's Statistical Staff and a senior statistician with 27 years of experience in survey statistics, including sample design and weighting, imputation, and variance estimation. Her particular areas of interest include address-based sampling, coverage and nonresponse error, and sample design and estimation for longitudinal surveys. Dr. DeMatteis was a member of the AAPOR Address-Based Sampling Task Force. She is a Fellow of the American Statistical Association, an Elected Member of the International Statistical Institute, and a Research Associate Professor in the Joint Program in Survey Methodology at the University of Maryland.
Course Objectives:

  • Provide a basic understanding of when ABS should and should not be considered for a particular study.
  • Provide details on ABS sampling frames including an overview of what is known about the quality of the frames.
  • Discuss data collection methods that may be used in conjunction with ABS including a review of studies involving the application of various methods.

Who Should Attend:
This course is aimed at survey researchers, statisticians, and project managers with some experience designing and administering surveys who are considering, using, or planning to use ABS. It assumes no prior experience with ABS but will review current research and best practices, such that it should also prove useful to those with ABS experience. 

Course 3
Title: Augmenting Surveys With Data From Smartphone Sensors and Apps: Best Practices
Date: Wednesday, May 15, 2:30 p.m. – 6:00 p.m. 
Course Overview:
Smartphone sensors (e.g., GPS, camera, accelerometer) and apps allow researchers to collect rich behavioral data, potentially with less measurement error and lower respondent burden than self-reports through surveys. Passive mobile data collection (e.g., location tracking, call logs, browsing history) and respondents performing additional tasks on smartphones (e.g., taking pictures, scanning receipts) can augment or replace self-reports. However, there are multiple challenges to collecting these data: participant selectivity, (non)willingness to provide sensor data or perform additional tasks, ethical issues, privacy concerns, usefulness of these data, and practical issues of in-browser measurement and app development. This course will address these challenges by reviewing state-of-the-art practices of smartphone sensor data collection, ranging from small-scale studies of hard-to-reach populations to large-scale studies that produce official statistics, and will discuss best practices for sensor measurement design. Questions addressed will include:

  •  What research questions can be answered using smartphone sensors and apps?
  •  What are participants’ concerns, and how can they be addressed?
  •  How can researchers ask for consent for sensor measurements and ensure participation?

This course will discuss methods of assessing data quality and touch upon the analysis of passively collected data. The course will neither provide analytic methods for “found” data nor demonstrate how to program smartphone sensor apps.

Instructors:
Bella Struminskaya is an Assistant Professor of Methods and Statistics at Utrecht University. Her research focuses on the design and implementation of online, mixed-mode and smartphone surveys, and passive data collection. She has published on data quality, nonresponse and measurement error, panel conditioning, device effects, and smartphone sensor measurement.
 
Florian Keusch is Assistant Professor at the University of Mannheim and Adjunct Assistant Professor at the University of Maryland. He serves on the Faculty Board of the International Program in Survey and Data Science. His research focuses on, among other topics, errors in (mobile) web surveys and passive mobile data collection.
 
Course Objectives:
By the end of the course, participants will:

  • know what smartphone sensors are available and what they can measure to facilitate and enhance surveys
  • be able to identify potential applications of smartphone sensor measurement for their own data collection
  • be able to anticipate practical issues when implementing smartphone sensor data collection

Who Should Attend:
The course is intended for survey practitioners, researchers, or students who want a practical introduction to smartphone sensor-based research. No prior knowledge of smartphone sensors is required, but a basic understanding of survey practice and survey errors is helpful.

Course 4
Title: The World-Wide Challenge of Developing Effective Web-Push Survey Methods 
Date: Thursday, May 16,  8:00 a.m. – 11:30 a.m. 
Course Overview:
Web-push surveys, which start with a postal request to respond over the Internet and follow up by asking non-respondents to answer by mail, phone, or in person, are rapidly replacing interview-only surveys in countries throughout the world. This trend is encouraged by the development of address-based sampling, which provides improved household coverage. It is further encouraged by research findings showing that mixed-mode designs can produce higher response rates at lower costs. This course will describe the reasons for the increased use of web-push methods, followed by a discussion of the significant new challenges that web-push methods present to survey methodologists, drawing examples from countries throughout the world. The short course will also draw extensively on the presenter's research on improving response to web-push surveys and on developing more effective communications with sampled households and individuals.

Instructor:
Don A. Dillman is Regents Professor of Sociology at Washington State University (Pullman, Washington, U.S.A.), where for the last ten years he has conducted extensive research on the development and use of web-push methodologies. His well-known text, now in its 4th edition (Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method, with Jolene Smyth and Leah Christian, 2014), has provided guidance for designing probability surveys for nearly 40 years. Dillman served as the Senior Survey Methodologist at the U.S. Bureau of the Census from 1991 to 1995 and was the 2001-2002 President of the American Association for Public Opinion Research.

Course Objectives:

  • Describe why effective web-push methods are needed for survey research.
  • Discuss factors that influence whether people will respond over the Internet or by other survey modes.
  • Discuss needed research for developing more effective web-push survey designs.

Who Should Attend:
This course is for professionals interested in learning whether web-push methodologies would be useful in their work, including surveyors in government, non-profit, university, or commercial organizations. There are no prerequisites for this short course.

Course 5
Title: Adaptive Survey Design
Date: Thursday, May 16,  8:00 a.m. – 11:30 a.m.
Course Overview:
Many statistical agencies and survey organizations are looking for design options that control costs and errors. This situation has led to growing interest in adaptive survey designs. Adaptive survey designs rest on the rationale that any population is heterogeneous both in how its members respond and answer surveys and in the costs of recruiting and interviewing them. Different survey design features may be effective for different members of the population. Adaptive survey designs acknowledge these differences by differentiating survey design features across population subgroups based on auxiliary data about the sample, linked from frame data, registry data, or paradata; the resulting strata receive different treatments. This course will focus on practical guidance for building adaptive survey designs, including identification of strata, choice of strategies, and optimization of design features across strata.
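
As a simple illustration of the stratify-and-treat logic described above, the following R sketch (hypothetical variable names and cutoffs, not drawn from the course materials) splits a sample into strata using a modeled response propensity linked from the frame and assigns each stratum a different data collection protocol:

    # Hypothetical sample frame with a modeled response propensity
    # linked from auxiliary (frame or registry) data
    set.seed(1)
    sample_frame <- data.frame(
      id         = 1:1000,
      propensity = runif(1000)
    )

    # Identify strata: low-, mid-, and high-propensity cases
    sample_frame$stratum <- cut(sample_frame$propensity,
                                breaks = c(0, 0.3, 0.7, 1),
                                labels = c("low", "mid", "high"))

    # Differentiate design features (treatments) across strata
    treatments <- c(low  = "prepaid incentive + interviewer follow-up",
                    mid  = "reminder letter",
                    high = "standard protocol")
    sample_frame$treatment <- treatments[as.character(sample_frame$stratum)]

    # Inspect the allocation of treatments across strata
    table(sample_frame$stratum, sample_frame$treatment)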

Instructor:
James Wagner is a Research Associate Professor at the University of Michigan's Survey Research Center. His research is in the area of nonresponse. He is interested in the use of responsive and adaptive survey design techniques aimed at improving the quality of survey data. He is also interested in indicators for the quality of survey data, especially proxy indicators for the risk of nonresponse bias. He has published articles in a variety of journals including Public Opinion Quarterly, Statistics in Medicine, the Journal of Official Statistics, and others.

Course Objectives:

  • Understand what constitutes an Adaptive Survey Design
  • Gain knowledge about the practical steps required to implement an Adaptive Survey Design
  • Learn options for stratification, design features, and quality indicators that serve as the key inputs for an Adaptive Survey Design

Who Should Attend:
This course is aimed at survey professionals and methodologists who are interested in learning about adaptive survey designs. The course presumes some basic knowledge of survey methods. 

Course 6
Title: Interactive Survey Data: Creating Captivating Data Visualizations in R Shiny
Date: Thursday, May 16,  8:00 a.m. – 11:30 a.m.
Course Overview:
R Shiny provides a powerful tool for creating visualizations. Rather than simply creating a static plot, map, or table, Shiny allows for dynamic visualizations that users can interact with. From filtering the data by demographic group to viewing the data under different weighting schemes, Shiny lets users explore survey data in new ways. This course will focus on teaching participants how to create interactive dashboards and visualizations in Shiny. The hands-on course will walk through the process of creating a Shiny app, adding more advanced interactive features, and the options for deploying the app so it can be accessed by people both inside and outside an organization. Participants will be provided with R scripts and sample data to step through the app creation process as we work through the material. At the end of the short course, participants will walk away with the knowledge to create these same types of Shiny apps using their own data.
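
As a taste of what the course covers, the minimal Shiny sketch below (not the course materials; the data, variable names, and weights are all simulated) lets a user filter survey responses by age group and view a weighted bar chart that redraws on each selection:

    library(shiny)
    library(ggplot2)

    # Simulated survey data: age group, response, and a survey weight
    set.seed(2019)
    survey_data <- data.frame(
      age_group = sample(c("18-34", "35-54", "55+"), 500, replace = TRUE),
      response  = sample(c("Approve", "Disapprove"), 500, replace = TRUE),
      weight    = runif(500, 0.5, 2)
    )

    # UI: a dropdown filter and a plot
    ui <- fluidPage(
      titlePanel("Survey Responses by Age Group"),
      selectInput("age", "Age group:",
                  choices = c("All", sort(unique(survey_data$age_group)))),
      plotOutput("barplot")
    )

    # Server: re-filter the data and redraw the plot whenever the input changes
    server <- function(input, output) {
      output$barplot <- renderPlot({
        d <- if (input$age == "All") survey_data
             else survey_data[survey_data$age_group == input$age, ]
        ggplot(d, aes(x = response, weight = weight)) +
          geom_bar() +
          labs(y = "Weighted count")
      })
    }

    shinyApp(ui = ui, server = server)

Deployment options like those discussed in the course range from running such an app locally to hosting it on a Shiny server so others can use it in a browser.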

Instructors:
Jack Chen completed his Ph.D. in survey methodology at the University of Michigan. At SurveyMonkey, he leads the data efforts for the survey research team in developing new sampling strategies for online respondents and creating tools to analyze survey data.
 
Reuben McCreanor completed his master's in Statistics at Duke University. As a skilled developer in R and Python, he is in a unique position to find new techniques for analyzing, visualizing, and interacting with survey data.

Course Objectives:

  • Creating a basic visualization in R Shiny
  • Adding more advanced features to let users interact with the data
  • Deploying your new visualization internally or publishing it online

Who Should Attend:
Participants need to have some knowledge of R or a similar programming language like Python or MATLAB. You don't need to be an advanced user, but should at least be able to load in and clean data, and create a simple plot.

Course 7
Title: Cognition, Communication, and Self-Report Across Cultures 
Date: Sunday, May 19, 9:00 a.m. – 12:30 p.m. 
Course Overview:
Cross-cultural surveys, as well as surveys within culturally diverse countries, pose challenges that go beyond the usual complexities of the question answering process. We identify key issues, illustrate them with select examples, and highlight the underlying processes. First, given culture-specific knowledge and meanings, a given term may elicit different associations in different cultures, even when translation procedures do not identify a problem. Second, given culture-universal themes (e.g., individualism, collectivism, honor, tightness-looseness), cultures differ in their norms of communication and their sensitivity to context. This can result in different interpretations of question sequences, even when each question in isolation is understood similarly. Third, given both these culture-specific and culture-universal issues, cultures differ in what their members need to attend to, resulting in different memories for similar events. Fourth, questions within the survey itself can trigger mental procedures, including more or less analytic vs. heuristic reasoning, that can influence responses, especially when the questions test cognitive aptitude and the answers require effort. We report on basic research in these domains and discuss survey examples and implications.
Instructors:
Daphna Oyserman (Dean’s Professor of Psychology, Communication, and Education) co-directs the Mind & Society Center at the University of Southern California. Her research addresses the situated and context-sensitive nature of culture, identity, and motivation. She starts with the idea that thinking is for doing: people do not think outside of contexts; they think about themselves and their possibilities in terms of what seems relevant in context. With funding from NIH, DOE, and the Templeton Foundation, she focuses on cognitive and behavioral consequences of identity-based motivation and culture-as-situated cognition. Her meta-analytic papers about culture have received Web of Science impact awards.

Norbert Schwarz (Provost Professor of Psychology and Marketing) co-directs the Mind & Society Center at the University of Southern California. His research addresses the situated and context-sensitive nature of human judgment and its implications for the methodology of social science research. He is a member of the American Academy of Arts and Sciences and the German National Academy of Science and has received scientific contribution awards in social, cognitive, and consumer psychology, as well as an AAPOR Book Award for Thinking About Answers, co-authored with Seymour Sudman and Norman Bradburn.

Course Objectives:

  • Learn about basic cultural differences in cognition and communication.
  • Consider their implications for the question answering process.
  • Increase your awareness of potential problems at the questionnaire development stage.

Who Should Attend:
Researchers conducting cross-cultural studies or studies with culturally diverse populations.  

Course 8
Title: Let's Learn about (Machine) Learning! An Introduction to Machine Learning for Survey Researchers 
Date: Sunday, May 19, 9:00 a.m. – 12:30 p.m. 
Course Overview:
Decision trees, random forests, and neural networks? Deep learning, image recognition, and text classification? Have you heard any of these terms mentioned and wondered what they are and how they work? They are all related to machine learning, which is increasingly being applied to support survey research in a wide range of tasks: from constructing sample frames to predicting survey response propensities, from modeling interviewer and respondent behaviors to adaptive survey design, and from automating behavior coding to open-ended response coding. Being conversant in machine learning and understanding some of its basic (and advanced) concepts are valuable skills for today's survey researcher. In this short course, we will provide a gentle introduction to a breadth of topics in machine learning, illustrated through applications in survey research. In particular, we will introduce several different types of machine learning models, provide some intuition into how they work and to what types of problems they could be applied, and demonstrate how they can be employed using popular software tools (including R statistical packages and Python). As a result of this short course, participants will gain or increase their understanding of machine learning and its potential to aid survey research.
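
As a minimal, hypothetical illustration of one such application (predicting response propensities; not drawn from the course materials), the R sketch below fits a random forest to simulated frame data, assuming the randomForest package is installed:

    library(randomForest)

    # Simulated frame data: two auxiliary variables and a 0/1 response outcome
    set.seed(42)
    frame_data <- data.frame(
      urban     = factor(sample(c("yes", "no"), 1000, replace = TRUE)),
      age       = round(runif(1000, 18, 90)),
      responded = factor(rbinom(1000, 1, 0.4))
    )

    # Fit a random forest classifier on the auxiliary variables
    rf <- randomForest(responded ~ urban + age, data = frame_data, ntree = 500)

    # Out-of-bag predicted response propensities for each case
    propensities <- predict(rf, type = "prob")[, "1"]
    head(propensities)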
Instructor:
Adam Eck, PhD, is an Assistant Professor of Computer Science at Oberlin College. Adam's teaching and research interests include intelligent agents, machine learning, data science (including Survey Informatics), and their application to real-world problems. He has previously co-taught both an AAPOR short course and a Portal Panel.

Course Objectives:

  • Introduce ideas and terminology from machine learning (and relate them to concepts in statistics and the social sciences)
  • Highlight the advantages and disadvantages of a range of machine learning methods to help participants know when each might be applicable for their work
  • Demonstrate the use of machine learning in survey research-themed applications employing common, free software tools (R and Python)

Who Should Attend:
Anyone interested in exploring how machine learning works and how it might help with a range of survey research tasks and applications, from sample frame construction to data analysis, is strongly encouraged to attend. No prior knowledge of machine learning is expected, although researchers with some prior background should hopefully still find the course useful!


Sustaining Sponsors

NORC
Westat

Platinum Sponsors

Headway in Research
ICF
RTI
SSRS

Gold Sponsors

ABT Associates
Civis Analytics
confirmit
Langer Research
Marketing Systems Group

Silver Sponsors

DataForce
Dynata
The Logit Group
Oxford University Press
Qualtrics
UCONN
Voxco

Bronze Sponsors

D3
EdChoice
International Experience Canada
Nielsen

Exhibitors

Adapt, Inc.
American Association of Nurse Practitioners
American Institutes for Research (AIR)
ASDE Survey Sampler
Azure
Canadian Viewpoint
College Pulse
Data Independence
Data Recognition Corporation
G3 Translate
I/H/R Research Group
ICPSR
IMPAQ
IPSOS
Ironwood
Market Xcel Data Matrix Pvt. Ltd.
Mathematica
Michigan Program in Survey Methodology
NPC
Opinion Access LLC
Random Dynamic Resources
Reconnaissance Market Research
Roper Center
Scientific Telephone Samples
Scoutsuite
Streamworks
Swift Prepaid Solutions
University of Maryland, Joint Program in Survey Methodology

Notice to Federal Employees

The Annual AAPOR Conference conforms to the OPM definition of a “developmental assignment.” It is intended for educational purposes; over three quarters of the scheduled time is devoted to planned, organized exchange of information between presenters and audience, thereby qualifying as a training activity under section 4101 of title 5, United States Code. The AAPOR Conference is a collaboration in the scientific community whose objectives are to provide a training opportunity to attendees; teach the latest methodology and approaches to survey research best practices; make each attendee a better survey researcher; and maintain and improve professional survey competency.