Short Courses

Seven short courses are offered to enhance your learning experience. These in-depth courses are taught by well-known experts in the survey research field and cover topics that affect our ever-changing industry. Make the most of your time at the conference and plan to participate in the following courses:

Course 1: Going Mobile with Survey Research: Design, Data Collection, Sampling and Recruitment Considerations for Smartphone and Tablet-Based Surveys
Course 2: Cognitive Interviewing
Course 3: Multilevel Modeling with Complex Sample Survey Data
Course 4: The Use of Paradata to Model Response Propensities and Inform Responsive Design Decisions
Course 5: Digital Research: Methodological Best Practices
Course 6: Conducting Better Mixed-Mode Surveys
Course 7: Total Survey Error in Project Management

Course 1

Going Mobile with Survey Research:
Design, Data Collection, Sampling and Recruitment Considerations for Smartphone and Tablet-Based Surveys

Wednesday, May 14, 2:30 p.m. – 6:00 p.m.

Course Overview:
Nearly two in every three new cell phone purchases are smartphones, and current estimates put the overall penetration of these smart devices in the U.S. at just over 60%. While smartphones and tablet computers offer survey researchers unprecedented opportunities for data collection, including the use of multiple modes within a single device, surveys designed for smartphones require special considerations that account for rendering, form factor and the technologies native to these devices. To date, such considerations have been the exception rather than the rule.

This course explores the main frameworks for collecting survey data on mobile devices, including apps, mobile-optimized surveys and app-like surveys, and details current approaches for recruiting respondents to complete surveys in these modes. We also present emerging best practices and considerations for smartphone survey design, discuss how key paradata can be used to optimize smartphone surveys, and describe new forms of paradata that can be collected via the smartphone. We then examine key differences between mobile optimization for smartphones and for tablets, and discuss when mobile-optimization recommendations should be bifurcated to distinguish between the two. Finally, we provide a broad overview of the computer programming frameworks one might use to develop one’s own mobile-optimized surveys.

Instructor:
Trent D. Buskirk, PhD, is Vice President of Statistics and Methodology at Marketing Systems Group. Formerly a Research Director for the Nielsen Company and Associate Professor of Biostatistics in the Department of Biostatistics of the Saint Louis University School of Public Health, Dr. Buskirk has been conducting research relating to the use of cell phones and smartphones in survey research for over 13 years. He was among the first to explore the use of text messaging as pre-alerts/invitations for cell phone surveys and among the first researchers to explore mode effects for smartphone surveys in the U.S. Dr. Buskirk recently served as the Principal Investigator on a grant-funded project for developing and deploying a smartphone survey related to the use of health apps. His research interests also include dual frame weighting for cell phone surveys, as well as mode effects related to cell phone surveys, online and in-person surveys. Dr. Buskirk’s research work has appeared in various journals including the Journal of Official Statistics, Journal of the Royal Statistical Society, Field Methods, Social Science Computer Review and Survey Practice. He is currently the Chair of the American Statistical Association’s Survey Review Committee, as well as a member of AAPOR’s Emerging Technologies Task Force.

Course Objectives:
In this short course, we won’t ask you to silence your smartphones! Rather, we will explore what can happen when you turn them on and start collecting survey data.

Attendees will be able to:
• Understand the three main frameworks for deploying surveys using smartphones/tablets
• Know the ways in which survey data collection may be optimized for each type of mobile device separately
• Identify key strategies and approaches for recruiting respondents
• Follow best practices for designing smartphone and tablet surveys
• Understand the technical requirements and computing architecture available for accelerating the development, testing and deployment of smartphone and tablet surveys

Who Should Attend:
The course is geared to those involved in all facets of survey data collection and research. No prior knowledge is required, though some familiarity with sampling theory and survey design will be helpful.


Course 2

Cognitive Interviewing
Wednesday, May 14, 2:30 p.m. – 6:00 p.m.

Course Overview:
The course is designed as an overview and introduction to cognitive testing, with an emphasis on its application to pretesting survey questionnaires prior to field administration, especially for researchers with limited resources or a need for quick turnaround of results. To this end, the instructor will emphasize basic approaches to cognitive probing techniques: e.g., concurrent versus retrospective, and ‘proactive’ (scripted) versus ‘reactive’ (free-form) methods. The training will be interactive, including a demonstration and an attendee practice exercise. The course will make use of case studies, articles from the survey methods literature, and the results of informal collaborations between practitioners. There will not be a heavy focus on theory or history, but the perspective taken will be interdisciplinary, taking into account contributions to cognitive testing from a number of fields other than cognitive psychology. Depending on time and participant interest, we will address issues in the field that are particularly germane: (a) uses of cognitive testing in an increasingly self-administered, computerized (and mobile IT) world; (b) testing of cross-cultural and multilingual surveys; and (c) analysis procedures that maximize the reliability and validity of results.

Instructor:
Gordon Willis has 25 years of experience in designing questionnaires and developing methods for the testing and evaluation of survey questions at the National Center for Health Statistics, Research Triangle Institute and the National Cancer Institute. He has taught many short courses on cognitive interviewing techniques, and has applied these across a range of surveys, both factual/autobiographical and attitudinal. He has written a textbook on cognitive testing, and is completing another on the analysis of cognitive testing results. His current areas of research and application are extensions to computerized surveys, integration with usability testing, cross-cultural questionnaire testing and development of analysis and reporting procedures.

Course Objectives:
Attendees will learn the basics of cognitive interviewing in a way that should enable them to apply the method to their own questionnaire development projects, based on a multidisciplinary perspective that incorporates a range of elements deriving from cognitive psychology, sociology-anthropology, linguistics and computer usability.

Who Should Attend:
The course is geared to attendees who are new to cognitive interviewing, as well as questionnaire designers with some experience in cognitive or qualitative testing who are interested in a framework that widens the scope beyond the traditional attention to the cognitive aspects of the survey response process.


Course 3

Multilevel Modeling with Complex Sample Survey Data
Wednesday, May 14, 2:30 p.m. – 6:00 p.m.

Course Overview:
Secondary analysts of survey data arising from so-called “complex” samples, which generally feature stratified multi-stage cluster sampling with unequal selection probabilities for different sample units, are often interested in decomposing the variance in survey variables of interest across the different levels of the multi-stage design. A common example is a multi-stage sample design featuring an initial sample of schools, with classrooms randomly sampled within schools and students randomly sampled within classrooms. Researchers may wish to examine the contributions of sampled units at different stages of the sample design (e.g., schools and classrooms) to the total variance in survey variables of interest (e.g., academic performance) in the larger target population from which the sample was drawn, and then attempt to explain that variance with covariates measured on the units at each stage. In the setting of a panel survey, researchers may wish to examine between-unit variance in trends over time within the larger clusters defining a multi-stage sample of the panel units.

Multilevel models provide researchers with flexible statistical tools that enable these types of examinations, but there are important issues that analysts need to be aware of when fitting these models to survey data from complex samples. This course will provide participants with an initial overview of design-based versus model-based approaches to these types of investigations, and then introduce the conceptual background underlying multilevel models for complex samples. The course will then turn to several examples of fitting multilevel models to real complex sample survey data using available software, and discuss the interpretation of analysis results and software options in detail.
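
As a hedged illustration of the simplest case underlying this topic (a sketch, not course material): the snippet below fits a two-level random-intercept model to simulated students nested within schools using Python’s statsmodels, then computes the intraclass correlation, i.e., the share of total variance attributable to schools. It deliberately ignores the complex-sample features (weights, stratification) that the course itself addresses, and all variable names and data are invented.

```python
# Minimal sketch: two-level random-intercept model (students within schools).
# Simulated data; ignores survey weights/stratification, which the course covers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_schools, n_students = 30, 25
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 2, n_schools)[school]   # between-school variation
ses = rng.normal(0, 1, n_schools * n_students)        # student-level covariate
score = 50 + 3 * ses + school_effect + rng.normal(0, 5, n_schools * n_students)

df = pd.DataFrame({"school": school, "ses": ses, "score": score})

# Random intercept for each school; fixed effect for the student-level covariate.
fit = smf.mixedlm("score ~ ses", data=df, groups=df["school"]).fit()
print(fit.summary())

# Intraclass correlation: between-school variance over total variance.
between = fit.cov_re.iloc[0, 0]
within = fit.scale
print("ICC =", between / (between + within))
```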

Instructor:
Brady T. West is a Research Assistant Professor in the Survey Methodology Program, located within the Survey Research Center at the Institute for Social Research on the University of Michigan-Ann Arbor (U-M) campus. He also serves as a Statistical Consultant at the Center for Statistical Consultation and Research (CSCAR) on the U-M campus. He earned his PhD from the Michigan Program in Survey Methodology in 2011. His current research interests include the implications of measurement error in auxiliary variables, survey nonresponse, interviewer variance and multilevel regression models for clustered and longitudinal data. He is the lead author of a book comparing different statistical software packages in terms of their mixed-effects modeling procedures (Linear Mixed Models: A Practical Guide Using Statistical Software, Chapman & Hall/CRC Press, 2007), with a second edition currently being written, and he is a co-author of a second book, Applied Survey Data Analysis (with Steven Heeringa and Pat Berglund), published by Chapman & Hall/CRC in April 2010.

Course Objectives:
Participants will leave this course with a thorough understanding of the important conceptual issues associated with fitting multilevel models to complex sample survey data. Participants will also be introduced to several examples of using available statistical software to fit these models, enabling them to perform these analyses and interpret results on their own.

Who Should Attend:
This intermediate-level course is aimed at public opinion researchers, survey methodologists and survey analysts who frequently perform secondary analyses of complex sample survey data and occasionally apply multilevel models as a part of their research. A background in applied regression analysis and applied sampling for surveys is strongly recommended, but not required.


Course 4

The Use of Paradata to Model Response Propensities and Inform Responsive Design Decisions
Thursday, May 15, 8:00 a.m. – 11:30 a.m.

Course Overview:

During the last twenty years, survey data have increasingly been collected through computer-assisted modes. As a result, a new class of data, called paradata, is now available to survey methodologists. Typical examples are keystroke files, which capture navigation through the questionnaire, and time stamps, which provide information such as the date and time of each call attempt or the length of a question-answer sequence. Other examples are interviewer observations about a sampled household or neighborhood, recordings of vocal properties of the interviewer and respondent, and information about interviewers and interviewing strategies.

Recently, several national statistical institutes (NSIs) as well as private data collectors have started modeling paradata (call record data or field process data) to systematically investigate response propensity and to inform data collection in the context of responsive and adaptive survey designs. Typical questions asked in this context center on the likelihood that someone will be at home, given the history of prior contact attempts available to the data collector.
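
As a hedged illustration of what such a response-propensity model can look like (a sketch on simulated call records, not material from the course): the snippet below fits a logistic regression of contact success on a few typical call-history paradata using Python’s statsmodels. All variable names, coefficients and data are invented.

```python
# Minimal sketch: modeling contact propensity from simulated call-record paradata.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 5000
prior_attempts = rng.integers(0, 8, n)   # attempts already made at this case
evening = rng.integers(0, 2, n)          # call placed after 5 p.m.?
weekend = rng.integers(0, 2, n)          # call placed on a weekend?

# Assumed data-generating story: evening/weekend calls reach more people,
# while cases that have needed many attempts are harder to contact.
lin_pred = -0.5 - 0.25 * prior_attempts + 0.8 * evening + 0.5 * weekend
contact = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

calls = pd.DataFrame({"contact": contact, "prior_attempts": prior_attempts,
                      "evening": evening, "weekend": weekend})

# Logistic regression: the predicted probabilities could feed responsive-design
# decisions, e.g., when to schedule the next call attempt at a case.
fit = smf.logit("contact ~ prior_attempts + evening + weekend", data=calls).fit()
print(fit.summary())
print(fit.predict(calls.head()))         # estimated contact propensities
```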

This course will give an overview of these activities at the NSIs and their use of paradata. Using detailed examples, we will discuss modeling techniques as well as the challenges associated with them.

Instructor:

Frauke Kreuter is Associate Professor in the Joint Program in Survey Methodology at the University of Maryland and Research Director of the statistical methods group at the Institute for Employment Research (IAB) in Germany. Before joining the University of Maryland, she held a postdoctoral position in the UCLA Statistics Department. She has published extensively on the use of paradata for nonresponse adjustment. Her current research interests focus on measurement error and nonresponse in social surveys, and on the use of paradata in managing and improving survey data collection.

Course Objectives:
Participants should be able to understand, and potentially employ, paradata-based modeling techniques for studying response propensity, and to incorporate the results into study designs.

Who Should Attend:
The course is aimed at both producers and users of survey data, at senior and junior levels alike, and equally at researchers from academia, government and the voluntary and private sectors, including market research firms and statistical agencies. It is designed for researchers new to this topic as well as for those who already have experience in this area but are interested in learning more about modeling.


Course 5

Digital Research: Methodological Best Practices
Thursday, May 15, 8:00 a.m. – 11:30 a.m.

Course Overview:
With the majority of the U.S. population online, using digital research methodologies for data collection should be a consideration for all public opinion and survey researchers. This short course will provide an overview of the following digital research methodologies:

• Digital and Cross-Media Effectiveness
• Digital Behavioral Tracking
• Social Media Listening

The course will focus on how digital research methods can support public opinion and survey research through detailed descriptions of digital methodologies and examples for each approach. With current passive measurement and monitoring approaches, there are opportunities to collect richer and possibly more accurate data than what is possible with self-reported methods. The nuances of digital data collection methods will be explained in detail with guidance on how to develop a valid methodology.

In addition to being used to observe and measure experience, digital research methods can also serve as inputs to other research techniques. The course will provide examples of how digital research can be integrated into other techniques, such as surveys and focus groups. For each digital research technique presented, best practices and methodological considerations will be covered so that attendees can evaluate how to build digital methods into their own current projects and work.

Instructor:
Natasha Stevens is Vice President – Senior Practice Leader for GfK Digital Market Intelligence (DMI). Her responsibilities include business development and designing custom research solutions that integrate GfK’s digital research assets across GfK Industry & Practice Areas. Her background encompasses building marketing research models that align traditional and digital techniques, Social Media Listening and Media Analytics/Measurement. She has extensive experience designing methodologies across industries, particularly Automotive, Consumer Products and Technology. Her work addresses the unique requirements of clients in areas such as product development, issues management, consumer insight, marketing strategy and corporate/brand reputation.

Natasha holds a BA in Psychology from the University of Michigan and a master’s degree in Organizational Psychology from Columbia University. In 2013, she received the American Marketing Association’s 4 Under 40 Marketing Research Emerging Leaders award.

Course Objectives:
Course participants will learn an array of digital research techniques, understand the advantages and shortcomings of each, and see how to integrate them with current methods through relevant examples from public opinion and survey research.

Who Should Attend:
The course is ideal for researchers, staff, faculty, community evaluators, agency administrators and advanced students interested in including digital research in their methodological toolkit.


Course 6

Conducting Better Mixed-Mode Surveys
Thursday, May 15, 8:00 a.m. – 11:30 a.m.

Course Overview:
With the growing possibilities for mixed-mode designs, this short course focuses on the joint use of web and mail to improve response rates and data quality. Although mail-only household surveys using address-based sampling provide better household coverage, many surveyors are reluctant to use postal questionnaires. Data quality problems from intensive branching and item-nonresponse are among their concerns. In this workshop, effective methods will be described for using mail contact to push responses to the web, while using a mail response option to obtain answers from households that are unlikely and/or unable to respond over the web. This will include multiple examples of questionnaires and implementation procedures found effective in achieving this goal.

The course covers such topics as the visual layout and design of questionnaires and contacts, minimizing measurement differences across survey modes, the use of incentives, the necessary articulation of sequential contacts, unit and item response rate effects, and nonresponse error. In addition, a significantly updated theoretical framework will be presented for guiding decisions on how to coordinate the use of multiple contact and response modes. The content of this short course relies heavily on recent experimental research carried out by the author and his research team at Washington State University.

Instructor:
Dr. Don A. Dillman is Regents Professor in the Department of Sociology and the Social and Economic Sciences Research Center at Washington State University in Pullman, Washington, where he maintains an active research program on improving how surveyors ask questions and obtain quality answers across survey modes. A former president of the American Association for Public Opinion Research (2001-2002), he previously served at the U.S. Census Bureau as its Senior Survey Methodologist (1991-1995), where he provided leadership for redesigning the data collection procedures used in the 2000 Census. He is the author (with Jolene Smyth and Leah Christian) of the 4th edition of Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method, to be published in late 2014.

Course Objectives:
Attendees should be able to implement best practices for multi-mode research in their own studies for recruitment and cooperation, and for ensuring data quality across modes.

Who Should Attend:
The course is intended for practitioners and researchers with varying degrees of proficiencies in survey design. No prerequisite skills or experience are assumed.


Course 7

Total Survey Error in Project Management
Sunday, May 18, 8:00 a.m. – 11:30 a.m.

Course Overview:
Surveys that use probability sampling are becoming more difficult to manage: response rates are falling and costs are rising. The Total Survey Error (TSE) framework is a tool for understanding and improving survey data quality. The TSE approach summarizes the ways in which a survey estimate may deviate from the corresponding value in the population (sketched schematically below). It highlights the relationships between errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with more total error. For example, efforts to reduce nonresponse error, such as pushing hard to convert reluctant respondents, may lead to poorer data quality on other dimensions.
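
As a purely schematic illustration drawn from the general TSE literature (not from the course materials), the total error of a survey estimate is often summarized as its mean squared error, combining the squared sum of bias contributions from sources such as coverage, nonresponse and measurement with the variance of the estimator:

```latex
% Schematic TSE decomposition (illustrative; the component list is not exhaustive).
\[
\mathrm{MSE}(\hat{\theta})
  = \bigl(B_{\mathrm{coverage}} + B_{\mathrm{nonresponse}} + B_{\mathrm{measurement}} + \cdots\bigr)^{2}
  + \mathrm{Var}(\hat{\theta})
\]
```

Reducing one bias term can inflate another term or the variance, which is precisely the kind of trade-off the course addresses.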

TSE work has focused on the following areas:
• Relationships and connections between different sources of error
• Monitoring and reducing survey errors
• Errors induced by combining or replacing survey data with other data sources
• Trade-offs between error sources in multi-mode surveys

TSE is not just for academics; it is a practical tool for decision making. It encourages explicit trade-offs between types of errors while keeping survey costs in mind. Thus, a trade-off between two error sources may also be a trade-off between cost and quality. Survey managers must strive to reach the balance that best meets the survey’s objectives, and TSE can help.

The course format will be based on case studies drawn from recent experience. Each case will be described in about 500 words, similar to but a bit shorter than the Harvard case studies used in many graduate business programs.

Instructor:
Brad Edwards is a Vice President and Deputy Area Director at Westat in Rockville, Maryland. He is currently working on household and establishment surveys in long-term care, health care costs and tobacco use. He is co-chair (with Stephanie Eckman) of a 2015 international conference called “Total Survey Error: Improving Quality in an Era of Big Data.” He was co-editor of the 2010 Wiley monograph Survey Methods in Multinational, Multiregional, and Multicultural Contexts (Janet Harkness, lead editor), winner of the 2013 AAPOR Book Award, and he is co-editor of a forthcoming Cambridge monograph, Hard-to-Survey Populations (Roger Tourangeau, lead editor).

Course Objectives:
Attendees should be able to apply TSE principles in their own work, specifically:
• Controlling multiple error sources in real time
• Using innovative paradata to control error
• Controlling TSE in panel surveys
• Developing quality profiles: costs, benefits and exemplary approaches
• Trading off quality dimensions: user perspectives on data accuracy versus other quality dimensions (timeliness, comparability, accessibility, etc.)

Who Should Attend:
The course is intended for survey practitioners interested in innovative approaches to reducing survey error. No prior knowledge is assumed, though familiarity with project management and its difficulties would help.


