American Association for Public Opinion Research

Webinar Details

Survey Coding: Best Practices for Coding Open-Ended Survey Data

Jon Krosnick, Arthur "Skip" Lupia and Matt Berent
Thursday, July 18, 2013

ASA-SRMS members will receive AAPOR member pricing on webinars when registering for live webinars or purchasing recordings of webinars.
If you are an ASA member, click here to purchase.

About This Course:

Since the early days of survey research, survey organizations have asked open-ended questions. In an open-ended question, respondents answer in their own words rather than by choosing an answer from a pre-determined set of responses. After such open-ended text is collected, many survey organizations assign numerical codes to these answers to facilitate statistical inference.
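To make the idea of assigning numerical codes concrete, here is a minimal sketch in Python. The code frame, categories, and keyword lists below are hypothetical illustrations, not the procedure presented in this webinar.

```python
# A hypothetical code frame: category labels mapped to numeric codes.
# (Illustrative only -- not the presenters' coding scheme.)
CODE_FRAME = {
    "economy": 1,   # mentions of jobs, prices, taxes
    "health": 2,    # mentions of health care, insurance
    "other": 99,    # anything the frame does not cover
}

# Hypothetical keyword lists that operationalize each category.
KEYWORDS = {
    "economy": {"jobs", "economy", "taxes", "prices", "unemployment"},
    "health": {"health", "insurance", "medicare", "doctor"},
}

def assign_code(answer: str) -> int:
    """Assign the first matching category's numeric code to an answer."""
    words = set(answer.lower().split())
    for category, vocab in KEYWORDS.items():
        if words & vocab:
            return CODE_FRAME[category]
    return CODE_FRAME["other"]

responses = [
    "I'm worried about jobs and the economy",
    "Health insurance costs too much",
    "The weather has been strange",
]
codes = [assign_code(r) for r in responses]
print(codes)  # [1, 2, 99]
```

Once answers carry numeric codes like these, they can enter cross-tabulations and statistical models just like closed-ended responses.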

Many survey organizations continue to use coding procedures that were established long ago. In recent years, researchers have discovered serious problems with some of these procedures. These problems have caused researchers, the media, and members of the public to draw incorrect conclusions about important aspects of public opinion.

Webinar participants will learn about details of a new coding procedure. The presenters developed this procedure for a long-term survey project whose data are used by thousands of survey researchers around the world. The new procedure builds on decades of research from multiple scientific disciplines. It uses insights on language, grammar, and semantics to produce systematic procedures for assigning a respondent’s words to reliable numerical codes.

Webinar participants will learn how to apply the new procedure to various types of survey answers. They will learn how to manage roadblocks that they may encounter along the way, how to document that they are not repeating past mistakes, and how to evaluate the reliability of their coding efforts. With these skills in hand, webinar participants will be better able to evaluate the quality of existing open-ended data and empowered to improve the quality of any open-ended data that they or their organizations subsequently produce.
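One common way to evaluate coding reliability is to have two coders independently code the same answers and compute a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a standard textbook calculation, offered only as background; it is not necessarily the evaluation method the presenters recommend, and the example code vectors are made up.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed proportion of answers on which the coders agree.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement if each coder assigned codes independently
    # at their own marginal rates.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two independent coders to six answers.
coder_1 = [1, 2, 1, 99, 2, 1]
coder_2 = [1, 2, 2, 99, 2, 1]
print(round(cohens_kappa(coder_1, coder_2), 3))  # 0.739
```

Values near 1 indicate near-perfect agreement beyond chance; values near 0 indicate agreement no better than chance, a warning sign that the code frame or coder instructions need revision.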

Note from the authors about additional content:
Please click here to access the ANES database of all the publicly available information that has been produced about coding (i.e., coded data, code frame and coding reports). 


Learning Objectives:

  • Why and when open-ended questions can be valuable to survey researchers
  • How sub-optimal coding of open-ended answers can cause problems for researchers and for the consumers of research findings
  • An overview of the desirable attributes of an open-ended coding procedure
  • A step-by-step guide to how to do open-ended coding and how to prepare for roadblocks along the way
  • How to evaluate the quality of the coding you have done


About the Instructors:

Jon A. Krosnick is the Frederic O. Glover Professor in Humanities and Social Sciences, Professor of Communication, Professor of Political Science, and (by courtesy) Professor of Psychology at Stanford University, Senior Fellow at the Woods Institute for the Environment at Stanford, and Research Psychologist at the U.S. Census Bureau. His research focuses on questionnaire design and survey research methods. He has taught courses for professionals on survey methods for 25 years and has served as a methodology consultant to government agencies, commercial firms, and academic scholars. He has served as a principal investigator of the American National Election Studies, the nation's preeminent academic research project exploring voter decision-making and political campaign effects. His honors include the Phillip Brickman Memorial Prize, the Pi Sigma Alpha Award, the Erik Erikson Early Career Award for Excellence and Creativity, two fellowships at the Center for Advanced Study in the Behavioral Sciences, and election as a fellow of the American Academy of Arts and Sciences and of the American Association for the Advancement of Science.


Arthur Lupia is the Hal R. Varian Professor of Political Science at the University of Michigan and research professor at its Institute for Social Research. He studies how information and institutions affect policy and politics, with a focus on how people make decisions when they lack information. He was a founder of TESS (Time-Sharing Experiments in the Social Sciences) and served as a PI of the American National Election Studies. He is Chair of the Social, Economic, and Political Sciences section of the American Association for the Advancement of Science, President of the Midwest Political Science Association, Chair of the APSA Presidential Task Force on Public Engagement, PI of the Empirical Implications of Theoretical Models project, and a member of the National Academy of Science’s Behavioral and Social Science Advisory Board. He is an elected member of the American Academy of Arts and Sciences, a Guggenheim Fellow, and a recipient of the National Academy of Science’s Award for Initiatives in Research and AAPOR’s Innovator’s Award.


Matthew Berent received a Ph.D. in social psychology in 1995 from The Ohio State University. He has held a variety of academic and private sector jobs, and his diverse work history has given him a broad range of basic and applied research skills, including designing optimal survey questions, coding open-ended survey data, matching survey respondents to public records, product development and management, sales and marketing, and training survey practitioners. Since 2010, he has worked with Jon Krosnick and Arthur Lupia to evaluate the quality of data from the 2008 ANES Time Series Study and the 2008-2009 ANES Panel Study.