Report on 2008 Pre-Election Primary Polls
An Evaluation of the Methodology of the 2008 Pre-Election Primary Polls
Nancy Mathiowetz, Past President
The Ad Hoc Committee on the 2008 Presidential Primary Polling was appointed in February 2008 in response to the miscalling of the New Hampshire Democratic presidential primary. On March 30th, the committee issued its report, the text of which can be found on AAPOR's website.
The committee members volunteered their time in this endeavor, and AAPOR thanks them for their efforts:
Glenn Bolger, Public Opinion Strategies
Darren W. Davis, University of Notre Dame
Charles Franklin, University of Wisconsin-Madison
Robert M. Groves, Institute for Social Research, University of Michigan
Paul J. Lavrakas, Methodological Research Consultant
Mark S. Mellman, the Mellman Group
Philip Meyer, University of North Carolina
Kristen Olson, University of Nebraska-Lincoln
J. Ann Selzer, Selzer & Company
Michael W. Traugott (Chair), Institute for Social Research, University of Michigan
Christopher Wlezien, Temple University
The work of the committee was supported in part by a grant from the University of Michigan Institute for Social Research. AAPOR also wishes to acknowledge the work of Courtney Kennedy (who conducted the analysis of the polling data presented in the report as well as participated in the design, layout and drafting of the report), Colleen McClain and Brian Krenz, who conducted a content analysis that is included in the report, and N.E. Barr, who edited the report.
AAPOR would also like to recognize and commend the polling organizations who promptly supplied micro-level data so that the committee could pursue its work: CBS News, Field Research Corporation, Gallup, Opinion Dynamics, Public Policy Institute, SurveyUSA, and the University of New Hampshire. These organizations provided information well beyond what is required by the AAPOR code.
The findings from the report are extensive and members are urged to read the full report. We highlight a few of the major findings here. The results show that several methodological factors combined to undermine the accuracy of predictions in New Hampshire, South Carolina, Wisconsin and California:
- The compressed caucus and primary calendar. Polls conducted before the New Hampshire primary may have ended too early (Sunday evening) to capture late shifts in the electorate's preferences there.
- Limited number of callbacks. Most commercial polling firms conducted interviews on the first or second call, but respondents who required more effort to contact were more likely to support Senator Clinton. Instead of continuing to call their initial samples to reach these hard-to-contact people, pollsters typically added new households to the sample, skewing the results toward the opinions of those who were easy to reach on the phone, and who more typically supported Senator Obama.
- Non-response patterns, identified by comparing characteristics of the pre-election samples with the exit poll samples, suggest that some groups who supported Senator Clinton, such as union members and those with less education, were under-represented in pre-election polls, possibly because they were more difficult to reach.
- Variations in likely voter models could explain some of the estimation problems in individual polls. Application of the Gallup likely voter model, for example, produced a larger error than was present in the unadjusted data. The influx of first-time voters may have had adverse effects on likely voter models.
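The callback dynamic described in the second bullet can be illustrated with a small simulation. All numbers below are hypothetical choices of mine, not figures from the committee's report; the sketch simply shows how cutting off call attempts early over-represents easy-to-reach respondents:

```python
import random

random.seed(0)

# Hypothetical electorate: 30% of voters are hard to reach by phone,
# and (mirroring the report's finding) the hard-to-reach group leans
# more heavily toward one candidate. Support rates are invented.
N = 100_000
population = []
for _ in range(N):
    hard = random.random() < 0.30            # hard to contact by phone
    p_support = 0.70 if hard else 0.40       # hypothetical support for candidate A
    population.append((hard, random.random() < p_support))

# True support for A in this toy electorate: 0.3*0.70 + 0.7*0.40 = 0.49

def poll(max_calls, n=10_000):
    """Sample n households; attempt each up to max_calls times.
    Easy-to-reach people answer each call w.p. 0.60; hard-to-reach w.p. 0.15."""
    supporters = contacted = 0
    for hard, supports in random.sample(population, n):
        p_answer = 0.15 if hard else 0.60
        reached = any(random.random() < p_answer for _ in range(max_calls))
        if reached:
            contacted += 1
            supporters += supports
    return supporters / contacted

print(f"1 call attempt : {poll(1):.3f}")   # biased low: easy-to-reach dominate
print(f"5 call attempts: {poll(5):.3f}")   # closer to the true 0.49
```

With one attempt the contacted sample is dominated by easy-to-reach voters and the estimate sits well below the true value; additional callbacks pull it back toward the truth, which is why substituting fresh sample for callbacks can skew results.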
The committee also concluded that several factors were unlikely to have contributed to estimation errors in the New Hampshire pre-primary polls. Among these factors was the so-called "Bradley effect." Other factors that the committee discounted:
- The exclusion of cell-phone-only individuals from samples did not seem to have an effect.
- The use of a two-part candidate preference or "trial heat" question, intended to reduce the number of "undecided" responses, does not appear to have affected distributions of candidate preference.
- There is little evidence that Independents made a late decision to vote in the New Hampshire Republican primary.
The collection of information from the multiple polling organizations allows for comparisons of key methodological details concerning presidential primary polling. For example, the report documents the wide variation in the "trial heat" question used in New Hampshire, as well as differences in sampling frames and the selection of the respondent.
A special panel session concerning the findings of the committee will be part of the Association's Annual Conference in May. In addition, the report's findings will be the subject of an AAPOR-organized special invited session at the Joint Statistical Meetings (JSM). "Factors Affecting the Accuracy of the 2008 Presidential Election Polling" will be held on August 3, 2009, at 10:30 a.m. at the JSM in Washington, D.C.
Apart from the substantive findings of the committee, the process of obtaining the information, as well as the details available from the various polling organizations, suggests that AAPOR's Code of Professional Ethics and Standards, in particular the section related to disclosure, may need revision. As part of its report, the committee included a transmittal memo that urged AAPOR to examine its current disclosure standards in light of current methods of data collection. Specifically, the memo outlines these issues:
- The world of survey research now uses more complicated and diverse sampling frames and selection techniques. The sampling frames used for studies of the same population vary widely, the line between volunteering to participate and being selected at random has blurred, and the field now uses technologies in which the selection of respondents is not straightforward (e.g., IVR measurement).
- The world of survey research uses more complicated and diverse statistical adjustments for errors of non-observation. Propensity models are increasingly used as an adjustment tool; some firms claim their models are trade secrets, not to be disclosed. The adjustment for non-response is combined with adjustments for coverage and likely voting in ways that cannot be disentangled.
- The current world of survey research uses sampling techniques that do not lend themselves readily to proper sampling variance estimation. Techniques that cannot assign known selection probabilities to sampling frame elements make variance estimation very difficult, and extrapolating from variance estimates for simple random samples is inappropriate; more detailed guidance on disclosing such techniques and their implications would serve the membership well.
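The second bullet's point about entangled adjustments can be made concrete with a toy sketch. Every name and number here is invented for illustration; the committee's memo describes the practice only in general terms. A non-response weight is the inverse of an estimated response propensity, and once it is multiplied together with coverage and likely-voter weights, only the combined product reaches the data user:

```python
# Toy illustration: three weighting steps collapse into one final weight,
# so a consumer of the released data cannot recover the separate
# non-response, coverage, and likely-voter adjustments.
respondents = [
    # (candidate, estimated response propensity, coverage weight, likely-voter weight)
    ("A", 0.60, 1.10, 0.9),
    ("B", 0.20, 1.05, 0.7),
    ("A", 0.50, 0.95, 0.8),
    ("B", 0.25, 1.20, 0.6),
]

totals = {}
for cand, propensity, coverage_wt, lv_wt in respondents:
    w = (1.0 / propensity) * coverage_wt * lv_wt   # only this product is released
    totals[cand] = totals.get(cand, 0.0) + w

share_A = totals["A"] / sum(totals.values())
print(f"Weighted share for candidate A: {share_A:.3f}")
```

Given only the final weight `w` for each respondent, there is no way to back out how much of it reflects non-response versus coverage versus likely voting, which is exactly the disclosure difficulty the memo raises.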
At its March meeting, the AAPOR Executive Council moved to appoint a committee to review the current disclosure standards. The committee will examine these and other issues to ensure AAPOR's disclosure standards are in tune with current practice.