2008 Presidential Address from the AAPOR 63rd Annual Conference

The Quagmire of Reporting Presidential Primary Election Polls by Nancy A. Mathiowetz




It has been both an honor and a privilege to serve as president of AAPOR for these past 12 months. This year has left me indebted to the efforts of many others: the elected members of the Executive Council and the volunteers who serve both the national association and the local chapters. Their labors and their willingness to volunteer, whether to help write position papers, to review press releases, or to provide sage counsel; in essence, their generosity and their hard work have served us all well during a very eventful year. And an eventful year it has been.

Under the Spotlight: Presidential Election Polls

The presidential election year, or in the case of the 2008 election, the presidential election years, with the corresponding volume of polling activity and the coverage of those polls, is a time when a bright light shines on our profession. And so, with a nod to Charles Dickens, let me offer a simple summary of the year that has been: it was the best of times; it was the worst of times. I'd like to take a few minutes today to examine our performance during this past year and ask: How have we done? How have we rated with our constituencies, the news media, policy makers, and the public at large? And I would tell you that there are two imperatives that strike me as essential if we are to enhance the usefulness and credibility of our work going forward: first, humility, and second, education.

I will speak further on education in a few minutes, but you may ask why I mention humility. I’m not speaking of humility the way Ted Turner thinks about it. He is quoted as saying, “If only I had a little humility, I’d be perfect.” In public-opinion research, our humility is already there and is inherent in the fact that we’re dealing with human nature and thus our science will always be inexact.

We—and here I mean the collective we—survey researchers, journalists, the media, and the public—need to remember that the respondents to our surveys and polls are not static objects. The dynamic 2008 presidential primary season has served to remind us of the fragility of our profession, especially when the focus of our work is predictive polling.

Not only does the dynamic nature of the humans we measure complicate our estimates, but we also know that our methods are, by their very nature, subject to error. We would be well served to remember the words of Albert Einstein when conducting our election polls: “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” With that in mind, let's look at the year that has been.

The 2008 Presidential Primaries: The New Hampshire Experience

On January 8 of this year, Senator Hillary Clinton won the New Hampshire Democratic primary, yet all the pre-election polls had forecast a win for Senator Obama. Post-primary newspapers were splashed with headlines like "New Hampshire's Polling Fiasco" and "Why the Polls Are So Wrong." And in a not untypical commentary the next day, Keith Olbermann on MSNBC used terms like "shoddy methodology" and "wildly wrong," and raised the question, "Is the poll disaster as monumental as it seems?"

I certainly do not need to remind this audience of the lessons learned from the 1948 pre-election presidential polls. The response to the erroneous election forecasts of 1948, specifically the report of the Committee on Analysis of Pre-election Polls and Forecasts (Wilks et al. 1948), called for increased research in the areas of sampling, interviewing, and the basic sciences, in particular social psychology. The words of that 1948 committee, and their necessary tone of humility, still hold true:

The polls also failed to provide the public with sufficient information about the methods of poll operation to permit assessment of the degree of confidence that could be placed in the predictions. The number of cases used, the type of sampling employed, the corrections introduced, and how returns from individuals who did not know for whom they would vote were tabulated, were not discussed adequately. …pollsters and social scientists have an important responsibility for educating readers of poll results to evaluate them and understand their limitations. (p. 611)

Certainly there did seem reason for humility among professional pollsters in the days after New Hampshire. But the need for humility reaches far beyond our own field. Let's begin by looking at a few examples of the coverage of the polls immediately before the January 8 primary.

First there was David Broder's piece in the Washington Post just two days before the primary. In his concluding line, based on the recent estimates from the polls, Broder wrote that “Any way you view it the race is now Obama's to lose” (Broder 2008).

Second, there's the coverage of the McClatchy–Mason–Dixon Poll. The McClatchy results were released on Sunday, January 6, two days before the primary, and were the main focus of Meet the Press that morning. The results of the survey indicated a close race: 33 percent for Senator Obama, 31 percent for Senator Clinton, 17 percent for Senator Edwards, and 7 percent for Governor Richardson. The discussion on Meet the Press focused on Senator Obama's lead in the poll. Indeed, one of Tim Russert's guests concluded (once again, based on the estimates from the poll) that Senator Obama was going to be the nominee of the Democratic Party.

The discussion completely ignored the fact that the estimates indicated a race that fell within the margin of error. In addition to margin-of-error issues, one fact never surfaced—the results from another question concerning the fluidity of the race. As can be seen in figure 1, 30 percent of the respondents indicated that they might still change their mind; among Senator Clinton's and Senator Obama's supporters, more than a quarter indicated some degree of uncertainty.

Figure 1
McClatchy Mason–Dixon MSNBC Poll (Press Release).
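The margin-of-error reasoning above can be sketched in a few lines. As an illustration only: the sample size below is hypothetical (the poll's actual n is not reported here), and the calculation uses the simple-random-sampling approximation that underlies most reported margins of error.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample size, for illustration only
n = 500
obama, clinton = 0.33, 0.31

moe = margin_of_error(0.5, n)  # conservative margin, evaluated at p = 0.5
print(f"margin of error: +/- {moe:.3f}")  # roughly +/- 0.044, about 4.4 points

# The 2-point Obama-Clinton gap sits well inside twice the margin:
print(abs(obama - clinton) < 2 * moe)  # True: a statistical tie
```

Under these assumptions, a two-point gap is far smaller than the uncertainty around the difference between the two candidates' shares, which is why the poll described a statistical tie rather than a meaningful Obama lead.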

Similarly, there was a CBS News Poll released the day before the primary (see figure 2). There are two major bullet points in the top half of the first page of the press release. The top bullet indicates that 26 percent of prospective voters had changed their mind between November and early January. The second bullet emphasizes the continuing fluidity of the decision process, with 28 percent of Democratic voters saying their minds could still change. Yet the coverage of the CBS News Poll rarely addressed that important detail. Typical of the coverage is the headline shown in figure 3 (CBS News 2008).

Figure 2
Press Release, CBS News Poll.

Figure 3
Headline Reporting Findings from CBS News Poll.

A reasonable post-mortem, then, is this: The polls did, in fact, indicate a highly fluid race in the days immediately prior to the voting. The warning flags were there; a victory by Senator Obama in New Hampshire was not a given. But few media outlets emphasized or even mentioned this profound uncertainty presented by these polls.

The point I hope to make here is not one of criticism of either journalists or pollsters. Instead, I want to point to the importance of communication between journalists and pollsters. There is a constant tension between those who produce poll results and the media. If polling organizations present their results accurately and cautiously, their press releases may appear dull, lacking in news value. If the press releases describing poll and survey findings are couched in dramatic terms, they may capture more attention, but may run the risk of undermining the very scientific tenet on which we base our work.

AAPOR's Role

What, then, is AAPOR's role in this relationship between the media and survey researchers and pollsters? In his presidential address in May 2000, Mike Traugott addressed this same issue and stated: “[T]he key to providing more and better information to citizens is to improve the analysis and interpretation of reporting about public opinion by increasing our interactions with journalists” (Traugott 2000, p. 381).

Mike added that AAPOR needs to establish and maintain improved relations with journalists so that we can “help at the point at which they are preparing stories, rather than critiquing their performance” (p. 381). So how have we been doing? And what more could we do?

AAPOR members have in fact been providing education for journalists in a variety of venues. Our membership includes journalists, and many of our members are involved in the training of journalists through short courses, seminars, and university-based courses, to name but a few. These efforts have not only provided help to journalists trying to sort out complicated survey data under deadline pressure, but have also served to identify AAPOR and its membership as a resource for those wrestling with understanding the methods we employ.

Another example is the way AAPOR has teamed with the Poynter Institute over the past year to develop a course for Poynter's online journalist training project, “NewsU.” This step toward fulfilling Mike Traugott's vision was accomplished through the extraordinary efforts and organizational talents of Mollyann Brodie, Chair of the Education Committee. Titled “Understanding and Interpreting Polls” (http://www.newsu.org/courses/course_detail.aspx?id=aapor_polling07), this is a four-part course, free to the public, that helps journalists better understand how polls are conducted, what questions to ask about a study's methodology, and what the margin of error really tells us about an estimate.

In addition to Mollyann, there are many others to whom we owe thanks for these efforts. Mollyann has had excellent support from her assistant, Allison Kopicki, as well as other members of the Journalist Education Committee: Rich Morin, Cliff Zukin, and Mike Traugott. The content of the NewsU course is the fruit of their labors. They in turn have had the benefit of timely and detailed reviews by several AAPOR members, most notably, Mick Couper, Rob Daves, Scott Keeter, and Stanley Presser. Their efforts, on behalf of AAPOR and the profession as a whole, have been Herculean. These endeavors are a great start, but there is much more we can do and should do.

We need to continue to develop ways to be more effective in addressing the questions posed to us by journalists and the public. Many of these questions focus on issues of methodology, nettlesome questions for which we rarely have a simple, elegant answer. For example:

How accurate or reliable are data from internet panels? We know that criticisms of estimates based on these self-selected samples abound. Counterarguments hold that probability samples with low response rates are no better.

Another thorny issue: Are data collected via Interactive Voice Response (IVR) systems as good as those collected by live interviewers? Some experiments have begun to address this question, but the literature is silent on comparisons of a survey conducted entirely via IVR with one conducted by live interviewers.

Such questions are best addressed through the scientific endeavors of our members. We can, and should, serve as a gateway between survey researchers and the general public with regard to methodological factors that impact the quality of survey estimates.

In at least one area, we have served this role for years. AAPOR's Standard Definitions are the reference manual and model for response rate and cooperation rate calculations used throughout the profession. Other resources that AAPOR provides, for example, our statement on push polls or our code of ethics with respect to minimum disclosure, illustrate the role AAPOR can serve, both for journalists and the public.
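As a concrete illustration of the kind of standardization the Standard Definitions provide, here is a minimal sketch of the simplest of those measures, Response Rate 1 (RR1): complete interviews divided by all interviews, non-interviews, and cases of unknown eligibility. The variable names follow the Standard Definitions' disposition categories; the counts in the example are invented.

```python
def aapor_rr1(complete, partial, refusal, non_contact, other,
              unknown_household, unknown_other):
    """AAPOR Response Rate 1 (RR1): complete interviews over the full
    denominator of interviews, non-interviews, and unknown-eligibility cases."""
    denominator = (complete + partial
                   + refusal + non_contact + other
                   + unknown_household + unknown_other)
    return complete / denominator

# Invented disposition counts, for illustration only
rate = aapor_rr1(complete=600, partial=50, refusal=200, non_contact=100,
                 other=25, unknown_household=15, unknown_other=10)
print(f"RR1 = {rate:.1%}")  # RR1 = 60.0%
```

Because every term in the denominator is a defined case disposition, two organizations coding their sample files to the same standard will compute the same rate, which is precisely the comparability the Standard Definitions were written to guarantee.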

Some Thoughts for Moving Forward

These volunteer efforts have brought us to where we are today with respect to the education of journalists and the public about our craft, our profession. These efforts are what made 2008 "the best of times" during my service as AAPOR's president. But we cannot, as an organization, continue to rely solely on the altruistic and generous nature of our members. If we are to continue to serve as a voice to, and a resource for, journalists, we have reached a juncture at which we must support these efforts financially.

We would be remiss to not take advantage of the inroads that have been made this year. So, let me close by outlining some proposals for your consideration.

First, AAPOR should formally adopt the Journalist Education subcommittee as a standing subcommittee under the auspices of the Education Committee. The sole focus of this subcommittee would be, broadly defined, journalist education.

Second, unlike many of our other AAPOR committees, I would propose that we provide financial and staff support for these activities. During precarious economic times such as these, it may seem brash to suggest increased spending, but I believe that, with a concerted effort, we can and should channel some of our revenue into such efforts. We simply cannot continue to be a voice and resource for the journalism community through voluntary efforts alone.

Third, we need to begin a more intensive dialog with journalism programs at universities to offer our assistance in training the next generation of journalists in survey methodology. I realize that journalists have a broad array of topics in which they must be well versed. I am not suggesting that opinion polls are more important than understanding meta-analysis or clinical trials. But most of the data that form the foundation of media stories, whether coverage of politics, the nation's leading economic indicators, or the health and well-being of individuals in our society, come from surveys. A sound understanding of the nuances of our methods is essential if the next generation of journalists is to cover these issues accurately.

Finally, I would encourage AAPOR to establish ongoing relationships with foundations to further our efforts in these areas. Foundations offer us the opportunity to draw upon a range of resources for the mutual benefit of our respective missions. The partnership with the Poynter Institute (funded by the Knight Foundation) stands as an excellent example.

This will not be an easy path to follow. It may require us to tighten our belts in some areas and to undertake fundraising to support our efforts. But I firmly believe that such efforts, coupled with a sprinkling of humility about our craft, will position us well in the years to come, allowing us to build on our strengths and to improve journalists' and the public's understanding of survey and polling results more effectively.