
2006 Presidential Address from the AAPOR 61st Annual Conference

The Future Is Here! Where Are We Now? And How Do We Get There? by Cliff Zukin

I attended my first American Association for Public Opinion Research (AAPOR) conference in 1978, when we were but a fraction of our current size. Over the years I have developed a deep identification with, and great affection for AAPOR. I have always appreciated our culture of openness, sharing of information, and sense of community. I want to thank you for allowing me the opportunity to serve as president this past year. While I truly believe that service is its own reward, I have no doubt benefited greatly and grown from my experience in office. Coming from an academic background, I now have a more fully rounded perspective of opinion research as a profession and an industry.

The president of AAPOR is afforded many opportunities: to travel, to educate, to speak for the association, and to learn. One of those I enjoyed this past year was to participate in a conference on “The Crisis of Polling,” organized by Larry Jacobs of the University of Minnesota.1 Rob Daves, of the Minneapolis Star-Tribune and my successor as president, arranged my participation, and I remember writing him a private e-mail meant to be humorous. It went something like, “Rob, I’m president of AAPOR and no one told me there was a crisis. Is there something I should know about?”

Are we in a crisis? No.

But do we have serious issues to confront? Yes, and some fairly immediately. And so the title of my address is “The Future Is Here! Where Are We Now? And How Do We Get There?”

This presentation is about what has been happening to our profession lately, the challenges that will confront us over the next few years, and how I think we, as an organization, should respond. In doing this I know I might open up what the mayor of a small town in New Jersey once described to me as “a box of pandoras.” So, let me begin with a reaffirmation that AAPOR is an important and vital organization, both for who we are and what we do. 

We Are the American Association for Public Opinion Research

We are an association of 1,900 scholars, practitioners, and teachers.

We represent the intersection of business, academics, and government: 40 percent of us are academics; one-third are drawn from the commercial sector; the balance come from government and nonprofit agencies.

We are the leading professional association of public opinion and survey research in the country and, I believe, the world.

We set and push the boundaries in conducting scientific research on the topics of public opinion and survey research. Among other activities, we publish the flagship journal Public Opinion Quarterly.

We collect and diffuse knowledge, such as through this conference, which has about 350 presenters and over 800 attendees.

We engage in professional training and education; about 350 people have taken short courses at our annual conference in just the last two years.

We set and uphold professional standards of ethics and best practices of research.

We are the public face of polling and survey research in America. We represent it to the outside world, monitor government regulation that may affect our practices and profession, and defend our members when under attack.

We, of course, share many of these traits with other professional associations, and in this we are not unique. What makes us unique, to my mind, is our mission and contribution. We collect and measure public opinion and feed it into the policy process so that the views and values of the citizenry have a place at the table when decisions are made. Nothing could be more fundamental to the lifeblood of a democracy. As an industry, I think we are vital to the health of our country. And this year as president has deepened my appreciation of both our contribution and the issues facing our profession.

So, first, what challenges face the AAPOR and our industry over the next few years? And second, what do we need to do, as an industry and a profession, to respond?

There are many issues we face down the road, but I see four as primary, and interrelated:

The growing problem of representative sampling;

The challenge to orthodoxy;

The commodification of polling; and

The attack on science.

Problems Facing the AAPOR

Our methodology is built on the notion—and science—of sampling. That is, we select and interview a small group of people to represent an underlying population. Unless we can do this well, we are diminished in fundamental ways and may be prevented from fulfilling our role as a public surrogate—from understanding the underlying distribution, contours, and shifts in public opinion that allow us to contribute to democratic government, determine citizen needs, and evaluate the successes and failures of government programs, among other tasks.

We are now facing an era where our operating model, or paradigm, is breaking down. Please note that I say “breaking down,” not “broken”—this is something in process. While this is complex to explain in statistical terms, it is very easy to present graphically. Figure 1 simply shows a telephone (with wires) perched on top of a normal distribution. As a profession and industry we have relied on probability random digit dial (RDD) sampling and telephone interviewing as an article of faith and our standard operating procedure for a quarter of a century now (see Tucker and Lepkowski 2006).

Figure 1. Our operating paradigm.

But we need to squarely face the fact that just in the last five years it has gotten much harder to do well the essential practices that have defined the very core of our identity. There are two related keys to why this is more problematic: First, we have an increasing problem with our response rate—people taking our calls and being willing to speak with us. Second, we have an increasing problem in our coverage of the population—that is, who is in a normal RDD sampling frame. The growth of cell phones is a particular challenge to us in 2006.

There has been a tremendous amount of research on these problems recently, and I want to formally acknowledge the leadership efforts of Paul Lavrakas, who organized the recent Cell Phone Summit, and of Clyde Tucker and Jim Lepkowski, who organized the second international conference on Telephone Survey Methodology, held in Miami in January 2006. Although an exhaustive summary of this literature is beyond the scope of this piece, I have been informed by the work of Battaglia et al. (2006), Curtin, Presser, and Singer (2005), Holbrook, Krosnick, and Pfent (2006), Pew Research Center (2006), Piekarski (2005), and Tuckel and O’Neill (2006), among others. But let me quickly distill the essence of these important research findings.

There is no doubt that it has recently become increasingly difficult to find and interview a representative sample. All major analyses show significant—and recently accelerating—declines in response rate over the last 10 years. Holbrook, Krosnick, and Pfent (2006) found this trend by looking at 14 top media, government, and contract organizations. Keeter and colleagues (2004) found serious response deterioration in even “gold standard” RDD telephone samples in the last half dozen years. And in examining the Michigan Survey of Consumer Attitudes, Curtin, Presser, and Singer (2005) have documented a response rate drop averaging 1.5 percentage points a year between 1996 and 2003 (figure 2). Moreover, it is worth noting that this decline has been accelerating in pace and that there has been a greater drop in the contact rate than in the refusal rate.

Figure 2. Trend in non-response.

The cell phone numbers are equally alarming. True, our best and most recent research estimates that just 8 to 10 percent of the U.S. population in early 2006 is “cell phone only” (Blumberg and Luke 2006; Blumberg, Luke, and Cynamon 2006; Pew Research Center 2006). But this is just the tip of the iceberg, and I contend that this seriously understates the magnitude of the problem. What about “cell phone mainly”? Or “cell phone a lot”? Consider the context. By early 2006:

A majority of the American public has both cell phones and land lines (Tuckel and O’Neill 2006; Pew Research Center 2006).

About two-thirds of the population now has cell phones. Further, among those who have them, two-thirds say their cell phone is on either always or most of the time; almost 40 percent say they make almost all of their calls on their cell phones; and a majority screen their calls either all or most of the time (Tuckel and O’Neill 2006).2

Moreover, cell phone ownership and use are not evenly diffused through the population, creating nonrandom error in this coverage problem. The clearest bias is by age, which is also strongly correlated with marital status and type of dwelling (Pew Research Center 2006). My sense from reading a variety of data sources is that probably one-fifth to one-quarter of those younger than 30 years of age are cell phone only. Perhaps half are cell phone mainly. This is a serious coverage problem for us.

In this vein I want to note a recent study conducted by the Pew Research Center for the People and the Press (PRC).3 This graphic, figure 3, presented at the Telephone Survey Methodology conference by Scott Keeter and formally released shortly thereafter (Pew Research Center 2006), displays both the weighted and unweighted percentages of young people between 18 and 34 from the PRC’s RDD samples from 1990 through 2005. The moving average lines for these points are in close proximity to each other up through the year 2002, at which point they begin to sharply diverge. By the end of 2005, there was a 10 percentage point gap between the weighted (who should have been interviewed) and the unweighted (who actually took part in the survey). Simply put, younger respondents are disappearing from our conventional RDD samples.

Figure 3. Surveys reaching fewer people ages 18–34.
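The mechanics behind a gap like the one in figure 3 are worth making concrete. Below is a minimal sketch of post-stratification weighting with a single age cell; all shares are hypothetical, chosen only to mirror the kind of divergence described above, not the PRC’s actual numbers.

```python
# One-cell post-stratification sketch. All shares are hypothetical.
pop_young, pop_old = 0.30, 0.70    # population shares, ages 18-34 vs. 35+
smp_young, smp_old = 0.20, 0.80    # shares among completed interviews

# Each respondent's weight is the cell's population share / sample share.
w_young = pop_young / smp_young    # 1.5: young respondents are up-weighted
w_old = pop_old / smp_old          # 0.875: older respondents are down-weighted

# Suppose 60% of young and 40% of older respondents hold some view.
unweighted = smp_young * 0.60 + smp_old * 0.40
weighted = smp_young * w_young * 0.60 + smp_old * w_old * 0.40
print(unweighted, weighted)   # weighting shifts the estimate toward the young
```

Weighting can correct a known demographic shortfall, but only on the assumption that the young people who do respond resemble those who do not—precisely the assumption that becomes doubtful as the gap widens.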

The increasingly difficult problem of achieving a representative sample has at least three significant consequences. It allows us to be attacked by outsiders; it opens the door to pseudoscience and salesmen with unproven claims of new and better; and it makes it harder to fulfill our mission.

The Challenge to Orthodoxy

The problem of representative sampling has opened a window for some to claim that all methods are equal, or, “If there are problems with yours, don’t say anything bad about mine.” This is a counterfeit currency we need to resist. In fact, AAPOR standards chair Nancy Mathiowetz and I spent a lot of time this past year reminding journalists and survey organizations that one cannot compute a margin of sampling error for nonprobability surveys, such as opt-in Internet polls. We have had cases of organizations that are flat-out inaccurate in their disclosure on this matter4 or that, while not technically inaccurate, appear to be at best confusing, or perhaps even disingenuous, in their discussion of sampling error on nonprobability surveys.5
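Part of why the label gets misapplied is that the arithmetic itself is simple. A sketch of the standard 95 percent margin-of-error calculation for a simple random sample follows; the function name and defaults are mine, for illustration only.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error for a proportion estimated from a
    simple random sample of n completed interviews. The formula is only
    meaningful for probability samples; for a self-selected opt-in panel
    there is no defensible n to plug in, so no such figure exists."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))   # ~3.1 points for n = 1,000
```

The temptation is obvious: any opt-in panel has a count of completes, and plugging that count into the formula yields an official-looking number. But the formula presumes random selection, which an opt-in panel does not have.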

I found one case particularly troubling, as it involved a professional organization, like AAPOR, that may not know better, but should. Many of you are familiar with the American Medical Association’s (AMA) sponsored survey on “spring break.” Based on an AMA press release, the New York Times of March 19, 2006, reported in the Sunday “Week in Review” section (Marsh 2006) that:

83 percent have friends who drank most nights on spring break;

74 percent of women use drinking as an excuse for outrageous behavior; and

57 percent say being promiscuous is a way to fit in.

The picture presented by this survey is that spring break is little more than an orgy of drunken sex for college women. Having been a university professor since 1977, I thought this to be an inaccurate and unfair characterization of people I know well—my students. Looking into the methodology of this survey, I discovered that the data that fueled these assertions were collected via an Internet-based, self-selected panel of respondents, about three-quarters of whom had never even been on “spring break.” And, yes, the findings were accompanied by a statement of sampling error claiming accuracy and representativeness using the industry standard margin of error terms.

The AAPOR and the AMA had an interesting set of conversations following the Times’s publication of the data, snippets of which I have reproduced (figure 4). In the end, the AAPOR’s efforts were not in vain—both the Times and the Associated Press published formal corrections to the record. But the larger points are these: (1) There are many naive consumers and commissioners of survey data out there in our universe; (2) There are organizations that believe that surveys used for advocacy purposes do not need to be held to basic research standards, even if they are put in the public domain; and (3) The allure of cheap, Web-based surveys is, and will be, difficult to resist.

Figure 4. The American Medical Association spring break survey.

I relate this tale not just because it is interesting and demonstrates a good AAPOR intervention, but because such situations will come with increasing frequency in the years ahead, and we will need to deal with them. In addition to Internet surveys/opt-in nonprobability polls—and the growth of Internet polls is inevitable and inexorable—we have also witnessed growth in technologies such as Interactive Voice Response, Robo-polls and automatic dialers with no in-household respondent selection procedures, and Voice over Internet Protocol, to name a few.

My observations here are not meant to be reactionary or hostile to any of these developments. Technologies themselves—cell phones, the Internet—are generally neutral. The questions are: Which methods are appropriate to answer which questions? And, are the knowledge claims being made reasonable or unreasonable for the method used?

Nor are my remarks meant to be defensive in terms of our current model of research. In fact, challenges to the dominant paradigm are healthy and to be welcomed; this is how organizations grow and change (Kuhn 1962). All organizations of long-standing value confront such challenges at some time in their lives; enduring ones face them many times. We need to innovate and adapt so as to continue to be relevant.

Moreover, AAPOR has an excellent history of accommodating changes. We have moved from personal interviews to telephone, from pencil and paper to CATI, always bounding ahead rather than dragging our feet behind. Just this year AAPORnet hosted an extremely useful and spirited discussion about the value and proper role of nonprobability versus probability surveys, and I’m proud of the way we have welcomed this debate.

So, the sky is not falling, and we do not need a knee-jerk reaction. But, we do need to recognize that our basic operating procedures are being challenged at this moment in time, and we will need to wrestle with how we separate wheat from chaff and tell good research from bad in the new age.

The Commodification of Polling

A third challenge facing us is to respond to an external sponsoring environment that increasingly regards polling as a commodity. In December 2005 the director of Rutgers’s Eagleton Institute of Politics received a letter from the editor of the Newark Star-Ledger, summarily terminating their polling partnership. The Star-Ledger/Eagleton Poll was a project I had started in 1983, and it had been ongoing for over two decades. It was one of the granddaddies of state polls and a fixture in the political landscape of New Jersey. This hit home to me very personally, and perhaps it had greater resonance as it happened during my AAPOR presidential year.

While most readers will be unfamiliar with it, the Star-Ledger is a very good newspaper. It won a Pulitzer Prize in 2005; it is the flagship paper of the Newhouse chain; its Sunday circulation is over 600,000. I know the editor well and have great respect for his intelligence and acumen. In a letter terminating a 22-year relationship he made two simple points: “It costs too much. As you know, these are tough times in our industry.” And, “The arrival of similar operations into the market has made political polling nearly a commodity.”

I gave up the directorship of that project a few years ago and have since acquired some detachment. But I do believe it to be different from the other New Jersey polling organizations in some fundamental ways. We did not come and go with elections; our surveys were not targeted to only registered or likely voters; we explored public opinion about a wide variety of public policy issues in depth—not just who was ahead in an election or a single question about the governor’s job performance rating. I do understand the motivation for other polling organizations and universities to get into the game. It gets their names in the media and attracts visibility for their institutions. However, while this is individually rational for each of them, it is becoming collectively problematic for all of us.

I think we are facing a quantity/quality trade-off and may well be at a tipping point. There appears to be a larger number of polling organizations doing shallower work (Rosenstiel 2005). Thus, we in the opinion research industry today face an ironic situation. As we become institutionalized into the fabric of American life, polling faces the danger of becoming so ubiquitous that its individual products become indistinguishable.

What’s wrong with this?

Those of us who do public opinion research need to recognize that there is a tyranny to numbers. And there is nothing as dangerous to an opinion as a number. Numbers are presented as facts in today’s society, regardless of what may be behind them. To the media, and thus to ultimate consumers (the public), all numbers are equal. Even if we should, and do, know better, how do we expect consumers and media buyers to tell good work from bad when a number is just a number? In a retrenching media industry (Project for Excellence in Journalism 2006), with survey buyers focusing mainly on the bottom line because “polling is a commodity,” how do we resist the pressures to cut corners and not do our best work?

The Attack on Science

A fourth challenge facing us is to play a role in responding to the attack on science. AAPOR is a nonideological professional association, and I do not want my observations to be construed as partisan. In the public policy program where I am a professor, we teach Democrats, Republicans, and independents; liberals, moderates, and conservatives alike. It doesn’t matter. But one of the things we teach is to separate questions of fact from questions of value. As former senator Daniel Patrick Moynihan once observed, “Everyone is entitled to their own opinion, but everyone is not entitled to their own facts.”

This decade has not been kind to us: not to academics who face a winnowing of acceptable topics of study for grants, not to commercial firms that see their public opinion and election survey work subjected to ad hominem attacks from both the left and right (Daves and Newport 2005), not to government researchers who face restrictions on speaking about what has become fashionably unpopular scientific research on such important topics as global warming (Revkin 2006).

I experienced the coldness of the current climate in Washington firsthand this year as one of AAPOR’s two representatives to the board of directors of COSSA—the Consortium of Social Science Associations. COSSA is an advocacy organization supported by more than 100 professional associations (such as AAPOR), scientific societies, universities, and research organizations. COSSA’s annual meeting program featured John Marburger III, presidential science advisor and director of the Office of Science and Technology Policy, David Lightfoot of the Social, Behavioral, and Economic Sciences directorate at the National Science Foundation, and David Abrams of the Office of Behavioral and Social Sciences Research at the National Institutes of Health.6

I found parts of Marburger’s presentation saddening, and the others merely sobering. Marburger’s primary message to the scientific community was that the criteria for federal support are changing, and we need to be responsive to three letters long thought to characterize private sector research: ROI, or return on investment. The president’s principal science advisor was not talking about contract research or applied research; he was speaking about basic scientific research, which has historically been undirected and unaffected by political debate and the climate in Washington. As a professional association, we need to affirm that the bottom line is knowledge, not ROI, and unfortunately, we need to be aggressive in making that case. What we once took for granted—that scientific research would remain off-limits from politics—is not true today. We have a stake in this game and cannot afford to sit out.

How Does AAPOR Respond?

So, we face some interesting challenges in the years ahead. How do we respond as an organization and a profession? I suggest four avenues of response:

A recommitment to principles of good science7;

An increased organizational and communications capacity;

The development of strategic partnerships; and

The adoption of a research function within the AAPOR.

A Recommitment to Principles of Good Science

We start by reaffirming who we are and what we do. We are public opinion and survey researchers, as our name says. So we start by embracing this role and what we call “the scientific method.” There is an ideology to science: it is the nonideological establisher of empirical and causal relationships between concepts. We employ the tool of experimentation for this purpose. We make comparisons between groups who have received different treatments and try to control all factors other than our experimental variable so as to make valid inferences about the way the world works. We should neither accept nor reject knowledge claims without evidence. In the pursuit of this ideology, here are three guiding principles.

First, we must recognize the need for and preservation of our gold standards against which new methodologies can be compared and effects computed. Notice I say “standards,” not “standard.” This is plural. We need not one but a number of benchmarks to inform our calibration. Operationally, this means we must have a strong defense of the government statistical agencies and research—the census, the National Science Foundation, and large-scale personal interviews with full coverage of the population.

Second, we need to experiment while recognizing the presumption of the status quo. Though my year as president has been fulfilling, I have also come out of it with teeth marks on my ankles from various parties advocating new methods who demanded that I prove to them why what they are doing doesn’t work. No! While we need to embrace change and innovation, the burden of proof must rest squarely on the innovators. The onus must be on those who propose new techniques and methodologies to demonstrate that they work. It is insufficient to simply assert that “it works”—tell us why it does. As a concrete example: What is the theory that justifies the claim that opt-in Internet polling can and should be a representative sampling of the underlying population?

Third, we must recognize the need for full disclosure and transparency, guided by the norm of civility in our ongoing research discussion. Science needs a transparency of method to work well, to allow for replication and generalizability. AAPOR has standards of disclosure and a code of professional ethics. In this we fight the market, and many firms’ genuine needs to hold client information confidential. But we need to resist the view that “the market” is the final or sole arbiter. In many areas of society, the market is not the only operative force. This is, after all, the logic behind government regulation in so many areas. For example, we do not let drugs come to market without being approved. But of course government regulation is only one form of regulation, and perhaps the last resort in some cases. Professional organizations also regulate and police from the inside. We need to insist that experimentation be done according to our standards and best practices. And we need to make sure that we respond professionally, not personally—respecting the norm of civility. New ideas should not be rejected out of hand as threatening; they need to be tested empirically and discussed dispassionately.

An Increased Organizational and Communications Capacity

Next, we need to increase our organizational capacity to respond and communicate, both internally and externally. We need to communicate our positions and values to the outside world, and we need to diffuse ideas more quickly within our profession. I believe we have taken important first steps to accomplish this goal at our annual meeting this year in Montreal. I am delighted to report that, after a year of planning, AAPOR’s Executive Council approved at its meeting on May 17–18, 2006, two initiatives that I think will strengthen the AAPOR in the future.

For the first time in our history, AAPOR is going to have a full-time professional staff person. With almost 2,000 members and a budget of almost a million dollars, this is long overdue and is a major step forward for us. Come July, we hope to have on board a communications director. This person will direct the outreach activities of the association. This is not a decision-making position; the president and council will still set policy for the association. However, this increased staff capacity will make us more able to implement policy and to respond quickly to anything we agree is bad science or an attack on our industry.

Second, we have authorized the development of a new publication, a quarterly e-zine to quickly disseminate research and news to survey and opinion research professionals. This proposal was developed by Bob Groves and Sandy Berry; John Kennedy will be the first editor. We have POQ for the dissemination of material at the highest standards of quality; we have AAPORnet for the passions of the day. As an organization the AAPOR will benefit from something to fill the gap in the middle, empowering practitioners to share knowledge.8

These changes should enable us to better assert leadership in and for the industry, increasing our voice and relevance and expanding our public presence. It is important for us to take part in debates commenting on good and bad research in those cases where we have consensual agreement and to be able to do so within the 24-hour news cycle. In addition, it is important for us to communicate methodological experimentation and innovation among ourselves. I invite readers of this article to examine the report of AAPOR’s Long-Range Planning Committee, posted on the AAPOR’s Web site (Zukin et al. 2006).

The Development of Strategic Partnerships

Third, we need stronger alliances with sister organizations and to grow a bit, and we are working on this. In the long-range planning report you will see for the first time a plan for AAPOR membership growth. Modest growth is good: it ensures an infusion of new ideas as new members come in; it gives us a bit more clout; it reinforces our academic and commercial bases. It also allows us to socialize good methods to a larger and wider set of actors.

In an era when science is under attack, government funding of some of our gold standards is threatened, and proposed regulation needs to be monitored, we need to coordinate with partners who have shared values and goals. On May 18, 2006, the AAPOR hosted the first Market/Opinion Research Leaders summit, consisting of the elected leaders and staffs from AAPOR, the Council for Market and Opinion Research (CMOR), the Council of American Survey Research Organizations (CASRO), the Marketing Research Association (MRA), and the European Society for Opinion and Marketing Research (ESOMAR). We have agreed on a set of protocols that will seek to identify common ground over the next year. We have also agreed to reciprocal relationships in conference attendance, exhibits at each others’ conferences, posting each others’ communications, and once-a-year mailings to respective memberships, among other initiatives. We have just started down this path, but we have common interests in, for example, protection from government regulation, promoting respondent cooperation, setting professional standards, and recruiting talented people into the profession. I agree with AAPOR’s current president Rob Daves, who is fond of quoting Benjamin Franklin’s remarks at the signing of the Declaration of Independence: “We must all hang together, or assuredly we shall all hang separately.”

The Adoption of a Research Function within AAPOR

Fourth and finally, I believe we ought to establish a formal research function within AAPOR. As a professional organization, we now fulfill at least four primary functions for our members: (1) the dissemination of knowledge (through the conference and Public Opinion Quarterly, among other vehicles); (2) professional education (through the short courses at conference and other activities); (3) setting and upholding standards and professional ethics; and (4) external surveillance and communication. Why do we not actively identify problems facing our industry and commission or guide the necessary research in these directions?9

Let me give two quick examples of how this could be useful.

As I have noted, polls abound, especially in election season. While they are generally “right” in aggregate (see Traugott 2005), there is also a great deal of variation in the point estimates made by different polling organizations using different methodologies (figure 5). Consider the table that shows the results of eight different polling organizations in their “last calls” in Iowa, a battleground state in 2004, as gathered by RealClearPolitics.com (2004). While Al Gore carried Iowa by 0.3 percentage points in 2000, George Bush beat John Kerry there in 2004 by about seven-tenths of a percentage point. The aggregate of the eight poll estimates, 47.4 to 47.1, shows Bush ahead by 0.3 points—extremely close to the actual vote. But the variation around the overall mean is tremendous: SurveyUSA had Kerry by 3 points, Zogby had Kerry by 5 points, Fox had Bush by 4 points, and so on. Of course, different polling organizations used slightly different methods. To me this raises the question of what worked “best” under what conditions. Why not commission a meta-analysis of election polling methodology rather than waiting for an external researcher to happen to take it up?
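The aggregate-versus-individual contrast can be stated numerically. The sketch below uses illustrative “last call” margins (positive = Bush ahead, in points); these are stand-in values, not the actual RealClearPolitics table.

```python
import statistics

# Eight illustrative final-poll margins (Bush minus Kerry, in points).
# Stand-in values, not the actual Iowa table discussed in the text.
margins = [4, 3, 2, 1, 0, -1, -3, -5]

mean = statistics.mean(margins)      # the average lands near the true near-tie
spread = statistics.pstdev(margins)  # but individual estimates span nine points
print(mean, round(spread, 2))
```

This is exactly the pattern a meta-analysis would interrogate: the mean can be excellent even while any single poll, taken alone, would have misled its readers by several points.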

A second example comes with the growth of cell phones, as I have already documented. There will come a time in the not too distant future when survey research may be more accurate in sampling and calling cell phones than land lines. Obviously, the pricing and regulation of calls will have to change, but I suspect they will. If you want to see our future, look to where other countries are now (Kuusela, Vehovar, and Callegaro 2006). Finland is 50 percent cell phone only; France and Italy are about twice what the United States is. Moreover, cell phones will increasingly give way to PCDs (personal communication devices) that will communicate voice, text, video, and Web. This technology is, of course, here today, but it will become increasingly prevalent in just the coming few years, and we will have to react. Folks will be sampled and surveyed however they wish to be, and this will involve mixed modes. Technology will change; regulation will change; we will change. I have no question about this vision of the future.

Figure 5. Variation in individual estimates.

Figure 6. Cell phone only.

Therefore, since that is where we are headed, why not embrace research on mixed modes of data collection, and perhaps even guide it to make sure results are ready when we need them. I confess I do not have a specific proposal for adding a research function. There are many models, including the possibility for external funding from the scientific community or from our industry. But I think it would be desirable to have a mechanism to decide on the central questions facing us over the next few years and either commission research to study the problem or, at least, systematically gather together what research has already been done and circulate this information to members, through white papers presented at the conference, through an e-zine, or through Public Opinion Quarterly.

On We Go

We face a number of significant challenges in the future, and this will be a difficult time for us. Issues and standards may well become more opaque before they clarify. This raises the question of whether the glass is half full or half empty. And the answer, of course, depends in turn on whether one is pouring or drinking. As for me, I’m pouring. On this, at least, I am half full.

I have a lot of confidence in us to solve problems. AAPOR is, after all, a group that does problem-solving for a living, and we thrive on solving puzzles. It is not that we want a world where there are no problems. Rather, we want a world where there are interesting problems to work on. As Albert Einstein once observed, “If we knew what it was we were doing, it would not be called research, would it?”

I wish us all good problems to solve.