American Association for Public Opinion Research

2018 Presidential Address from the 73rd Annual Conference

Legitimacy, Wicked Problems, and Public Opinion Research
 
Timothy P. Johnson
May 2018

"Since its founding in 1947, AAPOR has, alongside quantitative social science and survey research, enjoyed considerable prestige … and influence.
 
Our industry has grown.  It has thrived.  It has shown itself to be highly adaptable.  It has developed and implemented new methodologies time after time — new methodologies that address ever-evolving social environments … and even faster-evolving technological environments.
 
During the latter half of the Twentieth Century, pollsters were at times scientific celebrities. They were courted by presidents … consulted by industry … respected by the public.
 
Survey research today is even more prevalent.  It is used to acquire the information necessary to manage more effectively almost every aspect of our complex economy … and to manage our equally complex government.
 
But survey research has not been without its critics, especially in the academic arena.
Sociologist Aaron Cicourel published an important critique of survey methodology back in the early 1960s. He focused powerfully on the absence of ecological validity in data derived from questionnaires and interviews.
 
Another sociologist — Alvin Gouldner — published his most famous work in 1970.  In it, he dismissed survey research as nothing more than “market research for the welfare state.”
 
Twenty years later, in 1990, Lucy Suchman and Brigitte Jordan published an influential paper in the Journal of the American Statistical Association. They tried to address a still-unresolved tension in our field — that between the survey interview as an interactional event, on the one hand … and, on the other hand, the survey interview as a neutral measurement instrument.
 
In all fairness, many researchers have thoughtfully considered these concerns and developed new methods and strategies to address them. But as we all know, survey research today faces other critiques … some of which present what many feel are seemingly impossible challenges. These have created a new environment for survey research, one in which intersecting sociological, technological, and political pressures are converging in a perfect storm to delegitimize and devalue survey research. This is what I would like us to consider today.
 
The Merriam-Webster dictionary defines de-legitimize as: “to diminish or destroy the legitimacy, prestige, or authority of” something. The Free Dictionary’s definition says that de-legitimization is “to discredit the public or political recognition or support of” something. Throughout history, in fact, there are many examples of once-trusted institutions or activities that became de-legitimized by factions in society.
 
Today a number of symptoms tell us that survey research is becoming one such target. A few such symptoms are listed in this diagram.
[Figure 1: symptoms of the de-legitimization of survey research]
You’ll note the bi-directional arrows here; I believe these processes are mutually reinforcing.
First are indications of declining public trust or confidence in social statistics. Some believe that statistical data are either impossible to construct accurately or are commonly manipulated to support pre-existing beliefs or policy agendas.
 
This may be in part due to lack of knowledge among the public about how statistical data are collected or aggregated. It does not help that media reports of statistics can sometimes be confusing or contradictory.
 
Do those clinical trials really suggest a link between coffee consumption and cancer, or is coffee a protective factor?
 
Are the monthly unemployment rates really being manipulated by whoever happens to be in the White House?  Of course, it depends on who you ask.
 
There was a Marketplace-Edison Research Poll conducted in late 2016.  It reported that one-quarter of all adults in their sample had no trust at all … in data reported by the federal government about the US economy. Not surprisingly, lack of trust also varied dramatically by major party support.
 
Then there was a recent Economist/YouGov Poll. It also asked about public perceptions of the accuracy of several established sets of government statistics. Their findings show, consistent with the Marketplace-Edison results, that only 20 to 25 percent of adults believe these federal statistical systems provide accurate information. Ouch.
 
Joel Best is the author of multiple books concerning the general quality of social statistics.  He suggests that statistical information can be intimidating and is often used as a weapon during debates.
 
Edward Tufte, the noted expert on data visualization, agrees. Tufte says “statistics are not commonly used as a means to open a problem for conversation and deliberation, but as a weapon to intimidate and close discussion.” Of course, none of us wants this.  But these beliefs are likely far more widespread than we would like to think.
 
It probably doesn’t help that one of the best examples of the weaponization of statistics is the common use of official government Census data to gerrymander the nation’s congressional districts. Chicago is no slouch in this regard.  Here’s my favorite: the 4th Congressional District. There’s UIC’s campus.
 
Also, recently announced plans to add a citizenship item to the 2020 Decennial Census seem to have a clear political motivation, one that will cause many to question the accuracy, quality, and objectivity of this fundamental statistical resource.
 
This leads us into a second symptom.  Research by public opinion researchers demonstrates — ironically — that Americans do not especially trust public-opinion researchers. In March of last year, the McClatchy-Marist Poll asked a national sample how much they trusted seven public institutions. Among those examined, public opinion polls ranked second to last. Thank God for Congress.
 
Data on this topic are somewhat hard to come by, but there is other evidence. In the eight years from 1998 to 2006, the Louis Harris Poll asked respondents if they would generally trust a variety of different types of professionals. Teachers were most trusted. During this time period, trust in pollsters dropped from 55% to 34% — a 21 percentage point drop.  This decline exceeded that of any of the other 16 professions tracked. Wonder what those numbers would look like today?
 
Exhibit A here, of course, is the 2016 Presidential election. If you’re like me, seldom does a day pass when you are not obliged to correct the declaration of a friend, an acquaintance, or a university administrator that “the surveys got it wrong in 2016.” This is going to be with us for a long time. It’s ironic that this is perhaps the one thing on which — to this day — supporters of both major parties seem to be in agreement. AAPOR did an excellent job in critiquing that widespread misconception, but it still persists, and seems to have taken root, uncritically, in many quarters of the public.
 
As a third symptom of how survey research is being devalued, there are also those declining response rates. Indeed, they were perhaps our earliest indicator … the canary in the coal mine.
 
A variety of ongoing activities and social trends are also contributing to the devaluation of survey research. Unlike the symptoms we just went over, these seem to have fairly direct effects on the de-legitimation process.
 
One of these we can refer to as things that annoy respondents. This would include telemarketing activities, which we’ve been concerned about now for several decades.
 
Any of you who still have a landline are aware that telemarketing continues to be quite aggressive … and it’s not clear that the Do Not Call registry has had much of an impact.
Unwanted telemarketing and robocalls continue to be common sources of complaints to the Federal Communications Commission. It is estimated that about 3.4 billion robocalls were made in the U.S. in April alone. That’s more than 10 robocalls for every person in the U.S. per month. To make matters worse, in response to consumer complaints, smartphone vendors are developing spam warning features that will warn of or block calls from suspicious telephone numbers, including telephone survey operations centers.
 
Something else that annoys potential respondents are activities such as sugging … and frugging. Sugging, of course, is selling under the guise of surveys… while frugging is fund-raising under the guise of surveys. AAPOR was trying to do something about this more than 25 years ago, when then-Standards Chair Tom Smith wrote several articles about it in AAPOR’s newsletter. Politicians do it. Businesses do it.  Charities do it. Colleges and universities do it, especially when hitting up alumni.
 
These examples … understandably … leave many people very cynical when receiving survey requests.  And I’m sure all of you here today have your own examples you could share.
To be sure, modern life has been made so much more convenient through evolving technologies.  But for many, these can also increase fear, anxiety, and suspicion.
 
The Earl Babbie Center at Chapman University periodically conducts a Fear Survey. In 2014, it found that the top two things that Americans were concerned about were having their identity stolen on the internet … and corporate surveillance of internet activity. Number 4 in that poll was fear of government surveillance. And those results were among actual respondents; non-respondents might well be expected to have even greater anxieties about identity theft and surveillance.
 
And then there are more direct, intentional activities that also serve to de-legitimize survey and public-opinion research. Public attempts to manipulate surveys have a notorious history. In the presidential primaries in 1996, for example, supporters of Patrick Buchanan were reported to have actively sought out exit pollsters in order to inflate apparent support for their candidate.
 
Then there was the famous Chicago newspaper columnist Mike Royko, who frequently encouraged his readers to lie to pollsters. Why would he do that? He argued that they should lie because pollsters were ruining “what used to be the most entertaining and exciting part of an election—the suspense of watching the results trickle in.” In other words, pre-election polls were spoiling Royko’s entertainment.
 
And our friend Arianna Huffington, during her campaign for a Poll-Free America, suggested that those who were not prepared to just stop answering surveys could instead just make up answers.
 
But the key point here is: How can we expect the public to take our surveys seriously when some of our opinion leaders make a mockery of them?
 
One of the iron laws of survey research, I believe, is that, if you don’t like the results, you attack the methodology. Focus on those survey findings you like … all others are flawed or intentionally rigged. When pressed to demonstrate how public opinion polls are rigged, critics will accuse pollsters of deliberate over-sampling of certain types of respondents in order to achieve desired results, among other perceived sins.
 
Actually, those complaints about fake surveys we hear may not be entirely unfounded. As we all know, there do exist serious examples of scientific misconduct involving polls and surveys.
Many of you will recall that, shortly after the 2008 US Presidential election, a polling firm was publicly censured by AAPOR for its refusal to reveal even the most basic details of its methodology. To this day, it’s not clear that any of the polls which that organization claimed to have conducted … were actually real.
 
There have been other notorious examples, as well.
 
And this is an ongoing problem in academia also. Remember, it was not too long ago that a study published by political scientists in Science magazine had to be retracted. There were serious problems with how and whether the survey data were collected as reported.
 
There have been numerous other highly publicized retractions of scientific papers that employed survey data.
 
Research by Daniele Fanelli at Stanford estimates that about 2% of all scientists have falsified data at some time during their careers. The crisis may be more widespread than many of us think.
 
And unfortunately, we now also have new forms of data fabrication to contend with. For me, the most disturbing is the appearance of automated bots that complete online surveys.
 
So we see there are many ways in which survey research can be, and is being, de-legitimized. And many of the seemingly discrete variables to which we can link that effect are, I think, wicked problems.
 
In Wikipedia, wicked problems are described as those that are “difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.”  If they are truly wicked, what can we do about them?
 
No surprise that we might think that more research needs to be done, and not just of the survey-research kind. Qualitative work is going to be essential to helping us understand the relevant group dynamics and how individuals are processing and judging conflicting messages coming across multiple communication channels.
 
At the same time, it is important to note that AAPOR has largely recognized these issues … and has organized to confront them.
 
Consider those things that annoy respondents, for example. AAPOR now has an ad hoc committee investigating the sugging and frugging problems and what constructive steps we might be able to take to ameliorate them. An update on their work was reported at a session earlier today.
 
That session also included a report from another AAPOR committee that is examining how to confront false accusations that are sometimes leveled against surveys for the sin of reporting inconvenient findings. Clearly, this problem grew into a cottage industry during the 2016 election cycle, and nobody pushed back against the false narrative that “all surveys are rigged.”
 
While AAPOR’s mission is to measure and report public-opinion results carefully, not change them… we also have an obligation to set the record straight when confronted with inaccurate information about our research. This ad hoc committee has developed a list of potential strategies which I am hoping we, as an association of concerned researchers, will consider pursuing.
 
And AAPOR established another ad hoc committee in the summer of 2017, led by our incoming President, that did an amazingly quick assessment of the problem of cellular spam warnings and call blocking. We don’t have a clear solution yet, but we at least have a much better understanding of the challenge and what options might be available.
 
AAPOR also has a Task Force looking at Data Falsification.  They will be presenting an update on their work at a session later this afternoon.
 
Yet another AAPOR task force has been working with the American Statistical Association on the problem of improving the survey climate with respect to federal government surveys. We anticipate a final report from that group later this summer.
 
AAPOR is also collaborating with the ASA on a campaign called Count on Stats, which is also focused on the issue of distrust of government statistical data.
 
And of course, we have AAPOR’s Transparency Initiative, which was first conceived in 2010 by then-President Peter Miller. That initiative is now one of AAPOR’s Crown Jewels. It comprises 87 organizational members and counting. We’ve found that the Transparency Initiative is especially attractive to the good citizens within our research community, and has helped improve their routine disclosure of methodological details.
 
It is also available to members of the news media — as a screening mechanism to help them differentiate trustworthy survey sources from those that may not be. Of course, we are hoping the news media will take greater advantage of this resource in the future.
 
This is one of our ongoing challenges: communicating to the public that serious standards do exist for the conduct of credible survey and public opinion research.
 
Clearly, AAPOR has been working for some time to confront some of the wicked problems that collectively de-legitimize our work. Few of us, however, are under any illusion that these various activities alone will solve our legitimacy issue. But I am hopeful that they will serve as platforms from which we can continue launching action-oriented initiatives that openly and transparently address and attempt to manage some very legitimate public concerns about our work.
 
We don’t have all the answers, but I do believe there are three things that will be needed to help us effectively confront these challenges:
  • The first is collective action…
  • The second is public education…
  • And the third is a willingness to engage critics with factual information.
Okay, so we started today with some not too happy news. What’s next?
 
Can you imagine a future in which there is no public-opinion research? Is that a realistic possibility? Let me go on record — with no data to back me up — as being highly confident that AAPOR will be here in another 27 years or so to celebrate its 100th anniversary. How we do public-opinion research then will undoubtedly be different from how it is done today, just as our methods now are different from what they were 27 years ago. 
 
The same goes for the challenges we are facing now. Some of these are indeed wicked problems.  Many of them we can only hope to manage, at best, without ever really solving. And there will certainly be new ones then that we cannot even imagine right now.
 
But here, we might want to take a lesson from — of all places — the US Congress.
Remember this slide? As my friend Bob Oldendick observes, few people like or trust our federal legislature, but everybody loves his or her own Congressperson. And incumbents continue to be re-elected at very high rates. Likewise, even if public-opinion research is not especially popular right now, there continues to be a large and ever-expanding interest in knowing what the public thinks. Indeed, the thirst for the insights that are unique to public-opinion research is stronger than ever.
 
Our challenge, then, is to continue adapting our methods to better measure the opinions of a public that seems very interested in learning about itself, if not always eager to help us do it.
 
And here is where I’m hopeful about our future. There’s a phrase that is new this year in our conference program: AAPOR’s Got Talent. Lots of it. Creativity, ingenuity, dedication, and old-fashioned hard work — these qualities are always on display in generous quantities at our meetings. 
 
And as our membership continues to grow and diversify, so will the approaches we take in confronting our common challenges.
 
Thank you."