American Association for Public Opinion Research


With the high stakes surrounding elections, pollsters feel increased pressure to accurately capture who will win an election. Additionally, with multiple pollsters releasing results on the same basic questions at about the same time, political pollsters want to avoid being seen as the one firm that got it wrong. To avoid raising questions about the accuracy of their results, some political pollsters adjust their findings to match or closely approximate the results of other polls—a practice known as “herding.”
“Herding” specifically refers to the possibility that pollsters use existing poll results to help adjust the presentation of their own poll results. “Herding” strategies can range from making statistical adjustments so that the released results appear similar to existing polls, to deciding whether or not to release a poll at all depending on how its results compare to existing polls.
By drawing upon information from previous polls, herding may appear to increase the perceived accuracy of an individual survey estimate. A troublesome potential consequence of herding is that survey researchers who practice it will produce results that are artificially consistent with one another and may not accurately reflect public attitudes. This perceived consistency of public opinion could instill false confidence about who will win an election, thereby affecting how the race is covered by the media, whether parties devote resources to a campaign, and even whether voters think it is worthwhile to turn out to vote.
The potential problems caused by herding are particularly significant for analysts who take averages of poll results to produce a summary estimate of each candidate’s support, a practice called “aggregation.” CNN, for example, selected the participants for its 2016 Republican presidential debate by averaging the results of 14 polls from August to September of 2015. Such an average is valid only if each survey result used to compute the estimate is an independent measure of public opinion—a requirement that is broken if one or more of the survey results were adjusted to appear similar to the findings of earlier polls. If herding occurred, the final averages would give undue weight to earlier polls and could miss more recent changes in candidate support.
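The effect described above can be sketched with a toy simulation. The numbers and the herding rule below are purely illustrative assumptions (not real polling data or any pollster's actual method): support for a candidate rises over five polls, and the "herded" series pulls each new result halfway toward the running average of previously released polls.

```python
# Hypothetical example: true support rises from 20% to 30% across
# five polling periods. Values are illustrative, not real data.
true_support = [20, 22.5, 25, 27.5, 30]

# Independent polls (sampling noise omitted for clarity) track the truth.
independent = list(true_support)

# "Herded" polls: each new result is pulled halfway toward the running
# average of previously released polls (an assumed, illustrative rule).
herded = []
for t in true_support:
    if herded:
        prior_avg = sum(herded) / len(herded)
        herded.append((t + prior_avg) / 2)
    else:
        herded.append(t)

# A simple polling average treats every poll as an independent estimate.
avg_independent = sum(independent) / len(independent)
avg_herded = sum(herded) / len(herded)

print(round(avg_independent, 1))  # 25.0
print(round(avg_herded, 1))       # 22.9 — lags behind, weighted toward earlier polls
```

Because each herded result echoes earlier releases, the aggregate sits closer to the older, lower numbers and understates the recent rise in support, which is exactly the failure mode the paragraph above describes.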
It is difficult to prove the existence of herding because pollsters rarely, if ever, disclose the practice, which is condemned by most pollsters. Similar poll results are not, by themselves, evidence of herding, as it is possible that public opinion is as stable as the polls suggest. However, it is important to be cognizant of the potential implications of herding. Treating the polls as independent assessments of the state of the race—as polling aggregators typically do—and interpreting similar poll results as a sign of greater clarity is misleading if that agreement results from pollsters taking cues from one another. The bottom line is that one should be careful about interpreting what similar poll results imply—particularly when the polls agree more than one would expect based on chance variation alone.
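The phrase "agree more than one would expect based on chance variation alone" can be made concrete. Under simple random sampling, a single poll's estimate has a known sampling standard deviation, so the spread across independent polls of the same race should be roughly that large. The sketch below, using made-up poll numbers and an assumed common sample size, compares the two; a much smaller observed spread is the kind of suspicious agreement the paragraph warns about.

```python
import math
import statistics

# Hypothetical results (percent support) from five polls of the same
# 50/50 race, each with an assumed sample size of n = 1000.
polls = [49.8, 50.1, 50.0, 49.9, 50.2]
n = 1000

# Expected sampling standard deviation of one poll's estimate, in
# percentage points, under simple random sampling: sqrt(p(1-p)/n).
p = statistics.mean(polls) / 100
expected_sd = math.sqrt(p * (1 - p) / n) * 100

# Observed standard deviation across the released poll results.
observed_sd = statistics.stdev(polls)

print(round(expected_sd, 2))  # 1.58
print(round(observed_sd, 2))  # 0.16
```

Here the polls cluster about ten times more tightly than independent sampling would predict. That alone does not prove herding, but it is the statistical signature one would look for.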
