American Association for Public Opinion Research
Herding
With the high stakes surrounding elections, pollsters feel increased pressure to capture accurately who will win. Additionally, with multiple pollsters releasing results on the same basic questions at about the same time, political pollsters want to avoid being seen as the one firm that got it wrong. To avoid raising questions about the accuracy of their results, some political pollsters adjust their findings to match or closely approximate the results of other polls, a practice known as “herding.”
“Herding” refers specifically to the possibility that pollsters use existing poll results to adjust the presentation of their own. Herding strategies range from making statistical adjustments so that released results appear similar to existing polls, to deciding whether to release a poll at all depending on how its results compare to existing polls.
By drawing on information from previous polls, herding may appear to increase the accuracy of an individual survey estimate. A troublesome potential consequence, however, is that researchers who herd produce results that are artificially consistent with one another and may not accurately reflect public attitudes. This apparent consistency of public opinion can instill false confidence about who will win an election, thereby affecting how the media cover the race, whether parties devote resources to a campaign, and even whether voters think it is worthwhile to turn out to vote.
The potential problems caused by herding are particularly significant for analysts who average poll results to produce a summary estimate of each candidate’s support, a practice called “aggregation.” CNN, for example, selected the participants for its 2016 Republican presidential debate by averaging the results of 14 polls from August to September of 2015. Such an average is valid only if each survey result used to compute it is an independent measure of public opinion, a requirement that is broken if one or more of the results were adjusted to resemble the findings of earlier polls. If herding occurred, the final average would give unfair weight to earlier polls and could miss more recent changes in candidate support.
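A minimal sketch of the aggregation logic described above, using hypothetical poll numbers (the candidate shares and sample sizes are invented for illustration). It computes a simple unweighted average and the standard error that average would have if every poll were an independent sample, which is exactly the assumption herding undermines:

```python
import math

# Hypothetical polls: (candidate support in %, sample size).
polls = [(52.0, 800), (51.0, 1000), (53.0, 900), (52.5, 750)]

# Simple unweighted aggregate, as many polling averages use.
shares = [share for share, _ in polls]
average = sum(shares) / len(shares)

# Under independence, each poll contributes binomial sampling
# variance p(100-p)/n, and the variance of the mean of k polls
# is the sum of those variances divided by k^2.
variances = [share * (100 - share) / n for share, n in polls]
se_avg = math.sqrt(sum(variances)) / len(polls)

print(f"average = {average:.3f}%, SE under independence = {se_avg:.3f} pts")
```

The standard-error line only holds when the polls are independent; if pollsters adjusted toward one another, the true uncertainty of the average is larger than this formula suggests.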
It is difficult to prove that herding has occurred, because pollsters rarely, if ever, disclose the practice, which most pollsters condemn. Similar poll results are not, by themselves, evidence of herding; it is possible that public opinion is as stable as the polls suggest. However, it is important to be cognizant of herding’s potential implications. Treating the polls as independent assessments of the state of the race, as polling aggregators typically do, and treating the similarity of poll results as suggesting greater clarity is misleading if that agreement results from pollsters taking cues from one another. The bottom line is that one should be careful about interpreting what similar poll results imply, particularly when the polls agree more closely than chance variation alone would predict.
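One common informal diagnostic for the "more agreement than chance allows" situation is to compare the observed spread of the polls to the spread that pure sampling variation would produce. The sketch below uses hypothetical numbers; a ratio well below 1 is consistent with (but does not prove) herding:

```python
import math
import statistics

# Hypothetical final-week polls: (candidate support in %, sample size).
polls = [(48.9, 900), (49.1, 1100), (49.0, 1000), (49.2, 850), (48.8, 950)]

shares = [share for share, _ in polls]
observed_sd = statistics.stdev(shares)

# Expected SD if each poll were an independent binomial draw around
# the same true value (ignoring design effects and mode differences).
p_bar = statistics.mean(shares)
expected_sd = math.sqrt(
    statistics.mean(p_bar * (100 - p_bar) / n for _, n in polls)
)

# Ratio << 1: the polls agree more than sampling variation allows.
print(f"observed SD = {observed_sd:.2f}, expected SD = {expected_sd:.2f}, "
      f"ratio = {observed_sd / expected_sd:.2f}")
```

Real diagnostics must also account for house effects, field dates, and differing question wording, so this comparison is a first-pass check, not a test of herding.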