By Reg Baker, Associate Standards Chair
A special AAPOR task force has concluded that there is no theoretical basis to support population inferences or claims of representativeness when using online survey panels recruited by nonprobability methods. Nonetheless, in an 81-page report issued in March after nearly 18 months of study, the task force also recognizes that samples drawn from nonprobability panels can be valuable for other kinds of research and hypothesis testing, provided that inference to a larger population is not among the goals.
AAPOR’s Online Panel Task Force report also notes the wide variability in panel characteristics across the industry and offers some suggestions for selecting a company to work with. It also identifies a number of areas in which AAPOR might do additional work, such as the development of better metrics and rates, disclosure standards, and updated guidelines and best practices.
The Executive Council established the task force in September 2008 at the behest of the AAPOR Standards Committee and charged it with “reviewing the current empirical findings related to opt-in online panels utilized for data collection and developing recommendations for AAPOR members.” The Council further specified that the charge did not include development of best practices, but rather that the task force would “provide key information and recommendations about whether and when opt-in panels might be best utilized and how best to judge their quality.”
Early on, the task force agreed to focus on online panels recruited by nonprobability methods. These panels dominate online research, and, given their departure from traditional sampling methods, the task force felt they were most in need of evaluation. The report describes the different approaches used to recruit, manage, and sample from online panels and evaluates those approaches from a total survey error perspective. It includes an extensive, although not necessarily exhaustive, review of the relevant literature and takes note of recent attempts by professional and industry associations to provide guidelines for their use.
The task force consisted of Reg Baker, Stephen Blumberg, Mike Brick, Mick Couper, Melanie Courtright, Mike Dennis, Don Dillman, Marty Frankel, Philip Garland, Bob Groves, Courtney Kennedy, Jon Krosnick, Sunghee Lee, Paul Lavrakas, Michael Link, Linda Piekarski, Kumar Rau, Doug Rivers, Randall Thomas and Dan Zahs.
A panel session at the annual conference will give attendees an opportunity to discuss the report and its findings with task force members.