
Background


The reliability and validity of random digit dial (RDD) landline telephone surveying in the United States have been threatened over the past two decades by concerns about possible nonresponse bias and, in the past decade, by concerns about possible noncoverage bias linked to the growing number of households giving up their landline telephone and embracing a cell phone only (also called "wireless only") lifestyle.

To address the latter concern, researchers in the U.S. during the last eight years began to explore the promise and challenges of surveying persons reached via their cell phone number. On the positive side, as shown in Table 1, experience has revealed that a markedly different demographic mix of respondents can be interviewed when sampling the cell phone RDD frame compared to when sampling the landline RDD frame. In particular, the elusive young adult cohort in most landline RDD surveys is relatively easy to find and interview in cell phone RDD surveys. In addition, as also shown in Table 1, RDD cell phone surveys interview appreciably more minorities (blacks and Hispanics) and men than do RDD landline surveys. One of the many advantages this brings is unweighted samples that more closely match general population parameters when RDD cell phone completions are combined with RDD landline completions before substantive analyses are undertaken.

In theory, calling cell phones increases the chances of making contact with a sampled respondent, as contacts are no longer limited to those times when people are in their homes. Furthermore, a portion of previous non-telephone households in the U.S. are now using inexpensive cell phones on occasion. These heretofore unreachable households/persons in RDD landline surveys might now be reachable via an RDD cell phone survey.

The past decade has shown that as proportionally more people integrate the use of a cell phone into their daily lives, proportionally fewer people are reachable via a traditional landline telephone. This has further eroded the coverage of the general population that can be interviewed via the RDD landline frame. In turn, this has made use of the RDD cell phone frame increasingly more attractive (and necessary) for telephone survey researchers in the U.S.

In the past two years, there has been a noticeable shift away from landline-only RDD sampling to dual frame RDD designs in which both a landline frame and cell phone frame are used.

Table 1
Unweighted Respondent Demographics by Type of RDD Telephone Frame

                          Pew Research Center         The Associated Press
Demographics              Landline    Cell Phone      Landline    Cell Phone
Sex
    Male                     45%         57%             43%         59%
    Female                   55%         43%             57%         41%
Age
    18-34 years              13%         39%             11%         36%
    35-64 years              56%         50%             61%         54%
    65 years+                30%         10%             28%         10%
Race
    White                    79%         67%             81%         74%
    Black                     8%         12%              7%          9%
    Hispanic                  6%         11%              6%         10%
    Other                     7%         10%              6%          7%
Education
    No College               37%         36%             26%         25%
    Some College             25%         28%             28%         30%
    College Grad             38%         35%             46%         45%
Sample Sizes              18,493       6,670          15,438       4,577
Note. The AP surveys were conducted in 2009 and 2010 by GfK-Roper. The Pew surveys were conducted in 2008-2010. Some of the Pew surveys were conducted by Abt SRBI and the others by PSRAI.
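To illustrate the earlier point about combining frames, the short Python sketch below blends the unweighted Pew percentages from Table 1 under a hypothetical 70% landline / 30% cell phone mix of completions. The mix, the variable names, and the rows selected are illustrative assumptions, not figures from this report.

```python
# Minimal illustrative sketch (not from the report): how combining RDD landline
# and RDD cell phone completions shifts unweighted sample composition.
# Percentages are the Pew Research Center columns of Table 1; the 70/30 mix of
# completions is a hypothetical assumption chosen only for illustration.

landline = {"Male": 45, "18-34 years": 13, "65 years+": 30, "White": 79}
cell     = {"Male": 57, "18-34 years": 39, "65 years+": 10, "White": 67}

LANDLINE_SHARE = 0.70        # hypothetical share of completions from the landline frame
CELL_SHARE = 1.0 - LANDLINE_SHARE

for group, ll_pct in landline.items():
    blended = LANDLINE_SHARE * ll_pct + CELL_SHARE * cell[group]
    print(f"{group:<12} landline {ll_pct:>2}%  cell {cell[group]:>2}%  blended {blended:.1f}%")
```

Under this assumed mix, for example, the blended male share is about 48.6%, closer to the male share of the U.S. adult population (roughly 48 to 49 percent) than either frame alone.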

By 2010, these dual frame RDD designs had become the accepted approach to conducting a general population survey in the U.S. via telephone. Thus, it is imperative that the survey research community identify the most cost-effective ways to conduct dual telephone frame surveys and to do so in ways that provide confidence in the data that are gathered and minimize both coverage and nonresponse bias in the findings that are generated.

Unlike in most of the rest of the world, however, cell phone surveying in the U.S. presents researchers with many challenges that must be addressed if valid and reliable findings are to result. To that end, this report aims to help researchers who conduct telephone surveys in the U.S. understand the many issues and make informed decisions regarding cell phone surveys, especially those that are to be combined with a landline survey.

 

Prior History, the 2009-2010 AAPOR Cell Phone Task Force, and This Report

A volunteer AAPOR Cell Phone Task Force was established by the AAPOR Council in 2009 to revise and update the 2008 AAPOR report. The 2010 version is intended to provide survey researchers with information that should be considered when planning and implementing telephone surveys with respondents who are reached via cell phone numbers in the United States. This report is specific to the United States because the telecommunication regulatory and business environment that affects cell phone ownership and usage in the U.S. is quite different from that found in most other countries. 

This report addresses the many issues that apply primarily to RDD cell phone surveys. However, some of the topics discussed also apply to all telephone surveys in the U.S. that reach a respondent on a cell phone, whether by design or otherwise.

Prior to working together on the 2009-2010 Task Force, 14 of the 21 members had worked together on the 2007-2008 AAPOR Task Force that issued the AAPOR cell phone surveying report in 2008. In addition, 10 of the members had worked together as far back as 2002 on prior initiatives concerning cell phones and telephone survey research in the U.S. In 2003, many of them were part of a group of approximately 25 academic, government, and commercial telephone survey experts who met for a two-day Cell Phone Sampling Summit in New York City, which was organized and sponsored by Nielsen Media Research. At this first summit, a wide range of methodological and statistical issues related to cell phone surveying were discussed and many knowledge gaps were identified. Following the 2003 summit, and with the generous support of the U.S. Chief Demographer, Chester E. Bowie, a series of questions was added to a 2004 Current Population Survey supplement to gather national data on the types of telephone services that households use.

In 2005, the second two-day Cell Phone Sampling Summit was organized by Nielsen with a slightly larger group of U.S. telephone survey sampling experts attending.1 At that second summit it was decided that the next meeting to address cell phone surveying in the U.S. should be open to all interested survey researchers. This was further discussed at the January 2006 Telephone Survey Methods II conference in Miami, and planning for the open meeting ensued shortly thereafter. What resulted was a three-day mini-conference within the larger 2007 AAPOR conference in Anaheim.2 The mini-conference included a half-day short course on cell phone surveys, followed by seven consecutive paper and discussion sessions over the next two days. All of these meetings were extremely well attended. In addition, AAPOR Council approved the creation of a special issue of Public Opinion Quarterly (Volume 71, Number 5, 2007: Cell Phone Numbers and Telephone Surveying in the U.S.), which was published in December 2007.3 Many of the members of the Task Force helped to conduct blind reviews of articles submitted to the special issue and/or contributed to the articles published in it.

In approaching the charge given to it by AAPOR's Executive Council, and similar to the decision of the 2007-2008 Task Force, the 2009-2010 Task Force decided it was still premature to try to establish "standards" for the various methodological, statistical, and operational issues. The Task Force thought it was too soon in the history of surveying respondents in the U.S. reached via cell phone numbers to know with confidence what should and should not be regarded as a "best practice." Nonetheless, it was recognized that a great deal had been learned during the past two years by those thinking about and conducting cell phone surveys in the U.S. The Task Force agreed fully that it was time for AAPOR to release updated information, such as that contained in this report, identifying a wide range of "guidelines" and "considerations" about cell phone surveying in the U.S.

As part of the process of creating this report, the Task Force met several times via telephone conference calls from June 2009 through June 2010 and established seven working subcommittees to address each of the following interrelated subject areas:

  • Coverage and Sampling (L. Piekarski, Chair)
  • Nonresponse (C. Steeh, Chair)
  • Measurement (S. Keeter, Chair)
  • Weighting (J. Hall, Chair)
  • Legal and Ethical Issues (H. Fienberg, Chair)
  • Operational Issues (A. Fleeman-Elhini, Chair)
  • Costs (T. Guterbock, Chair)


Each of the subcommittees created a first draft of its section, which was vetted by a meeting of the full Task Force in January 2010. Those sections were further revised and were reviewed by the full Task Force in April 2010. With the 2010 AAPOR conference held in May, the Task Force decided to attend presentations related to cell phone surveying and then meet after the conference to determine what in the report should be further updated or revised. The sections were reviewed and revised in light of the new research presented at the 2010 AAPOR conference, and a version of the Task Force report was sent to the AAPOR Council in July 2010. The report was voted on and approved at the AAPOR Council meeting on September 16, 2010.

Cell phone numbers can enter into telephone samples in several different ways. If the sample is selected from a list, such as members of organizations, or from telephone numbers matched to postal addresses, a researcher may not know whether a number belongs to a cell or a landline phone. Thus, list telephone samples, including those developed from address-based sampling frames, most likely will be a mix of cell phone and landline numbers. In these cases, the inclusion of cell phone numbers has relatively little effect on the sampling process.4 However, when the method for selecting a telephone sample is RDD, multiple dilemmas face the researcher, including whether the designated sample should contain only cell phone numbers, only landline numbers, or both. This report addresses these dilemmas and focuses primarily on telephone surveys using RDD samples.
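As a concrete illustration of the list-cleaning step described in footnote 4, the sketch below separates a small list sample into numbers that may be autodialed and numbers that must be dialed manually because they match a wireless block or a ported-number record. The prefix set, the ported-number set, and the function name are hypothetical stand-ins for the commercial databases a researcher would actually use; this is a minimal sketch under those assumptions, not a compliance tool.

```python
# Minimal illustrative sketch: flagging numbers in a list telephone sample that
# should not be autodialed without prior consent (see footnote 4). The wireless
# prefix set and ported-number set below are hypothetical stand-ins for the
# commercial cell-block and ported-number databases used in practice.

def split_for_dialing(sample, wireless_prefixes, ported_numbers):
    """Return (autodial_ok, manual_only) lists of 10-digit numbers."""
    autodial_ok, manual_only = [], []
    for number in sample:
        digits = "".join(ch for ch in number if ch.isdigit())[-10:]
        prefix = digits[:6]                 # NPA-NXX identifies the assigned block
        if prefix in wireless_prefixes or digits in ported_numbers:
            manual_only.append(digits)      # treat as a cell number: no autodialer
        else:
            autodial_ok.append(digits)
    return autodial_ok, manual_only

# Hypothetical example inputs.
sample = ["(312) 555-0142", "773-555-0199", "2025550123"]
wireless_prefixes = {"773555"}              # NPA-NXX blocks assigned to wireless carriers
ported_numbers = {"2025550123"}             # landline numbers ported to cell service

autodial_ok, manual_only = split_for_dialing(sample, wireless_prefixes, ported_numbers)
print("May be autodialed:", autodial_ok)
print("Manual dialing only:", manual_only)
```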

 




1 See http://www.aapor.org/AAPOR_Main/media/MainSiteFiles/LavrakasShuttles_Cell_Phone_Sampling_Summitt_II_Statements.pdf

2 Considerable appreciation goes to Patricia Moy, Rob Daves, and Frank Newport for their key support of this mini-conference as members of AAPOR Council and leaders of the 2007 AAPOR conference program. 

3 Considerable appreciation goes to Peter V. Miller, editor of Public Opinion Quarterly, for his consistent and crucial support in seeking approval of this special issue from AAPOR Council.

4 In the U.S., all list samples for telephone surveys should be cleaned against cell phone and ported number databases; otherwise, the researcher may inadvertently violate federal regulations by using an autodialer to call a cell phone number for which the owner has not given prior consent.