
Executive Summary


The reliability and validity of random digit dial (RDD) landline telephone surveying in the United States have been threatened in the past two decades by concerns about possible nonresponse bias. They have been further threatened in the past decade by concerns about possible noncoverage bias, linked in part to a growing number of households giving up their landline telephone and embracing a cell phone only (also called "wireless only") lifestyle.

To address the latter concern, researchers in the U.S. have spent the last eight years exploring the promise and challenges of surveying persons reached via their cell phone number. On the positive side, experience has shown that a markedly different demographic mix of the U.S. general population can be interviewed when sampling from the cell phone RDD frame than when sampling from the landline RDD frame. In particular, the young adult cohort, highly elusive in most landline RDD surveys, is relatively easy to find and interview in cell phone RDD surveys. But unlike the case in most of the rest of the world, cell phone surveying in the U.S. presents researchers with many challenges if valid and reliable data are to be gathered.

In 2007, a volunteer Cell Phone Task Force was established by the Executive Council of the American Association for Public Opinion Research (AAPOR) to prepare a report providing telephone survey researchers with information to consider when planning and implementing telephone surveys with respondents who are reached via cell phone numbers in the United States. That report, issued by AAPOR in the spring of 2008, identified a number of areas in which knowledge gaps about cell phone surveying existed and needed to be closed.

Since that time the survey research community has conducted many studies about different aspects of cell phone surveying in the U.S., thus advancing the state of knowledge in this field considerably. Recognizing this, AAPOR's Executive Council decided in 2009 that the Cell Phone Task Force should be reconstituted in order to update the 2008 report to reflect the new knowledge that has been gained in the past two years from (a) a number of empirical studies built into cell phone surveys and (b) the wealth of new experiences gained by cell phone telephone survey practitioners in the United States.

The current report addresses many issues that apply primarily to RDD surveys that sample cell phone numbers, either as stand-alone cell phone surveys or as part of dual frame cell phone and landline RDD surveys. However, some of the matters discussed in this report apply to all surveys in the U.S. that reach cell phone numbers.

The new report covers the same major topics addressed in the 2008 report but with considerably more detail. This new report also addresses other major topics, which could not be addressed when the first report was written because knowledge was then insufficient.

In approaching the charge given to it by AAPOR's Executive Council, the 2009-2010 Cell Phone Task Force concluded that it remains premature to try to establish "standards" on the various issues: it is too soon in the history of surveying U.S. respondents reached via cell phone numbers to know with great confidence what should and should not be regarded as a "best practice." Nonetheless, a great deal has been learned during the past eight years, and in particular in the past two years, by those thinking about and conducting such surveys in the U.S. The Task Force fully agreed that it was time for AAPOR to update the information contained in this report, which identifies a wide range of "guidelines" and "considerations" about cell phone surveying in the U.S. for researchers to consider explicitly.

As part of the process of creating this report, Task Force members met several times via telephone conference calls from June 2009 through April 2010, and established working subcommittees to address each of the following seven interrelated subject areas:

  • Coverage and Sampling (Linda Piekarski, Chair)
  • Nonresponse (Charlotte Steeh, Chair)
  • Measurement (Scott Keeter, Chair)
  • Weighting (John Hall, Chair)
  • Legal and Ethical Issues (Howard Fienberg, Chair)
  • Operational Issues (Anna Fleeman-Elhini, Chair)
  • Costs (Thomas Guterbock, Chair)


What follows is a summary of each of the major sections of the report: 

Coverage and Sampling. The RDD cell phone frame extends coverage of the U.S. general population to many demographic groups (young adults, males, minorities, etc.) that have become woefully hard to survey via the landline RDD frame. Using the cell phone frame therefore helps telephone survey researchers reach more representative unweighted samples of the general public.

However, there are many coverage and sampling issues concerning cell phone numbers and frames that researchers must understand in order to choose the most appropriate design for telephone surveys in the United States. This section of the report lists many considerations that bear on the decision of which frame(s) to use when planning RDD surveys of people reached on a cell phone or a landline. The section also discusses the critical choice researchers face between an overlapping dual frame design (with no screening of the cell phone sample based on the respondent's telephone service type and usage) and a dual frame design that screens the cell phone sample for cell phone only status (and possibly for cell phone mostly/mainly status). At this time, the Task Force does not think it appropriate to recommend one of these designs over the other in all cases. Instead, either design might be the better choice given the particulars of a given survey.

The important issue of how to integrate a landline sample with a cell phone sample is also addressed in this section. All of the coverage/sampling decisions are particularly challenging when a survey is less than national in scope.

Finally, whether RDD telephone surveys in the U.S. that sample cell phone numbers should deploy a within-unit respondent selection technique remains unclear and awaits future research on (1) whether it needs to be done and, if so, (2) when it should be done and (3) how best to do it.

Nonresponse. Nonresponse in RDD cell phone surveys is somewhat greater than in comparable RDD landline surveys in the U.S. However, as response rates in traditional landline RDD surveys have continued to drop, the differential between rates in RDD landline surveys and those in RDD cell phone surveys has narrowed.

Noncontacts and refusals as sources of nonresponse are somewhat more prevalent in cell phone surveys than in landline surveys with comparable numbers of callbacks. However, there are reasons to expect that the proportion of noncontacts in cell phone surveys will decrease over time. In contrast, there are formidable obstacles to addressing the challenges posed by refusals in RDD cell phone surveys that are likely to remain in the foreseeable future. For example, there are many reasons that refusal conversion attempts are less productive with RDD cell phone samples than they are with RDD landline samples. 

The accurate dispositioning of the numbers in a sample, both on a temporary basis during the field period and on a final basis at the end of the field period, is more troublesome with cell phone samples. New disposition codes are needed for cell phone surveys and some codes used for landline surveys either have no relevance or mean something different in a cell phone survey.

Cell phone RDD surveys also make it harder than landline RDD surveys for call centers and researchers to resolve the eligibility of the many numbers whose status remains uncertain at the end of the field period. This in turn makes the calculation of response rates for cell phone surveys more complex and less reliable than for landline surveys. This section of the report presents a discussion, with examples, of how to calculate a weighted overall dual frame response rate that combines the rates from the cell phone and landline samples.
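
To make that calculation concrete, the following is a minimal sketch in Python using AAPOR's RR3 formula. All case counts, the eligibility estimates (e), and the frame share below are hypothetical and chosen only to illustrate the arithmetic; the report's own discussion governs the actual combining conventions.

    # Minimal sketch of a weighted overall dual frame response rate.
    # All counts and parameters below are hypothetical.

    def rr3(I, R, NC, O, UH, UO, e):
        """AAPOR RR3: completes over estimated eligible cases.
        I = completed interviews, R = refusals, NC = non-contacts,
        O = other eligible non-interviews, UH/UO = unknown-eligibility
        cases, e = estimated share of unknowns that are eligible."""
        return I / (I + R + NC + O + e * (UH + UO))

    # Frame-specific rates from hypothetical case counts.
    rr_landline = rr3(I=800, R=600, NC=400, O=50, UH=2000, UO=500, e=0.30)
    rr_cell     = rr3(I=400, R=500, NC=600, O=80, UH=3000, UO=700, e=0.20)

    # Combine the two rates, weighting each frame by its (weighted)
    # share of the completed interviews -- one common convention.
    share_landline = 0.65
    overall = share_landline * rr_landline + (1 - share_landline) * rr_cell
    print(f"landline={rr_landline:.3f} cell={rr_cell:.3f} overall={overall:.3f}")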

The processing of cell phone samples also requires many new operational considerations that are not faced in processing landline samples, and which will further increase nonresponse if not handled well. All of these challenges related to nonresponse in U.S. cell phone surveys make them more expensive to conduct than comparable landline surveys (see Costs section). 

In terms of nonresponse bias in cell phone surveys, little is known. However, there is research suggesting that survey topics related to technology are likely to yield somewhat biased data due to differential nonresponse, given the tendency of the most technologically sophisticated to be more likely to agree to participate in a cell phone survey. Whether such bias can be reduced or eliminated through post-stratification weighting remains unclear. Much more research on this topic is needed in the coming years.

Measurement. There are two primary measurement issues concerning cell phone surveying. First, there is concern about the potential for lower data quality in cell phone surveys. There are many reasons for this concern, including factors associated with audio quality, asking about sensitive topics while a respondent is in a public place, and asking about cognitively complex topics while a respondent is multitasking.

Despite these concerns, most of the empirical evidence to date regarding cell phone respondents does not support the broad assumption of poorer data quality compared to what landline respondents provide. That is, there is no evidence to suggest that all or even most data gathered by cell phone are of poorer quality than their landline counterparts would be. 

However, the reader is cautioned that "few significant differences" does not necessarily imply equivalence in data quality, as there is some evidence to suggest that under certain circumstances, including when asking certain types of questions, concerns about cell phone data quality are not unfounded. Therefore, the Task Force believes it advisable that researchers remain attentive to this data quality concern. Future experiment-based research (cf. Kennedy, 2010) is needed to know with confidence whether, and how, data quality is affected by gathering data from a respondent on a cell phone.

Second, many new survey items may be needed for use in adjusting cell phone samples prior to analyzing their data. Examples of some of these items appear in Appendix B. However, as discussed in detail in the Weighting section of the report, the reliability and validity of these new items have not yet been established.

Weighting. This section focuses mostly on two types of RDD sampling designs: (1) non-overlapping dual frame designs and (2) overlapping dual frame designs. Weights would almost always be required if both cell and landline RDD frames are used, especially if respondents having both types of service are interviewed from both frames (i.e., the dual frame "overlapping" design without screening). However, there are a few instances when it may be permissible not to use weights. For example, weights might not be needed in a survey that uses only one frame, when no attempt is made to generalize to those who could only be contacted via the other frame.
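
To illustrate why the overlapping design requires weights, one classic approach is a Hartley-type dual frame estimator, shown here only as a sketch and not as a method the report prescribes. The two single-service domains are estimated from their own frames, while the overlap domain (people with both services) is estimated from both frames and blended with a mixing factor lambda:

    \hat{Y} \;=\; \hat{Y}_{\text{landline-only}} \;+\; \hat{Y}_{\text{cell-only}}
    \;+\; \lambda\,\hat{Y}_{\text{both}}^{(\text{landline frame})}
    \;+\; (1-\lambda)\,\hat{Y}_{\text{both}}^{(\text{cell frame})},
    \qquad 0 \le \lambda \le 1.

This is one reason respondents' telephone service type must be measured: dual-service respondents need weights reflecting their chance of selection from either frame.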

A good deal of discussion is presented in the section on steps that researchers should consider in applying weights to their cell phone and landline RDD samples in dual frame telephone surveys. Discussion also is provided about data that researchers should consider gathering from respondents to aid any post-stratification they may perform. Appendix B shows examples of questions some prominent survey organizations have used for these purposes.
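
As one concrete illustration of a common post-stratification technique, here is a minimal raking (iterative proportional fitting) sketch in Python. The two margins, their population targets, and the ten respondents are all invented for illustration; they are not targets from the report or Appendix B.

    import numpy as np

    # Hypothetical respondent categories: phone service and age group.
    service = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])  # 0=landline, 1=cell-only
    age     = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])  # 0=18-34, 1=35+

    margins = [
        (service, {0: 0.70, 1: 0.30}),  # hypothetical population shares
        (age,     {0: 0.30, 1: 0.70}),
    ]

    w = np.ones(len(service))
    for _ in range(50):                 # iterate until both margins match
        for values, targets in margins:
            total = w.sum()
            for cat, t in targets.items():
                m = values == cat
                w[m] *= t * total / w[m].sum()

    w *= len(w) / w.sum()               # rescale to the sample size
    print("cell-only share:", round(w[service == 1].sum() / w.sum(), 3))
    print("18-34 share:   ", round(w[age == 0].sum() / w.sum(), 3))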

However, there remain a number of important unknowns and uncertainties about the weighting needed to improve the accuracy of RDD cell phone samples, and this section of the report addresses the many questions that prudent researchers need to consider when thinking about weighting an RDD dual frame sample. This is the most complex and challenging set of knowledge gaps currently facing U.S. telephone researchers who work with data from RDD cell phone samples. Until reliable methods have been devised, tested, and refined by the survey research community, researchers will have to accept some uncertainty (and possible discomfort) about whether a cell phone survey data set has been made as accurate as weighting can make it. A particularly troublesome issue is the dearth of highly accurate population parameters to use in weighting cell phone samples of regional, state, and local areas.

Finally, the Task Force believes it is vitally important for researchers to disclose and clearly describe how they constructed any weights used in their analyses of cell phone survey data or to describe the basis on which they decided not to weight, if that was their decision.

Legal and Ethical Issues. Due to federal telecommunication laws and regulations in the U.S., those who conduct surveys with people who are reached on a cell phone must avoid using autodialers (including self-dialing modems and predictive dialers) to place calls, unless they have prior permission of the cell phone owner to do so. This increases the time and cost of processing RDD cell phone samples considerably. 

Presently, given federal and state laws on text messaging, it is not advisable to use text messages to make advance contact with those sampled at a cell phone number.

From an ethical perspective, the report addresses several cell phone related issues, including how to think about: (1) time of day for calling; (2) maximum number of callbacks and the frequency of callbacks so as to avoid harassment and avoid violating various state laws on harassment via the telephone; (3) privacy issues; (4) safety issues; (5) contacting minors; (6) the permitted use of the Neustar databases; (7) transmitting accurate Caller ID information when dialing a cell phone; and (8) keeping an internal Do Not Call list for cell phone owners who request that they not be called back.

Operational Issues. In the past few years, a great deal has been learned about many important operational issues pertaining to conducting RDD cell phone surveys of the U.S. general population. As survey organizations have gained more experience conducting surveys in the U.S. with respondents reached via their cell phones, greater confidence has developed concerning the "best" approaches for generating quality data in cell phone surveys.

This section of the report includes detailed discussion of: (1) calling rules and protocols, including how to implement various types of eligibility screening that cell phone surveys often require and the differences between refusal conversion methods in cell phone surveys versus landline surveys; (2) differences between the processing of numbers from the two survey frames when planning callbacks and how to disposition certain calling outcomes in cell phone surveys compared to landline surveys; (3) the use of messages left on voice mail; and (4) how and when to implement remuneration and/or incentives with cell phone respondents.

Almost all of these operational issues affect how interviewers are trained to conduct cell phone surveys. The Task Force believes that interviewers should receive special training before they are assigned to cell phone surveys, and that ideally an interviewer should have experience with landline surveys before being trained to work on cell phone surveys. Discussion also is presented about the assignment of interviewers to cell phone surveys so as to avoid possible burn-out from the demoralizing effects of the very low productivity that often results when trying to complete interviews in cell phone surveys.

Cost Issues. During the past few years, many survey firms have gained experience with the differential costs of conducting cell phone RDD surveys compared to landline RDD surveys. Extensive discussion is provided in this section of the report about factors that drive the cost differential between cell phone and landline surveys in the U.S., including: (1) the dialing method, (2) interviewer time, (3) cost of the sample, (4) remuneration, (5) working number rate, (6) contact rate, (7) eligibility rate, (8) cooperation rate, and (9) interview length.
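
To show how several of those rate factors compound, the following back-of-the-envelope Python sketch computes a cost per completed interview. Every rate and dollar figure is hypothetical and chosen only to illustrate the arithmetic; none is taken from the Task Force's survey.

    # Hypothetical cost per completed interview.
    def cost_per_complete(cost_per_number, working_rate, contact_rate,
                          eligibility_rate, cooperation_rate, incentive=0.0):
        """Cost of working one sampled number divided by the expected
        completes per number, plus any per-complete incentive."""
        completes = (working_rate * contact_rate *
                     eligibility_rate * cooperation_rate)
        return cost_per_number / completes + incentive

    # Manual dialing and extra screening make each cell number costlier.
    landline = cost_per_complete(2.00, working_rate=0.55, contact_rate=0.50,
                                 eligibility_rate=0.90, cooperation_rate=0.30)
    cell = cost_per_complete(3.50, working_rate=0.65, contact_rate=0.45,
                             eligibility_rate=0.55, cooperation_rate=0.25,
                             incentive=5.00)
    print(f"landline ~ ${landline:.2f}, cell ~ ${cell:.2f} per complete")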

The Task Force also conducted what it believes is the first survey of U.S. survey organizations known to have experience conducting dual frame telephone surveys. Details were gathered about various cost-related factors in 38 dual frame RDD surveys. The results show the cost differential between cell phone and landline surveys and how that differential is associated with factors such as whether the survey used an overlapping or nonoverlapping sampling design and whether the survey was national or non-national in scope. The findings show that the cost per completion in a U.S. RDD cell phone survey is most often at least twice that of a completion in a U.S. RDD landline survey, and under certain design conditions can be three or four times as high.

The Cost section ends with a discussion of the "costs" to sampling precision (as indicated by design effects and effective sample size) when researchers decide how to allocate their final dual frame sample between the cell phone and landline frames. Appendix C provides discussion of a cost allocation model developed for the AP-GfK Poll that will help researchers think more clearly about the "costs" of the dual frame sampling designs they choose to deploy.
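
For readers who want the underlying arithmetic, here is a minimal Python sketch of Kish's approximate design effect from unequal weighting and the resulting effective sample size; the six weights below are hypothetical.

    import numpy as np

    # Kish's approximate design effect from unequal weights:
    # deff = n * sum(w^2) / (sum(w))^2, i.e., 1 + CV^2 of the weights.
    w = np.array([0.6, 0.8, 1.0, 1.2, 1.5, 2.4])   # hypothetical final weights
    deff = len(w) * np.sum(w**2) / np.sum(w)**2
    n_eff = len(w) / deff                           # effective sample size
    print(f"deff = {deff:.3f}, effective n = {n_eff:.1f} of {len(w)}")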

Recommendations. In addition to the suggestions and considerations discussed in the above sections of the report, the Task Force makes three recommendations concerning disclosure: (1) researchers should explain the method by which the cell phone numbers used in a survey were selected; (2) if an RDD telephone survey does not sample cell phone numbers, researchers should explain how excluding cell phone owners might or might not affect the survey's results; and (3) researchers should explain the decisions made concerning weighting of cell phone samples, including why the sample was not weighted, if in fact that was the case.

Additional Readings and Glossary. The report also provides additional readings from the large and growing research literature on RDD cell phone surveying in the U.S., along with an updated glossary of terms related to cell phone surveys that may not be familiar to all readers.

Appendices. Three appendices are included to provide supplementary information about sampling, measurement, and costs:

  • Appendix A (written by Michael Link of The Nielsen Company) covers "Address-Based Sampling (ABS) as an Alternative to Sampling Cell Phone Exchanges" and explains how the ABS frame provides an alternative approach for including cell phone only households/persons in a survey.
  • Appendix B (assembled mostly by Leah Melani Christian of the Pew Research Center) covers "Examples of Questions Used by Major Survey Organizations for the Purposes of Weighting Cell Phone Samples," and lists the wording of many survey items from six major survey organizations that have been devised and used in the past few years for gathering information about telephone service and usage in the U.S. These are the data that are often needed to help weight dual frame telephone surveys.
  • Appendix C (written by Robert Benford of GfK Custom Research North America) covers "Considerations for Sample Design, Estimates, Weighting and Costs," and provides a perspective on important implications that result when researchers decide how to apportion the total number of completions that will be achieved in dual frame telephone surveys between the landline and cell phone RDD frames.
