CONCLUSIONS AND DISCLOSURE RECOMMENDATIONS

In the two years since AAPOR issued the first Cell Phone Survey Task Force report in 2008, a great deal has been learned about cell phone surveys in the United States. The 2010 edition of the report incorporates what the Task Force members believe to be the key implications of the new research and of the lessons and experience researchers have gained while conducting U.S. cell phone surveys since 2007. Nevertheless, a great deal remains to be learned before researchers can proceed with complete confidence in making many important decisions about how to design and implement telephone surveys in the U.S., especially surveys that strive to accurately measure the behaviors, experiences, cognitions, perceptions and/or attitudes of the general public.

In terms of Sampling and Coverage, good RDD cell phone samples are available for researchers to use. The cell phone RDD frame has consistently been demonstrated to provide better coverage of a number of important demographic groups in the U.S. than the landline RDD frame. However, to date, cell phone RDD samples for the U.S. are not as efficient as landline RDD samples, for many reasons. Possibly the most basic decision researchers need to make is whether they will use a cell phone sample to supplement a landline sample and, if so, whether the dual frame design will be overlapping (with no screening for telephone service and usage) or nonoverlapping (e.g., screening the cell phone sample for cell phone only persons/households). The Task Force believes that, at this time, neither of these two basic dual frame designs is always the preferred choice. That may change in the next few years, but for now researchers need to think very carefully about how best to balance the many issues and implications associated with their sampling design decisions.

In terms of Nonresponse, cell phone response rates trend somewhat lower than comparable landline response rates, but the gap between the rates for the two frames is closing, likely because landline response rates continue to drop faster than cell phone response rates. Research needs to be conducted to more fully understand the size and nature of differential nonresponse in dual frame telephone surveys and the possible bias this may add to survey estimates. Future research also needs to seek a better understanding of how dual service users (those with both a cell phone and a landline) can best be contacted and successfully interviewed via telephone.

In terms of Measurement, to date there is no compelling evidence that data gathered via cell phone is consistently of lower quality than data gathered via landline. But the Task Force recommends that researchers continue to be vigilant in studying possible differences, because the research that has found such differences trends in the direction of slightly lower quality among cell phone respondents under various circumstances. Furthermore, there are many logical reasons to anticipate factors that threaten the quality of some of the data gathered from cell phone respondents, especially when they are interviewed away from home and/or are engaged in other distracting behaviors while being interviewed.

In terms of Weighting, there is no single approach to weighting dual frame surveys that the Task Force advises all telephone researchers to follow. As discussed in the section on Weighting, there are many considerations researchers need to take into account when deciding how best to weight their dual frame and single frame telephone samples. The Task Force notes that weighting U.S. national telephone surveys is likely to be a less complex and more effective process than weighting non-national surveys, due in part to the limited range of variables for which accurate population parameters exist at non-national levels. The Task Force urges all researchers to be forthcoming in disclosing the decisions they make about weighting their telephone samples, including a decision not to weight.

In terms of Legal and Ethical Issues, the Task Force affirms that U.S. cell phone numbers should be manually dialed unless a survey organization has gained express prior consent from the cell phone owner. The Task Force also encourages researchers to carefully consider the ethical implications related to respondent safety and privacy, the number and frequency of callbacks, and any remuneration offered to cell phone respondents.

In terms of Operations, survey firms are urged to review all production systems used to call and gather data from respondents reached on a cell phone. This includes (1) the scripts used to screen respondents for various forms of eligibility, (2) how interviewers are trained to gain initial cooperation from, and screen, those reached on their cell phones, (3) how interviewers are trained to gain cooperation from eligible cell phone respondents, (4) how interviewers are trained to gather data from cell phone respondents, and (5) how interviewers are assigned to cell phone samples. Survey organizations without adequate experience conducting cell phone surveys should recognize the need to carefully plan how the interviewers who will work cell phone samples are trained and assigned.

In terms of Costs, cell phone completions generally have been found to be approximately twice as expensive as otherwise comparable landline completions, and sometimes upwards of four times as expensive, e.g., when nonoverlapping dual frame designs are required, especially those that are non-national in scope. There are many reasons these additional costs are incurred, as discussed in the text of the report. Researchers are urged to think carefully about the true “costs” of their sampling design decisions and of how they divide a survey’s final completions between the cell phone and landline frames. This includes the cost implications of a dual frame survey’s weighting, design effects, and effective sample size.
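
To make the interplay of design effect, effective sample size, and cost concrete, consider a worked illustration with purely hypothetical figures (they are not drawn from any study cited in this report). Suppose weighting a dual frame sample of n = 1,000 completions produces an overall design effect of deff = 1.6. The effective sample size is then

    n_eff = n / deff = 1,000 / 1.6 = 625

and, under Kish’s approximation, the weighting component of the design effect is deff_w ≈ 1 + CV²(w), where CV(w) is the coefficient of variation of the weights. In this illustration the survey buys the precision of roughly 625 simple random sample interviews, so the meaningful comparison across candidate allocations is cost per effective completion, not cost per interview: a design with a higher nominal cost per case can still be cheaper per unit of precision if it lowers the design effect.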

 

Disclosure Recommendations

As can be seen in this report, despite the new information learned in the past two years about how best to conduct good quality and cost-efficient cell phone surveys in the U.S., a good deal of very important new research remains to be conducted before telephone survey researchers can carry out RDD surveys of persons reached on their cell phones with the full confidence in the findings that users of those data reasonably expect.

In light of this, there are few recommendations the Task Force believes can be made with confidence at this time. However, as a result of the developments discussed in this report, and as an important step in applying survey methods to cell phones in the U.S., the Task Force makes the following disclosure-related recommendations:

  1. All telephone surveys should disclose whether the sample includes only landline numbers, only cell phone numbers, or both, and how the numbers were selected from their respective frames.

RDD surveys without a cell phone augmentation should include in their methods report and in the survey information that accompanies published findings (e.g., fielding dates, response rates, margin of sampling error, etc.) that “persons residing in households with no landline telephone are not included in the results.” If researchers believe that they have produced unbiased estimates without the cell phone only segment, this belief and the reason for it should be discussed directly in the report of findings, because the topic is no longer ignorable and should not be lightly dismissed.1

  2. All RDD telephone surveys with samples that contain cell phone numbers should fully disclose how any weights have been constructed and what population/universe estimates have been used to post-stratify, recognizing that many such parameters are not available at subnational levels and may not be very accurate even when estimates are available.
  3. RDD telephone surveys targeting subgroups in the U.S. with substantial percentages of adults who live in cell phone only households (e.g., 18- to 29-year-olds, renters, and those below the poverty threshold) should sample cell phone numbers or, if this is not feasible, discuss how excluding cell phone numbers may affect the results.

These recommendations further two goals already explicit in AAPOR’s Standards and Best Practices for Survey Research – (a) selecting samples that well represent the population to be studied and (b) disclosing all methods in order to promote evaluation and replication. These recommendations also are fully consistent with AAPOR’s Transparency Initiative. Furthermore, the Task Force believes that adhering to these disclosure standards will aid in the interpretation of RDD telephone survey results in the U.S., both in general and during the 2010 election cycle.



1 Post-stratification weighting in landline RDD surveys, while under increasing stress for young adults, minorities, low income groups, etc., may correct the demographic picture for the absence of cell-only groups for the subject under study, but it may not redress what is potentially an unknown bias. When a landline only RDD survey is being proposed, the cell phone only population that will be excluded should be described as fully as possible so that the impact on generalizing the survey findings can be evaluated. Just as nonresponse is better understood with a follow-up nonresponse study, some level of cell phone only study should perhaps also be proposed for what it might reveal or suggest (assuming a full cell phone only complement is not affordable for the main study). Clearly, with nonresponse issues and cell phone only issues co-existing in landline RDD surveys, examining them with even modest nonresponse and cell phone only studies will increase costs. This means that the cost of a landline RDD survey that cannot afford to add a full cell phone frame component can only go up if researchers want even minimal insight into nonresponse and noncoverage with which to evaluate the quality of a landline only RDD survey.