American Association for Public Opinion Research

Operational Issues in RDD Cell Phone Surveys

Previous: Legal and Ethical Issues
in RDD Cell Phone Surveys

 Table of Contents

Next: Costs in RDD
Cell Phone Surveys

As detailed in the Nonresponse section of this report, one problematic aspect of RDD cell phone surveys in the U.S. is their low response rates, which generally have trended below those of list-assisted RDD landline surveys. There are myriad reasons for the low response rates, such as the inability to send advance notification to the sampled respondent, the perception of many that their cell phone is a private form of communication that strangers (e.g., an interviewer) should not be calling, the cost of minutes for the incoming call, and the potential to reach a respondent anywhere (e.g., shopping, at work, driving).

Over the last few years, many research organizations have conducted cell phone surveys in the U.S., and some of these surveys have included experimental designs to test the most effective operational methods for increasing response rates and potentially limiting nonresponse bias. In this section, seven operational topics related to the implementation of cell phone surveying are addressed: (1) calling rules/protocols, (2) call dispositions, (3) voice mail messages, (4) scheduled callbacks, (5) remuneration and incentives, (6) interviewer training, and (7) interviewer assignments to cell phone samples.

Calling Rules/Protocols

As with RDD landline surveys in the U.S., calling an RDD cell phone sample should be attempted on different days of the week and at different times of day. Cell phone surveys carried out to date have used different calling protocols. These have included calling mainly during early evenings and on weekends when most users have free service; however, some studies suggest that contact and cooperation may not be that different across the different day-parts. More research is required to determine the optimal calling pattern across different days and time slots for RDD cell phone surveying in the U.S.

Geographic Mobility. Furthermore, because cell phones can be in use in geographical areas other than where the cell phone’s area code is located (e.g., a respondent is away on business or vacation, or simply has moved to another location), calling windows may need to be modified to reduce the chances of reaching a respondent who has moved to, or is currently in, a different time zone at a local time considered too early or too late for calling there. At present, it is unclear what percentage of the U.S. cell phone population may be affected by this consideration, but it is likely to increase over time.
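To illustrate, here is a minimal sketch of a time-zone-aware calling-window check. The function name, window bounds, and example time zones are hypothetical; moreover, a real system would need some way to estimate where the respondent currently is, which the sample frame itself does not provide.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def ok_to_call(utc_now, tz_name, earliest_hour=9, latest_hour=21):
    """Return True if the local time in tz_name falls inside the calling window."""
    local = utc_now.astimezone(ZoneInfo(tz_name))
    return earliest_hour <= local.hour < latest_hour

# A number with a New York area code whose user is traveling in Los Angeles:
now = datetime(2024, 5, 1, 13, 30, tzinfo=timezone.utc)
print(ok_to_call(now, "America/New_York"))     # 9:30 a.m. local: within the window
print(ok_to_call(now, "America/Los_Angeles"))  # 6:30 a.m. local: too early to dial
```

In practice the area code would only supply a first guess at the time zone; the point of the sketch is that the check must be run against the respondent's likely current location, not the number's nominal one.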

Geographic Screening and Other Eligibility Screening Implications. As the size of the geopolitical area for the target population of an RDD telephone survey decreases, the need to geographically screen people who are reached increases. This holds for landline RDD surveys and for cell phone RDD surveys. However, for cell phone surveying, researchers face the need for geographic screening in nearly every study that is not national in scope, including those carried out at the state level. Thus, in most non-national cell phone surveys some type of geographic screening will be required to screen out those who do not live within the area the survey covers.

If the area to be covered by the survey has a well-known and well-understood name (e.g., “Illinois” or “Cook County” or “Chicago”), the question that is asked of those reached on their cell phone is straightforward, although it should be worded in a manner that does not tip off the person being spoken to as to what answer will qualify or disqualify the potential respondent to be interviewed. Thus, for example, it would be better to ask, “In what county do you live?” rather than asking, “Do you live in Cook County?” The latter will yield far more false negatives (errors of omission).

If the area to be covered by the survey does not have a well-known or well-understood name (e.g., Chicago’s “Northside” may be well known, but isn’t reliably understood), the questions that are asked of those reached on their cell phone are not straightforward (cf. Lavrakas, 2010). Again, these questions should be worded so that the person being spoken to is not tipped off as to what answers will qualify or disqualify her/him from being eligible for the survey. It is highly recommended that researchers pilot test the accuracy of their geographic screening sequence for small-area cell phone surveys so as to avoid both false positives (i.e., those who answer the screening questions inaccurately and get screened in when they should have been screened out) and false negatives (i.e., those who answer the screening questions inaccurately and get screened out when they should have been screened in).
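The open-ended screening approach described above can be sketched as follows. The function, the county set, and the Cook County example are hypothetical; the key design choice is that the qualifying answers live in the screening logic, not in the question wording, so the respondent cannot infer which answer screens her/him in.

```python
# Hypothetical screener for a survey of Cook County: the interviewer asks the
# open-ended "In what county do you live?" and the system, not the question,
# determines eligibility.
QUALIFYING_COUNTIES = {"cook"}

def geo_screen(answer):
    """Normalize an open-ended county answer and test it against the quota area."""
    a = answer.strip().lower()
    if a.endswith(" county"):          # accept "Cook" and "Cook County" alike
        a = a[: -len(" county")]
    return a in QUALIFYING_COUNTIES

print(geo_screen("Cook County"))  # eligible
print(geo_screen("DuPage"))       # screened out
```

A pilot test, as recommended above, would check how often real answers (nicknames, neighborhood names, misspellings) fail this normalization and produce false negatives.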

Cell phone surveys also may cause more operational challenges for a survey staff than landline surveys in that the rate of eligibles may be so low that interviewer morale suffers greatly from the frustration of having to screen out the vast majority of persons contacted. If a cell phone sample for a given survey is limited to only those who do not also have a landline, even more of those reached will be screened out.

In addition to screening for geographic or telephone user type eligibility, cell phone surveys have a special responsibility to screen for age eligibility given the great number of nonadults who use a cell phone. This too can create extra operational burdens on a research call center staff.

Inbound Calls. Initial research and experience have shown that in the event of a missed call, cell phone users are more likely than landline users to attempt to recontact the number that appears on their Caller ID. As a result, researchers should consider the implications for the survey research calling center of handling such inbound calls.

As previously noted in the Legal and Ethical Considerations section, the calling center’s phone number that displays on the cell phone’s Caller ID should be able to be redialed by the respondent. Ideally it will reach an inbound line on which an interview can be conducted.1 In turn, calling centers should be prepared and able to schedule a callback day and time if it is not possible to conduct the interview at the time of the respondent’s inbound call. Interviewers also need to be able to enter an alternative telephone number into the CATI system for recontact purposes, such as a residential or work number, if requested by the respondent. At a minimum, the call center should have a message that is played to incoming callers alerting the potential respondent that this contact was for a legitimate survey and that a callback will be made at a future time. This message also might provide additional information and motivation for the respondent about the survey.

Number of Call Attempts. Although higher response rates may be achieved by increasing the number of call attempts to cell phone respondents, the personal nature of the cell phone suggests the need for caution with this strategy, due in part to the anti-harassment issues discussed in the Legal and Ethical Issues section of this report.

To reduce the potential for overburdening (and likely harassing) the cell phone respondent pool, it is recommended that the total number of call attempts be limited to a modest number, perhaps in the range of six to 10, as compared to the greater number of attempts often used when surveying landline telephone numbers. (The length of the field period should be taken into consideration when deciding what will be the maximum number of call attempts in a cell phone survey.)
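As a sketch, such an attempt cap might be enforced in a call scheduler along these lines. The cap values, disposition names, and function are all hypothetical; the cell cap is simply chosen from the six-to-10 range recommended above and would be adjusted for the length of the field period.

```python
# Hypothetical per-frame attempt caps; the cell cap is deliberately lower
# than the landline cap, per the recommendation above.
MAX_ATTEMPTS = {"cell": 8, "landline": 15}
FINAL_DISPOSITIONS = {"complete", "final refusal", "disconnect", "ineligible"}

def may_attempt(frame, attempts_so_far, last_disposition):
    """Return True if another dial is allowed for this sample record."""
    if last_disposition in FINAL_DISPOSITIONS:
        return False
    return attempts_so_far < MAX_ATTEMPTS[frame]

print(may_attempt("cell", 8, "ring no answer"))      # cell cap reached: no more dials
print(may_attempt("landline", 8, "ring no answer"))  # landline record may still be dialed
```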

Refusal Conversions. Logic and anecdotal evidence to date suggest that refusal conversion attempts to cell phone respondents should be of a limited nature so as to reduce the potential for further agitating them. This is in large part a result of likely reaching the same respondent who previously refused rather than reaching some other member of the sampling unit (household), as often is the case when trying to convert refusals in RDD landline surveys.

However, until more research on the efficacy of refusal conversions in cell phone surveys has been reported, there is little to guide researchers on what might be an optimal procedure to follow (e.g., how long a time should pass after the refusal before a conversion is attempted?) when considering whether to try to convert initial cell phone refusals. 


Call Dispositions

Compared to the standard protocol of allowing a landline number to ring at least six times before coding it a “Ring No Answer” (RNA), experience suggests that interviewers should allow a cell phone to ring a minimum of eight times before dispositioning it as RNA. Furthermore, more often than not, it takes as many as eight rings before voice mail initiates on a cell phone, so not waiting for the extra rings would incorrectly disposition the call as an RNA instead of the correct “voice mail” outcome, as well as preclude the interviewer from leaving a voice mail message if that is what the researchers choose to have done (see the section below on voice mail messages).
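A minimal sketch of this dispositioning rule follows; the ring thresholds come from the discussion above, while the function and disposition labels are hypothetical placeholders for whatever codes a given CATI system uses.

```python
# Minimum rings before a no-contact dial may be coded RNA, per frame.
MIN_RINGS = {"cell": 8, "landline": 6}

def no_contact_disposition(frame, rings, voicemail_reached):
    """Disposition a dial on which no person answered."""
    if voicemail_reached:
        return "voice mail"            # may trigger a message, per study protocol
    if rings >= MIN_RINGS[frame]:
        return "ring no answer (RNA)"
    return "insufficient rings"        # dial ended before the minimum; redial later

print(no_contact_disposition("cell", 8, False))  # enough rings: RNA
print(no_contact_disposition("cell", 5, True))   # voice mail, even at few rings
print(no_contact_disposition("cell", 5, False))  # hung up too early to code RNA
```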

As discussed in the Nonresponse section of this report, ambiguous operator messages often make it difficult to disposition calls to cell phones in the U.S. appropriately. Possible future developments, such as industry consolidation, may help to reduce these types of unresolved calls in U.S. RDD cell phone surveys. In the meantime, it is recommended that research organizations that call cell phones maintain a database of these messages and how the call was dispositioned (e.g., disconnect, voice mail, etc.). In terms of dispositioning these calls, it is recommended that the most conservative approach be taken so that the response rates are not unduly inflated. Researchers also are encouraged to work with cellular phone companies to better understand these operator messages. Moreover, researchers are encouraged to share these lists via AAPORnet or AAPOR’s Standard Definitions Committee2 or via AAPOR’s online journal, Survey Practice,3 to help the field develop standard protocols for properly dispositioning the outcomes that may result when calling cell phone numbers in the U.S.


Voice Mail Messages

Benford et al. (2010) provide experimental evidence from three national RDD surveys that leaving a message on a cell phone does not appear to improve the odds of getting a completed interview, but does improve the likelihood that a callback will occur and decreases the likelihood of a refusal. However, more research is needed on this topic, as Peytchev and Krotki (2010) found that voice mail messages had no discernible impact on survey performance rates.

Leaving a voice mail message on the first call attempt to a cell phone can, in theory, act as the important pre-alert of the survey request if no one is reached by the interviewer. This may be particularly important given that mailing addresses are not currently available for matching to U.S. cell phone numbers, thereby preventing the use of mailed advance contact letters, which is possible with RDD landline surveys.

In addition, researchers are encouraged to include a callback number in this message, especially if the outbound number that appears on the cell phone’s Caller ID is not valid for an inbound callback. Much more research is needed to understand what the content of messages left to voice mail should be within a cell phone survey and also how often it is prudent to leave such messages. However, it is not recommended that a message be left every time an interviewer reaches someone’s voice mail, although occasionally leaving such messages on subsequent contacts with voice mail is thought to be useful.

Scheduled Callbacks

Several instances may occur that require the ability to schedule callbacks at a later date/time as well as to record a different telephone number on which to reach the cell phone respondent:

First, experience shows that cell phone respondents, on average, are more likely to be under time constraints than respondents reached on a landline. Furthermore, many cell phone users in the U.S. will be incurring costs per minute, which may exacerbate their desire to end the call quickly.

Second, when asking sensitive survey questions, such as about unethical behaviors, illegal acts, or financial issues, interviewers should assess whether the respondent is in an environment conducive to providing full and honest answers. If this is not the case, interviewers should schedule a callback. (See the section of this report on Measurement for further discussion of issues of data quality when contacting respondents on a cell phone.)

The third instance is related to respondent safety. Because of the mobile nature of cell phones, a cell phone respondent may be put at risk when speaking to an interviewer, such as when driving or biking or even walking (cf. Richtel, 2010). These too will lead to occasional contacts for which the interviewer may want to schedule a callback. But as advised in the previous section on Legal and Ethical Issues, as soon as an interviewer is told that a respondent is in an unsafe situation, the call should be politely terminated and time should not be taken to schedule a specific callback. Instead, in terminating the call, researchers may want to have the interviewer say something to the effect that, “we will call you back at another time when it is better for you to speak with us.” (More discussion of respondent safety appears in the sections of this report on Nonresponse and on Legal and Ethical Issues.)


Remuneration and Incentives

Because of the cost structure of cell phone billing currently in the United States, there often may be a financial burden upon the respondent for an incoming research call – something that does not occur with a landline phone. Therefore, when appropriate, it is recommended that interviewers offer a form of remuneration to offset this cost to the respondent. Remuneration is not the same as an incentive, a distinction of particular importance where government sponsorship of a survey is involved.4

Experience to date with cell phone surveying in the U.S. has shown that few organizations have perceived the need to offer both a contingent remuneration and a separate contingent incentive. To date, survey firms appear to use one of two approaches to handling these issues.5

In one approach, the interviewer is instructed not to mention offering a cash gift unless the respondent displays reluctance or explicitly complains that the call is causing her/him to incur charges for cell phone minutes. This approach is not consistent with what this Task Force recommends concerning remuneration (see the section on Legal and Ethical Issues).

In the second approach, the introductory script spoken by the interviewer includes an explicit mention to all respondents contacted via cell phone that a monetary amount will be sent to the respondent upon completion of the questionnaire. Sometimes this is accompanied by an explanation to the effect that “this is to help offset the cost of your cell minutes.” Typical amounts being offered appear to be either $5 or $10, although higher amounts have been offered when the survey includes an especially long questionnaire. The word “payment” (as in “payment for your minutes”) and the words “incentive” and “remuneration” typically are not used.

Experiments Testing the Effects of Cell Phone Remuneration and Incentives. Only a few studies have featured experimentally controlled comparisons of different remuneration and incentive conditions. 

Brick and his colleagues (2007) found that a $10 incentive significantly improved respondent participation over a $5 incentive in a 2004 national survey of cell phone households. However, the Pew Research Center (2008) found virtually no difference in the response rate between cell phone respondents offered $10 and those offered $20 in a randomized experiment. In another experiment with cell phone incentives, it was found that a $10 cash incentive achieved a higher rate of production (completes per hour) in an 18-minute citizen satisfaction survey compared to no incentive (Diop, Kim, Holmes and Guterbock, 2008). Consistent with Brick et al. (2007), an experiment by Diop, Kermer and Guterbock (2008) found that a $10 incentive improved production over a $5 incentive – so much so that the overall total cost of cell phone interviewing actually was lower using the larger cash incentive. Finally, in a recent experimental study with cell-phone-only respondents using a $10 gift card as the incentive versus a no-incentive control condition, no effect on response rates was observed (Oldendick and Lambries, 2010).

Much more experimentation with the use of remuneration and incentives in cell phone surveys will be needed before researchers can be confident of the effects these may have on response rates, data quality, and/or nonresponse bias. This research should include factorial designs in which some of the conditions use both a remuneration and a contingent incentive. The experimentation also should include varying the manner in which the purpose of the remuneration and/or incentive is explained (i.e., characterized) to the respondent.

Further Operational Matters to Consider. A survey organization needs a reliable infrastructure for fulfilling the promised remuneration and/or incentives to the respondent. Interviewers will need to gather information from the respondent at the end of the interview to allow the funds to be given to the respondent. Of note, experience has shown that some respondents who are told they will receive a monetary amount or other gift for participating in a cell phone survey decide not to receive the money, as they choose not to disclose their mailing or e-mail address to the interviewer, or for other reasons.


Interviewer Training

Although many researchers seem not to recognize it, interviewing respondents by cell phone is a more complex task for the interviewer than is interviewing a respondent on a landline. The calling protocols, dispositioning, eligibility requirements, and interviewing techniques may be quite different. Therefore, researchers should ensure that interviewers are properly trained to handle these interviewing requirements and have the tools (e.g., scripts, persuaders, and other protocols) at hand that are tailored/targeted to the special needs that interviewers will have when cell phone respondents are reached.

General Interviewer Training for Cell Phone Surveys. The general training that all interviewers who will work on U.S. telephone surveys receive when they first are hired might include a training module that is specific to calling cell phone numbers and reaching cell phone respondents. However, a survey calling center may decide that cell phone interviewing is only appropriate for interviewers who already have demonstrated their interviewing ability on landline samples. If so, the cell phone general training module would be administered to experienced interviewers prior to their being trained and allowed to work on a specific cell phone study.

The general training module for conducting interviews with respondents reached on a cell phone would include training about how calling protocols, call dispositions, and respondent eligibility screening are performed by the call center when conducting a cell phone survey. This part of the training is not merely a time to provide cursory information to distinguish what interviewers do differently when cell phone numbers are being processed from what is done when landline numbers are being called. Instead, the cell phone training should be treated as a separate skill set and thus deserves its own unique and separate module within the larger training interviewers receive. As part of their general cell phone training, interviewers should also come to understand and respect the need to hand dial all cell phone numbers. Thus, some of the details of the TCPA should be explained to interviewers – in particular, the ones pertaining to manually dialing cell phone numbers.

Interviewer Training for Specific Cell Phone Surveys. When interviewers receive training for a specific survey on which they will work that includes calling cell phone numbers, all the topics that are addressed in general training should be addressed again in a fashion that is tailored to the specific cell phone survey in which the interviewers will be engaged. Interviewers also should be told how they will be assigned to the cell phone and landline samples, if both types of samples are being used in a given survey.

Some examples of situations that interviewers should be trained to handle in specific cell phone surveys include:

Geographic Eligibility – for surveys that are targeted to collect data for specified geographic areas (e.g., city, county, MSA, state), screening questions and interviewer probes should be developed to ascertain whether the person reached is geographically eligible.

Age Eligibility – because cell phones are more of a personal device, interviewers are more likely to reach children/minors directly than when calling a landline number. Interviewers should follow study-specific/organizational rules on probing for age and data collection from minors.

Group Housing/Other Eligibility – because cell phones extend phone coverage to respondents living in housing that is traditionally excluded from household surveys, interviewers should be trained to probe as appropriate to apply these types of respondent eligibility rules.

Respondent Location – although some cell phone surveys may have a question to ask if the respondent believes that s/he is in a safe location to answer the survey questions (see the Legal and Ethical Issues section of this report for more information), researchers may choose to train interviewers how to probe if they believe that the respondent’s location has changed during the call. For example, if the respondent is heard getting into a vehicle and then driving away, the interviewer might ask if the respondent is currently in a safe location to answer survey questions.

As interviewers gain experience in performing cell phone surveys, feedback on what concerns arise, what situations they encounter, and how they are reacting to them should be collected by survey firms. If warranted, interviewer training should be modified to deal with these situations.


Interviewer Assignment to Cell Phone Samples

As noted previously, it can be very frustrating and debilitating for interviewers in the U.S. to work a cell phone sample. Not only are they required to hand-dial the numbers – at a minimum on the first time the number is called6 – but they often have to engage a respondent who is less than willing to talk with them. Furthermore, the screening often required in cell phone surveys disqualifies many of the people who are reached. All these factors conspire to place a special burden on interviewers who work cell phone samples that typically is not present when they work landline samples. Because of this, many survey centers have learned that it is best to rotate interviewers on and off of cell phone samples so that they do not burn out.

Another consideration about the allocation of interviewers to cell phone samples: It is not recommended that interviewers work landline samples and cell phone samples on the same survey during the same work shift. The rationale for this is that cell phone interviewing is different enough from working landline samples that it is best for an interviewer to focus on doing her/his best on one type of sample at a time within a given shift.

Of note, it is generally advisable to set up a dual frame project as two separate “studies” in the CATI system. This facilitates setting up appropriate outcome (disposition) codes for the cell phone frame and allows researchers to separately track production rates, costs, and response rates for the cell phone side and the landline side of the project. 
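As a sketch, such a dual-frame setup might be represented as two parallel “studies,” each with its own disposition codes and production counters. All field names, codes, and the study identifier here are hypothetical; any real CATI system has its own configuration format.

```python
def make_study(frame):
    """Build a per-frame 'study' record so dispositions and production are tracked separately."""
    base = ["complete", "refusal", "ring no answer", "disconnect"]
    # The cell frame needs outcome codes the landline frame does not, and vice versa.
    extra = ["voice mail", "operator message"] if frame == "cell" else ["answering machine"]
    return {
        "study_id": f"DUALFRAME2024_{frame.upper()}",
        "hand_dial_required": frame == "cell",   # per TCPA, as discussed above
        "dispositions": base + extra,
        "totals": {"completes": 0, "interviewer_hours": 0.0, "cost": 0.0},
    }

project = {frame: make_study(frame) for frame in ("cell", "landline")}
print(project["cell"]["study_id"])            # DUALFRAME2024_CELL
print(project["cell"]["hand_dial_required"])  # True
```

Keeping the two frames as separate records like this is what makes it trivial to report production rates, costs, and response rates for each side of the project.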

A final consideration: One might speculate that there could be a cost benefit to offering interviewers extra pay when they work a cell phone sample. The reasoning here is that the extra pay may make them more productive with the cell phone sample, including improving their response rates, which in turn could save on other survey costs that may offset the extra pay. However, currently there is no empirical evidence that this would in fact result.



1 It should be noted that although 800 numbers are toll-free when dialed from a landline phone, they are not free when called from a cell phone.

2 The AAPOR Standard Definitions Committee chair, Tom W. Smith, should be contacted; smitht@norc.uchicago.edu.

3 http://surveypractice.org/

4 For more information on the distinction between remuneration and incentives, please see OMB guidance (p. 68-71) on survey design, downloaded at (11/30/2009): http://www.whitehouse.gov/sites/default/files/omb/inforeg/pmc_survey_guidance_2006.pdf

5 Gallup routinely fields cell phone samples in addition to landline samples in their national telephone surveys. Jones (2008) reported that Gallup’s protocol is to offer neither remuneration nor incentives for cell phone respondents (or landline respondents). This policy is based in part on the American public’s familiarity with the Gallup brand.

6 Once a respondent is reached on a cell phone, the respondent may give explicit or implicit permission to be called back on the cell phone. If that happens, then the requirement that the callback to the cell phone number be hand dialed no longer holds in the U.S. However, call centers may not have the technology required to differentiate which cell phone numbers must be hand dialed and which can be dialed with an autodialer. As such, many call centers may simply have interviewers hand dial all cell phone numbers regardless of the results of any previous contact with the cell phone respondent.