American Association for Public Opinion Research

Measurement in RDD Cell Phone Surveys


Cell phone surveys present special challenges not only in sampling, nonresponse, weighting, and administration, but also in measurement. The measurement challenges are primarily twofold:

  • The unique nature of the cell phone may affect the behavior of the respondent and the interaction between interviewer and respondent, and this may have an impact on data quality, e.g., on item nonresponse, variance and bias.
  • Additional survey items are required in both cell phone and landline questionnaires in dual frame surveys to provide necessary data for weighting and other key analyses. (This issue is addressed in the following section of this report on Weighting.)

Since AAPOR’s 2008 Cell Phone Task Force report was issued, more research on the topic of data quality in cell phone surveys has appeared. Most of these studies have found little difference in data quality between landline and cell phone interviews, after controlling for the kinds of people most likely to be interviewed via each device. The only randomized controlled experiment with so-called “dual users” reported to date found few systematic differences in data quality. Nevertheless, there are many potential reasons to suspect that data quality in cell phone interviewing might be lower than in landline interviews. Until more research is conducted with larger samples that are not undermined by the effects of nonignorable nonresponse – especially research that uses an experimental design – researchers are urged to be vigilant in monitoring data quality from their cell phone interviews.


Reasons for Concern about Cell Phone Data Quality

Concerns about the quality of data gathered via cell phone interviewing relative to landline interviewing arise from several sources, including audio quality, the location of the cell phone respondents during the interview, and the other activities in which a cell phone respondent may be engaged during the interview.

Sensitive topics. There long has been concern that data quality from cell phone interviews might be lower than among comparable landline interviews (cf. Lavrakas, Steeh, Shuttles and Fienberg, 2007) when sensitive data are being gathered. The reasons for this concern are straightforward. Even though many cell phone users seem perfectly willing to carry on personal conversations in public places, some people might consciously or unconsciously limit their candor or openness, jeopardizing the accuracy of their responses in proportion to the sensitivity of the research questions. For example, a person on a crowded bus answering questions via a cell phone for a study on sexually transmitted diseases, race-related attitudes, financial investments and income, or other very sensitive topics may answer those questions differently than if s/he were in a more private location (e.g., her/his own home).

Multitasking, distraction, cognitive complexity and respondent burden. New behavioral research has suggested that speaking on a cell phone is a more cognitively complex task than originally thought (cf. Richtel, 2010). A possible reason is that conversations on a cell phone often engage not only one’s auditory senses, but also one’s visual senses. This is especially the case if one is moving (e.g., walking or driving) while speaking on a cell phone (cf. Parker, 2009). Another possible reason is that using a cell phone may allow people to engage in a wider range of multitasking activities than when using a landline, especially when away from home. This in turn may cause the person on the cell phone to pay less attention to any one particular task (e.g., responding to a survey) compared to the other task(s) in which s/he also is engaged. As such, depending on what else people may be doing while they are being interviewed on a cell phone, they may be less likely to provide accurate data on cognitively demanding questions that require a greater than average use of memory and/or other advanced thinking processes.

Audio quality. Also important, the volume and quality of voices on cell phones may make it difficult for respondents (especially those with hearing difficulties) to clearly hear and comprehend all questions, and for interviewers to clearly hear and comprehend all answers, especially when respondents are reached in noisy locations and/or places with a poor cellular transmission signal.

Rushing to complete the conversation and breakoffs. Low sound quality coupled with high potential for distraction might also cause cell phone interviews to take longer than comparable landline interviews. Alternatively, concerns about cost and inconvenience might lead some respondents to hurry through the interview, which could mean that they do not consider their responses as carefully as they would if they were on a landline. Distractions related to being interviewed outside of the home and timing concerns also might lead to higher levels of breakoffs or interrupted interviews, both of which can have negative effects on data quality.


Existing Research on Cell Phone Data Quality

Despite these potential concerns, the growing body of research suggests that there is little difference in data quality between cell phone and landline interviews for many types of questioning. Most of this research entails comparisons of respondents interviewed by cell phone and those interviewed by landline in the same study. Because there are demographic differences in the kinds of people most likely to complete a survey by cell phone or by landline (e.g., a greater percentage of cell phone respondents are young, male, nonwhite, and/or renters), some apparent differences in data quality may be spurious, if factors such as these are not controlled. Most of the research reviewed for this report attempted to control for differences in the composition of the cell phone and landline samples.
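The compositional controls described above can be sketched in code. The following is a minimal, illustrative Python sketch – the respondent records, variable names, and demographic cells are all hypothetical, not drawn from any of the studies cited – of comparing item-nonresponse rates between cell and landline respondents within demographic cells, so that mode comparisons are not confounded by differences in sample composition:

```python
from collections import defaultdict

# Hypothetical respondent records: interview mode, an age group, and
# whether a given item was left unanswered (item nonresponse).
respondents = [
    {"mode": "cell", "age": "18-34", "item_missing": True},
    {"mode": "cell", "age": "18-34", "item_missing": False},
    {"mode": "cell", "age": "35+", "item_missing": False},
    {"mode": "landline", "age": "18-34", "item_missing": False},
    {"mode": "landline", "age": "35+", "item_missing": True},
    {"mode": "landline", "age": "35+", "item_missing": False},
]

def nonresponse_by_cell(records):
    """Item-nonresponse rate for each (age group, mode) cell, so that
    cell vs. landline comparisons are made within demographic groups
    rather than across samples with different compositions."""
    counts = defaultdict(lambda: [0, 0])  # (age, mode) -> [missing, total]
    for r in records:
        key = (r["age"], r["mode"])
        counts[key][0] += r["item_missing"]  # True counts as 1
        counts[key][1] += 1
    return {key: missing / total for key, (missing, total) in counts.items()}

rates = nonresponse_by_cell(respondents)
```

The published studies use more elaborate multivariate controls, but the within-cell comparison above captures the same logic: any remaining cell-versus-landline gap inside a demographic group cannot be explained by that group's over- or under-representation in either sample.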

The strongest evidence to date is from Kennedy’s (2010) experiment, which randomly assigned dual-user respondents from an initial interview (those who have both a cell phone and a landline) to a follow-up interview on either a cell phone or a landline. Respondents did not know that an experiment was being carried out or that the effect of the type of telephone service (cell phone or landline) on which they were contacted was being studied. This research somewhat confirms previous findings and yet cautions against overgeneralizing. With respect to cognitive shortcuts – those adopted by respondents to avoid having to think through alternatives or search memory for appropriate responses – four of seven tests yielded no differences between cell phone and landline respondents, two yielded weak effects suggesting lower data quality on cell phones, and one produced clear evidence of shortcutting by cell phone respondents, which is a potentially serious quality issue. Furthermore, in terms of substantive responses, Kennedy found that when cell phone respondents were interviewed away from home, they rated their social lives as significantly better than when the same respondents were interviewed at home, and rated the condition of roads as significantly worse. Since there is no objective way to determine which responses are more accurate, it is impossible to characterize one mode or the other as more susceptible to measurement bias on these topics. Other than these intriguing results, no other comparisons in the study yielded significant differences in data quality. Even the respondents’ assessments of audio quality did not differ significantly between the cell phone and landline interviews. Overall, Kennedy’s research suggests that although differences in data quality between landline and cell phone interviews may exist, they often tend to be modest in size and somewhat limited in scope. (However, the sample sizes in this first experimental study on cell phone vs. landline data quality were not large, and thus the reader is cautioned not to overinterpret these findings.)

Other recent research produced similar results on various measurement dimensions:

  • Witt, ZuWallack and Conrey (2009) found little mode difference between cell and landline interviewing in item nonresponse or richness of response to open-ended questions, even after controlling for demographic variables.
  • In a large national dual frame study, Brick et al. (2006) found no differences in terms of missing data, in the length of open-ended responses, or in responses to four sensitive questions among those who were interviewed using their cell phone compared to those interviewed via their landline.
  • The Pew Research Center’s 2006 study found no significant differences between cell phone and landline interviews in interviewer assessments of whether respondents were distracted or doing other things while also responding to the interview (Pew, 2006); there also were no significant differences in levels of item nonresponse.
  • Kennedy’s (2007) analysis of response order effects and straight-lining in dual frame studies conducted by the Pew Research Center also found no conclusive evidence of measurement quality differences between landline and cell phone samples. However, there were some marginally significant findings associated with a recency effect when cell phone respondents were read a list of candidates compared to landline respondents hearing the same list. This trend was particularly associated with cell phone respondents aged 40 years and older.
  • Earlier research on data quality by Steeh (2004) found few differences between cell phone and landline interviews in the amount of item nonresponse, strength of theoretically meaningful correlations among items, and overall distributions when demographic differences between the samples were controlled. The data provided by respondents using cell phones did not significantly differ from those of respondents using landline phones, when comparing the same demographic groups, such as within age and race cohorts.
  • Similarly, research conducted in the last decade by Statistics Sweden (Kuusela, Callegaro and Vehovar, 2007) did not show any significant difference in data quality when comparing interviews done on a landline to interviews done on a cell phone.

However, a 2010 study using data from nine large national surveys and focused solely on cell phone respondents addressed the questions of whether (a) those interviewed at home versus elsewhere and (b) those interviewed while engaged in potentially distracting other behavior (e.g., driving, talking to someone else, reading, writing, playing a game, texting, working on a computer, etc.) versus those not so engaged differed in the quality of their responses (Lavrakas, Tompson and Benford, 2010). Similar to results reported by Kennedy (2009) and Brick (2007), they found that one third (32 percent) of cell phone respondents were interviewed away from their homes, confirming that a majority of cell phone interviews take place in locations similar to those of landline interviews, thereby lessening the opportunity for outside distractions and other influences on data quality. They also found that one sixth (16 percent) of cell phone respondents were engaged in what was judged to be a highly distracting other behavior while being interviewed.

Although total item nonresponse and item nonresponse to sensitive questions were higher among cell phone respondents interviewed away from home, neither difference was significant when tested in a multivariate analysis controlling for other variables. This study also found no differences in the strength of theoretically meaningful correlations depending on whether a cell phone respondent was interviewed at home or elsewhere. Nor was there any difference in the prevalence of “straight-lining” (i.e., giving the identical answer, with no variance, across a series of questions that use a similar response format, as with many multi-item scales) between interviewees at home and those elsewhere.
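As an illustration of what a straight-lining check can look like in practice, here is a minimal Python sketch (the response batteries shown are hypothetical, not data from the study) that flags respondents with zero variance across a battery of same-format items, skipping items left unanswered:

```python
def is_straight_lining(responses):
    """True if every answered item in the battery received the identical
    response (zero variance); None marks item nonresponse."""
    answered = [r for r in responses if r is not None]
    return len(answered) > 1 and len(set(answered)) == 1

# Hypothetical batteries of 1-5 scale answers, one list per respondent.
batteries = [
    [3, 3, 3, 3, 3],     # straight-liner
    [2, 4, 3, 5, 1],     # varied answers
    [5, 5, None, 5, 5],  # straight-liner despite one missing item
]
prevalence = sum(is_straight_lining(b) for b in batteries) / len(batteries)
```

In an analysis like the one described above, a flag of this kind would then be compared between respondents interviewed at home and those interviewed elsewhere.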

In terms of the quality of data gathered from cell phone respondents engaged in at least one highly distracting other behavior while being interviewed, there were no significant effects on total item nonresponse, the strength of theoretically meaningful correlations, or the prevalence of straight-lining. However, there was a marginally significant greater amount of item nonresponse to sensitive questions among the cell phone respondents engaged in distracting other behaviors while being interviewed.


Few Differences in Data Quality, but a Cautious Approach Should Continue to Prevail

In sum, most of the empirical evidence to date regarding cell phone respondents does not support the broad assumption of poorer data quality. That is, there is no evidence to suggest that all or even most data gathered by cell phone are of poorer quality than their landline counterparts would be.

However, the reader is cautioned that a finding of “few significant differences” does not necessarily imply equivalence in data quality, as there is evidence to suggest that under certain circumstances, including when asking certain types of questions, concerns about cell phone data quality are not unfounded. Data quality remains an understudied area in the cell phone survey literature. Kennedy’s (2010) study is the first reported randomized experiment on cell phone data quality, and even that study is somewhat limited by its relatively small sample size.

Much more research is needed on cell phone survey data quality, including more research into the possible effects of respondent multitasking while participating in a cell phone interview. In the meantime, it is prudent for researchers to train their interviewers to be alert to whether a respondent on a cell phone is in an environment and/or is engaging in other activities that are not likely to be conducive to providing full and accurate answers to the questions the interviewer is asking. (See the Operational Issues section of this report for more discussion of this topic.)

Furthermore, as part of this cautionary approach to more fully understanding possible measurement errors in cell phone surveys, researchers are encouraged to at least ask cell phone respondents whether or not they have been reached at home and possibly about other activities they may be involved in while doing the interview. This would advance knowledge in the field about whether data quality differences may be associated with the in-home/out-of-home dichotomy and/or with multitasking. 

At the same time, it is important to note that most landline surveys do not include measures of the degree of privacy or the amount of distractions under which landline interviews are conducted, so concerns about data quality in cell phone interviews should be considered as a special issue within the broader concern of data quality in all telephone surveying.
