American Association for Public Opinion Research

Cell Phone Task Force Report 2008

April 2008


Guidelines and Considerations for Survey Researchers When Planning and Conducting RDD and Other Telephone Surveys in the U.S. With Respondents Reached via Cell Phone Numbers

Prepared for AAPOR Council by a Task Force operating under the auspices of the AAPOR Standards Committee, with members including:
Paul J. Lavrakas, Task Force Chair
Charlotte Steeh, AAPOR Standards Committee Chair
Stephen Blumberg, U.S. Centers for Disease Control and Prevention
John Boyle, Abt SRBI Inc.
Michael Brick, Westat
Mario Callegaro, Knowledge Networks
Howard Fienberg, CMOR
Anna Fleeman, Arbitron
Donna Gillin, CMOR
John Hall, Mathematica Policy Research, Inc.
Scott Keeter, Pew Research Center
Courtney Kennedy, U. of Michigan
Michael Link, The Nielsen Company
Linda Piekarski, Survey Sampling International
Chuck Shuttles, The Nielsen Company
Trevor Thomson, Associated Press


Executive Summary
Considerations Regarding Coverage and Sampling in RDD Surveys
Considerations Regarding Nonresponse in Cell Phone Surveys
Considerations Regarding Legal and Ethical Issues in Cell Phone Surveys
Considerations Regarding Measurement in Cell Phone Surveys
Considerations Regarding Weighting in RDD Cell Phone Surveys
Best Practices Recommendations
References and Additional Readings


A volunteer Task Force was established by AAPOR Council shortly after the May 2007 AAPOR conference to prepare a report that would provide survey researchers with information that should be considered when planning and implementing telephone surveys with respondents who are reached via cell phone numbers in the United States. This report is specific to the U.S. because the telecommunication regulatory and business environment that affects cell phone ownership and usage in this country is different than that found in most other countries. This report addresses a great many issues that apply primarily to RDD surveys that sample cell phone numbers, although some of the matters discussed apply to all telephone surveys in the U.S. that reach cell phone numbers.

In approaching the charge given to it by AAPOR’s Executive Council, the Task Force decided that it was premature to try to establish “standards” on the issues as it is too soon in the history of surveying respondents in the U.S. reached via cell phone numbers to know with confidence what should and should not be regarded as a “Best Practice.”  Nonetheless, a great deal has been learned during the past six years by those thinking about and conducting such surveys in the U.S. and the Task Force agreed fully that it is time for AAPOR to release information, such as that contained in this report, that identifies a wide range of “guidelines” and “considerations” about cell phone surveying in the U.S. Of note, ongoing research has demonstrated that conducting survey interviews by cell phone in the U.S. is feasible, if also somewhat more difficult and more expensive than conducting similar interviews on landline telephones.

As part of the process of creating this report, Task Force members met several times via telephone conference calls from June 2007 through January 2008, and established working committees to address each of the following interrelated subject areas:

1. Coverage and Sampling (L. Piekarski, Chair)
2. Nonresponse (A. Fleeman, Chair)
3. Legal and Ethical Issues (H. Fienberg, Chair)
4. Measurement (S. Keeter, Chair)
5. Weighting (J. Hall, Chair)
What follows is a summary of each section of the report:
Coverage and Sampling. There are many issues concerning cell phone numbers and sampling frames that researchers must understand in order to evaluate the most appropriate design for telephone surveys in the United States. This section lists several considerations that should be given to the decision of what frame to use when planning to interview people in RDD surveys who are reached on a cell phone. This can be particularly challenging when a survey is less than national in scope. Special care must be taken in deciding how to handle respondents who can be reached via both a cell phone RDD frame and a landline RDD frame. Whether RDD telephone surveys in the U.S. that sample cell phone numbers will need to deploy a within-unit respondent selection technique remains unclear and awaits future research regarding (a) whether it needs to be done and if so, (b) when it should be done and (c) how best to do it.

Nonresponse. Nonresponse in RDD cell phone surveys is somewhat greater than in comparable RDD landline surveys in the U.S. Noncontacts and refusals as sources of nonresponse are somewhat more prevalent in cell phone surveys than in landline surveys with comparable numbers of callbacks. However, there are reasons to expect that the proportion of noncontacts in cell phone surveys will decrease over time. In contrast, there are formidable obstacles to addressing the challenges posed by refusals in RDD cell phone surveys that are likely to remain in the foreseeable future. For example, there are many reasons that refusal conversion attempts are less productive with RDD cell phone samples than they are with RDD landline samples. The accurate dispositioning of the numbers in a sample, both on a temporary basis during the field period and on a final basis at the end of the field period, is more troublesome with cell phone samples. This in turn makes the calculation of response rates for cell phone surveys more complex and less reliable than with landline surveys. The processing of cell phone samples also requires many new operational considerations that are not faced in processing landline samples, and which further will raise nonresponse if not handled well. All of these challenges related to nonresponse in U.S. cell phone surveys make them significantly more expensive to conduct than comparable landline surveys.

Legal and Ethical Issues. Due to federal telecommunication laws and regulations in the U.S., those who conduct surveys with people who are reached on a cell phone must avoid using autodialers (including self-dialing modems and predictive dialers) to place calls, unless they have prior permission of the cell phone owner to do so. This increases the time and cost of processing RDD cell phone samples considerably. Presently, it is not advised that text messages be used to make advance contact with those sampled at a cell phone number due to federal and state laws on text messaging. From an ethical perspective, the report addresses several cell phone related issues, including how to think about (a) time of day for calling, (b) maximum number of callbacks to attempt and the frequency of callbacks, (c) privacy issues, (d) safety issues, and (e) contacting minors.

Measurement. There are two primary measurement issues concerning cell phone surveying. First, there currently is no reliable evidence that the data gathered in good cell phone surveys are of lower quality than in comparable landline surveys. However, the Task Force believes it is advisable that researchers remain attentive to this concern. Future research is needed to know with confidence if, and how, data quality is affected by gathering it from a respondent on a cell phone. Second, many new survey items may be needed for use in adjusting cell phone samples prior to analyzing their data. As discussed in detail in the Weighting Section of the report, the reliability and validity of these new items has not yet been established.
Weighting. There remain a myriad of important unknowns and uncertainties about the weighting needed to help improve the accuracy of RDD cell phone samples. This section of the report addresses questions that prudent researchers need to consider when thinking about how to weight their RDD cell phone samples. This is the most complex and challenging set of knowledge gaps currently facing U.S. telephone researchers who work with data from RDD cell phone samples. Until reliable methods have been devised, tested, and refined by the survey research community, we will have to accept living with considerable uncertainty and discomfort regarding whether a cell phone survey data set has been made as accurate as it can be through weighting. A particularly troublesome issue here is that there are no reliable population parameters to weight cell phone samples of regional, state, and local areas, as opposed to the nation as a whole. Further troubling is that there are, to our knowledge, no current plans to remedy this information shortfall by gathering such data.

Recommendations. In addition to these sections, the Task Force has made three recommendations concerning disclosure. These include the following: (a) researchers should explain the method by which the cell phone numbers used in a survey were selected, (b) if RDD telephone surveys do not sample cell phone numbers, then researchers should provide an explanation of how excluding cell phone owners might affect the survey’s results, and (c) researchers should explain the decisions that were made concerning weighting of cell phone samples, including why the sample was not weighted if in fact that was the case.

Additional Readings and Glossary. The report ends with additional readings from the small but growing research literature on RDD cell phone surveying in the U.S. and a glossary of terms related to cell phone surveys that may not be familiar to all readers.


A volunteer AAPOR Task Force was established by AAPOR Council shortly after the May 2007 AAPOR conference to prepare a report that would provide survey researchers with information that should be considered when planning and implementing telephone surveys with respondents who are reached via cell phone numbers in the United States. This report is specific to the United States because the telecommunication regulatory and business environment that affects cell phone ownership and usage in this country is quite different than that found in most other countries. This report addresses a great many issues that apply primarily to RDD surveys that sample cell phone numbers, although some of the matters discussed apply to all telephone surveys in the U.S. that reach cell phone numbers.

Prior to working together on the Task Force, 12 of the 16 members had worked together as far back as 2002 on prior initiatives concerning cell phones and telephone survey research in the U.S. In 2003, many of them were part of a group of approximately 25 academic, government, and commercial telephone survey experts who met for a two-day Cell Phone Sampling Summit in New York City which was organized and sponsored by Nielsen Media Research. At this first summit, a wide range of methodological and statistical issues related to cell phone surveying were discussed, with many knowledge gaps identified. Following the 2003 summit, and with the generous support of U.S. Chief Demographer, Chester E. Bowie, a series of questions were added to a 2004 Current Population Survey supplement to gather national data on the types of telephone services that households use. In 2005, the second two-day Cell Phone Sampling Summit was organized by Nielsen with a slightly larger group of U.S. telephone survey sampling experts attending.1 At that second summit it was decided that the next meeting to address cell phone surveying in the U.S. should be open to all interested survey researchers. This was further discussed at the January 2006 Telephone Survey Methods II conference in Miami and planning for the open meeting ensued shortly thereafter. What resulted was a three-day “mini-conference” within the larger 2007 AAPOR conference in Anaheim, CA.2 This included a half-day short course on cell phone surveys on Wednesday of AAPOR, followed by seven consecutive paper and discussion sessions on Thursday and Friday. All of these meetings were extremely well attended.
In addition, AAPOR Council approved the creation of a special issue of Public Opinion Quarterly (Volume 71, Number 5, 2007: Cell Phone Numbers and Telephone Surveying in the U.S.), that was published in December 2007.3  Many of the members of the Task Force helped to conduct blind reviews of articles submitted to the special issue and/or contributed to the articles published in the special issue.

1 See http://www.nielsenmedia.com/cellphonesummit/cellphone.html.
2 Considerable appreciation goes to Patricia Moy, Rob Daves, and Frank Newport for their key support of this mini-conference as members of AAPOR Council and leaders of the 2007 AAPOR conference program.
3 Considerable appreciation goes to Peter V. Miller, editor of Public Opinion Quarterly, for his consistent and crucial support in seeking approval of this special issue from AAPOR Council.
In approaching the charge given to it by AAPOR’s Executive Council, the Task Force decided that it was premature to try to establish “standards” on the issues as it is too soon in the history of surveying respondents in the U.S. reached via cell phone numbers to know with confidence what should and should not be regarded as a “Best Practice.”  Nonetheless, a great deal has been learned during the past five years by those thinking about and conducting such surveys in the U.S. and the Task Force agreed fully that it is time for AAPOR to release information such as that contained in this report that identifies a wide range of “guidelines” and “considerations” about cell phone surveying in the U.S.

As part of the process of creating this report, the Task Force met several times via telephone conference calls from June 2007 through January 2008 and established five working committees to address each of the following interrelated subject areas:

1. Coverage and Sampling (L. Piekarski, Chair)
2. Nonresponse (A. Fleeman, Chair)
3. Legal and Ethical Issues (H. Fienberg, Chair)
4. Measurement (S. Keeter, Chair)
5. Weighting (J. Hall, Chair)
Each of the subcommittees created a first draft of their section, which was vetted by a meeting of the full Task Force in August 2007. Those sections were further revised and then reviewed and discussed by AAPOR Council at their September 7, 2007 meeting in Washington, DC, with Lavrakas and three TF members who also were current AAPOR Council members (Brick, Keeter, and Steeh) present. Council gave feedback to the Task Force, and the subcommittees worked in the fall to revise their respective sections to address Council’s request that more detail be incorporated into the report. A third version of the report was vetted in a conference call of the full TF on December 20, 2007.
Additional revisions were identified and assigned to TF members. A fourth version was vetted by the TF members in January 2008 and this final version of the report reflects that input.

Cell phone numbers can enter into telephone samples in several different ways. If the sample is selected from a list, such as members of organizations, or from telephone numbers matched to postal addresses, a researcher may not know whether the number belongs to a cell or a landline phone. Thus list telephone samples, including those developed from address-based lists, most likely will be a mix of cell phone and landline numbers. In these cases, the inclusion of cell phone numbers has relatively little effect on the sampling process.4 However, when the method for selecting a telephone sample is Random Digit Dial (RDD), multiple dilemmas face the researcher, whether the sample includes only cell phone numbers, only landline numbers, or both. This report addresses these dilemmas and focuses primarily on telephone surveys using RDD samples.

4 In the U.S., all list samples for telephone surveys should be cleaned against cell phone and ported number databases or the researcher may inadvertently violate federal regulations if using an autodialer whenever prior consent to call a cell phone number has not been given by the cell phone owner.
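The cleaning step described in this footnote can be illustrated with a small sketch. All identifiers here (the sample numbers, the wireless 1000-block, and the ported number) are invented for illustration; in practice the wireless-block and ported-number sets come from licensed industry databases, not hand-built literals.

```python
# Hypothetical sketch of "scrubbing" a list sample before autodialing.
# Numbers whose 1000-block is flagged as wireless, or that appear in a
# ported-to-wireless file, must be set aside for hand dialing.

def scrub(sample, wireless_blocks, ported_to_wireless):
    """Split a sample into numbers that may be autodialed and numbers
    that must be hand-dialed (known or suspected wireless)."""
    auto_dial, hand_dial = [], []
    for number in sample:
        block = number[:7]  # area code + prefix + first line digit = 1000-block
        if block in wireless_blocks or number in ported_to_wireless:
            hand_dial.append(number)
        else:
            auto_dial.append(number)
    return auto_dial, hand_dial

auto_dial, hand_dial = scrub(
    ["2125551234", "2125557890", "2125559999"],
    wireless_blocks={"2125557"},        # hypothetical wireless 1000-block
    ported_to_wireless={"2125559999"},  # hypothetical ported landline number
)
# auto_dial -> ["2125551234"]; hand_dial -> ["2125557890", "2125559999"]
```

The point of the split is operational: only the `auto_dial` group may legally be loaded into a predictive dialer; the rest must be dialed manually absent prior consent.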


U.S. RDD Cell Phone Frames and Types of Telephone Service
Frames for generating Random Digit Dial (RDD) samples for conducting surveys of cell phones in the United States are available from most sample suppliers. These frames are lists of all possible wireless telephone numbers and are generally built using industry databases that identify the types of service provided by individual prefixes and 1000-blocks.5

5 Telephone numbers in the United States are comprised of 10 digits (123-456-7890). The first three numbers, 123, are the area code. The next three numbers, 456, are the prefix or exchange. The last four numbers, 7890, are the local number which can be divided into segments. A thousand block is comprised of 1,000 consecutive numbers starting with the same digit, e.g., starting with 7 (7000-7999). A hundred block is the 100 consecutive numbers starting with the same two digits, e.g., starting with 78 (7800-7899).
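The number structure described in this footnote can be expressed as a small helper function (illustrative only; the field names are ours, not an industry convention):

```python
def parse_number(tn):
    """Decompose a 10-digit U.S. telephone number into the components
    described above (area code, prefix, local number, and the digits
    that identify its 1000-block and 100-block)."""
    assert len(tn) == 10 and tn.isdigit()
    return {
        "area_code": tn[:3],       # '123'
        "prefix": tn[3:6],         # '456'
        "local": tn[6:],           # '7890'
        "thousand_block": tn[6],   # numbers 7000-7999 share the digit '7'
        "hundred_block": tn[6:8],  # numbers 7800-7899 share the digits '78'
    }

parts = parse_number("1234567890")
# parts["area_code"] == "123"; parts["hundred_block"] == "78"
```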

Three important features of the U.S. RDD cell phone frames are:
1. The data in the frames are administrative and subject to errors (e.g., a number may not be for a cell phone).
2. There are no data on the frame, or in any other file, that accurately identify whether the number is currently working, where the user/owner of the number currently resides, or if the number belongs to a person who lives in a household with a landline.
3. There are no commercially available files with detailed information such as name, address, ZIP code, or demographic information that can be accurately linked to cell phone numbers on the frame.

The administrative frame provides a variety of information about prefixes and 1000-blocks such as service provider, rate center location, some rate information, and type of service provided.

Below is a list of the types of service that might contain cell phone numbers:
04 = Dedicated to Cellular
50 = Shared among three or more types of service (Plain Old Telephone Service (POTS), Cellular, Paging, Mobile, or miscellaneous)
54 = Shared between POTS and Cellular
55 = Special Billing Option – Cellular
58 = Special Billing Option shared between two or more (Cellular, Paging, Mobile)
60 = Service provider requests SELECTIVE Local Exchange (IntraLATA Special Billing Option – Cellular)
65 = Personal Communications Services (e.g., Sprint)
66 = Shared between POTS and Personal Communications Services
67 = Special Billing Option – PCS / Personal Communications Services
68 = Service provider requests SELECTIVE Local Exchange (IntraLATA Special Billing – PCS)
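A frame builder's use of these codes amounts to a lookup-and-filter step, sketched below. The 1000-block identifiers and the non-wireless code "00" are hypothetical; which codes a particular supplier includes in its cell frame is a design decision, not something this sketch settles.

```python
# The service-type codes listed above that might contain cell numbers.
WIRELESS_SERVICE_CODES = {"04", "50", "54", "55", "58", "60", "65", "66", "67", "68"}

def cell_eligible(blocks):
    """`blocks` maps a 1000-block identifier to its service-type code;
    return (sorted) the blocks whose code may contain cell phone numbers."""
    return sorted(b for b, code in blocks.items()
                  if code in WIRELESS_SERVICE_CODES)

eligible = cell_eligible({
    "2125557": "04",  # dedicated cellular
    "2125550": "00",  # hypothetical non-wireless code
    "2125301": "54",  # shared POTS / cellular
})
# eligible -> ["2125301", "2125557"]
```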
Purchasing a U.S. RDD Cell Phone Sample
When purchasing a sample of U.S. RDD cell phone numbers, researchers are encouraged to determine how the sample provider’s frame has been constructed and approximately what percentage of cell phone numbers in the geographical area to be surveyed are included/excluded by this frame.

The following are issues that researchers are encouraged to consider as they think about the RDD cell phone sample they will purchase:
  • Is the frame based on prefixes, 1000-blocks, or 100-blocks? 
  • What types of wireless services are included: Dedicated? Shared? Cell? PCS? Special Billing?
  • What is the extent of non-coverage and the extent of overlap between the provider’s landline frames and cell frames? What prefixes, 1000-blocks, or 100-blocks are excluded; and why? What prefixes, 1000-blocks, or 100-blocks are duplicated; and why?
  • How are “shared” service numbers handled? Shared prefixes and shared 1000-blocks are those in which different types of service may be mixed at a lower level, such as within 100-blocks. This means that wireless numbers can exist together with landline numbers within a single prefix or 1000-block.
  • What levels of geography are available for sample selection and how have they been determined? County-level assignments are generally based on rate center location information, for prefixes or 1000-blocks, as provided by the service providers on administrative databases. Therefore most county-based geographies such as State, MSA, and DMA will be available, but sub-county geographies such as ZIP code will not be available.
  • What alternatives are offered for including landline numbers ported to wireless service? Landline numbers ported to wireless service may be included in a landline frame but will not be included in a cell phone frame. Although the wireless number associated with that port is in the wireless frame, it will not connect if dialed directly.
  • It is important to note that NeuStar's Intermodal Ported TN Identification Service license6 limits use of their data to “scrubbing”. In other words, it may only be used by a licensee in their efforts to comply with TCPA regulations prohibiting calls to cell phones using automated telephone equipment and may not be used to construct a cell phone frame. (See section on Legal and Ethical Considerations for further details.)
6 This database is licensed for the “sole purposes of: (1) avoid engaging in TCPA Prohibited Conduct by verifying whether TNs [telephone numbers] are assigned to a paging service, cellular telephone service, specialized mobile radio service, or other radio common carrier service, or any service for which the called party is charged for the call; (2) disclosing, selling, assigning, leasing or otherwise providing the TN Ports to a third party that itself qualifies as a "Customer" under an Intermodal Ported TN Identification Services Agreement for the sole purpose of avoiding TCPA Prohibited Conduct by verifying whether TNs are assigned to a paging service, cellular telephone service, specialized mobile radio service, or other radio common carrier service, or any service for which the called party is charged for the call”.

Evaluating the Adequacy of the Coverage Provided by U.S. RDD Cell Phone Samples
Currently available frames of cell phone numbers in the U.S. can provide excellent national coverage of the U.S. cell phone population. However, many coverage issues need to be considered when designing a cell phone sample, particularly one that is not national in scope. Because of this, it is even more important in cell phone surveys than in landline surveys to determine the residence of each respondent during the interview if this data element is needed for determining survey eligibility or in weighting.

The following are additional coverage considerations for the prudent researcher: 
  • Most wireless service areas or exchange boundaries are significantly larger than their landline equivalents. This means that it is common to live in a different county than the county in which the cell phone exchange rate center is located. Some research suggests that one-third of cell phone only subscribers do not live in the county associated with their rate center. As a result, defining sample geography by county can result in coverage error when:
    • There are no rate centers located within one or more of the counties to be sampled.
    • An unknown number of subscribers live in a neighboring county that is not included in the sample geography.
    • An unknown number of subscribers live in a county to be sampled, but have telephone numbers in a rate center located in a county that is not being sampled.
  • Different providers have different coverage areas for their service. This means that the geography covered by Provider A in a given community may be smaller or larger than the geography covered by Provider B in the same community.
  • The exchange (prefix) associated with a wireless telephone number represents the original point-of-purchase, where the subscriber lived when they originally purchased their service, and may not represent where the subscriber currently lives. Research by Arbitron suggests that in 2005 approximately one-sixth of cell phone only subscribers lived outside the metropolitan area associated with their rate center. This can result in different types of coverage error because:
    • Some subscribers have a phone number provided by an employer or associated with the subscriber’s place of business and not associated with the location of his or her residence.
    • Subscribers can move to a different city or state and keep their telephone number. This can create two types of problems:
      • First, a subscriber can be included in the geography associated with her/his telephone number but in which s/he no longer lives. (This problem can be handled by asking the respondent where s/he lives to determine eligibility.)
      • Second, this same subscriber is not covered in samples of telephone numbers from the geographic area where s/he currently resides.
Integrating a sample from an RDD cell phone frame with a sample from an RDD landline frame may be accomplished in several ways, and researchers should fully disclose the methods used. To produce representative estimates, surveys should collect sufficient information from respondents to apply weights that represent the probability of selection of each household and/or respondent. (See section on Weighting Considerations for further details.)

Households That Can Be Reached by Both the Cell Frame and Landline Frame
Two different sampling approaches have been used to handle situations in which a household in the U.S. can be reached by both a landline and a cell phone. There is, however, no consensus yet on which approach is the “best” design.

Nonscreening approach. This approach involves conducting the interview regardless of the frame from which the household was sampled (i.e., no households are excluded based on their type of telephone access). For these persons who can be interviewed by either landline or cell phone, the methods for determining the probability of selection of each household and/or respondent should be clearly specified in the survey documentation.

Screening approach. This approach involves conducting the interview only with people sampled via the cell phone frame who do not have a landline, thus excluding numbers from the cell phone sample in the overlap (i.e., screening out those persons with both a cell phone and a landline). In this alternative sampling design, persons who have at least one household landline telephone and use at least one cell phone would be eligible for inclusion only from the landline frame. Those interviewed via cell phone would be limited to cell phone only persons/households, i.e., ones without a residential landline.

Both of these approaches have been used and each has its own set of issues that have not been fully resolved. For example, some proportion of persons who have a landline and a cell phone always, or almost always, answer their cell phone and never, or almost never, answer their landline telephone. These persons are excluded in the sampling design with screening, but very rarely respond if sampled from the landline frame. Similarly, some proportion of persons who have a landline and a cell always, or almost always, answer their landline phone and never, or almost never, turn on their cell telephone except to make an outgoing call (e.g., adults who use their cell phone only for emergencies or only for other outbound calling). These persons may be overestimated in the nonscreening sampling design because they rarely or never answer their cell phone number.7

7 Unfortunately, at present the percentage of U.S. residents who engage in these behaviors is unknown, as no reliable national surveys have been reported that measure these approaches to using one’s landline and cell phone.

Therefore, regardless of the sampling approach used, it is recommended that researchers should obtain enough cell phone cases, or cell phone only cases, so that it is not necessary to apply large weights to the cell only cases.
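The relationship between selection probabilities and weights in the screening design can be sketched as follows. This is a minimal illustration, assuming simple random sampling of numbers within a single frame and assuming the screening approach has already made the two frames non-overlapping; real weighting schemes (see the Weighting section) involve many further adjustments.

```python
def base_weight(frame_size, sample_size, numbers_in_household):
    """Inverse-probability base weight for a household in a screening
    dual-frame design. A household reachable through k numbers on the
    frame has k chances of selection, so its inclusion probability is
    1 - (1 - p)^k, where p is the per-number sampling fraction."""
    p_one = sample_size / frame_size
    p_household = 1 - (1 - p_one) ** numbers_in_household
    return 1 / p_household

# A household with one number on a 100,000-number frame, sample of 1,000:
w_single = base_weight(100_000, 1_000, 1)  # approximately 100
# The same household reachable through two numbers is roughly twice as
# likely to be sampled, so its base weight is roughly halved:
w_double = base_weight(100_000, 1_000, 2)  # approximately 50
```

The sketch also shows why collecting the number of phone lines (of each type) from every respondent matters: without it, `numbers_in_household` is unknown and the base weight cannot be computed.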
Within Household Coverage Issues in U.S. RDD Cell Phone Samples
Cell phone samples also create sampling issues related to within-household coverage that have not been adequately addressed at this time. These include the following:
  • To what extent are cell phones shared by more than one person in a household and how should these cell phone numbers be handled?8
  • How should a cell phone belonging to a non-adult (minor) be treated? Should it be considered out-of-scope or treated in a manner similar to a landline that is answered by a minor?
  • Do researchers need to know, and thus to ask, if the cell phone is a business phone or used as a combination business/personal phone? And, under what circumstances should these conditions cause the number to be out-of-scope?
8 Unfortunately, at present the percentage of Americans who engage in these behaviors is unknown, as no reliable national surveys have been reported that measure cell phone sharing.

Several different methods of selecting a respondent in cell phone surveys are possible, and include:
1. Select the person who answers the phone, with no screening for others who possibly use the phone.
2. Select the person who is the “primary user” of the phone, with screening for the primary user.
3. Randomly, or at least systematically, select a respondent from among all the “eligible” users (e.g., all adults) of the cell phone, after screening for how many eligible people use the phone and making sure that all these people do in fact qualify as being “eligible” to be surveyed.
4. Randomly, or at least systematically, select from among all eligible persons in a household if this is a household study, regardless of whether all these people use the cell phone on other occasions, with screening for the number of eligible household members.

Future research will be needed to determine whether Methods 2 and 3 (above) are superior to Method 1. Method 4 would be chosen only when respondents represent households rather than just themselves. Method 4 might also be appropriate when both cell phone and landline numbers are included in the same survey.
There is an additional consideration related to within-household selection. In cell phone surveys, the person who answers the phone will turn out to be the respondent the majority of the time. If the introduction of a cell phone survey is of shorter duration than is common in a landline survey, the interviewer will be able to start the substance of the questionnaire more quickly. This may help reduce refusals that otherwise might occur because of the time it takes after initial contact to start the interview. As a consequence, any procedures to improve within-household coverage, by not always interviewing the person who initially answers the cell phone, likely will increase the refusal rate in cell phone surveys.
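Method 3 above (random selection among all eligible users of the phone) can be sketched as a small routine that also records the within-phone selection probability needed later for weighting. This is an illustrative sketch only; the function name and return convention are ours.

```python
import random

def select_respondent(eligible_users, rng=None):
    """After screening for all eligible users of the sampled cell phone,
    select one at random. Returns the chosen person and the within-phone
    selection probability (1/k for k eligible users), which the weighting
    stage needs to undo unequal selection chances."""
    rng = rng or random.Random()
    if not eligible_users:
        return None, 0.0  # no eligible user: the number is out of scope
    chosen = rng.choice(eligible_users)
    return chosen, 1.0 / len(eligible_users)

respondent, p_within = select_respondent(
    ["adult A", "adult B", "adult C"], random.Random(2008))
# p_within == 1/3 regardless of who is chosen
```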


One of the most problematic features of general population RDD cell phone surveys in the U.S. is their low response rates – consistently below 30 percent and overall about 10 percentage points less than in current landline surveys. Three distinct dimensions of this problem are addressed in this section:
1. The sources of nonresponse in RDD cell phone surveys.
2. What current research suggests may be effective operational strategies to reduce it.
3. How the AAPOR telephone response rate formulae can be modified to account for the unique features of cell phone interviewing.
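For reference when reading item 3, the unmodified AAPOR formula that these adaptations build on can be written out directly. The sketch below implements Response Rate 3 from AAPOR's Standard Definitions; the single-letter parameter names follow the conventional disposition categories, and the example counts are invented for illustration.

```python
def aapor_rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3: complete interviews over all known-eligible
    cases plus the estimated-eligible share of unknown-eligibility cases.
    I = complete interviews, P = partial interviews,
    R = refusals and break-offs, NC = noncontacts, O = other noninterviews,
    UH/UO = cases of unknown eligibility,
    e = estimated proportion of unknown cases that are actually eligible."""
    return I / (I + P + R + NC + O + e * (UH + UO))

# Illustrative (invented) disposition counts for a cell phone sample:
rate = aapor_rr3(I=300, P=20, R=400, NC=200, O=30, UH=500, UO=50, e=0.5)
# 300 / (300 + 20 + 400 + 200 + 30 + 0.5 * 550) = 300 / 1225, about 0.24
```

The difficulty the report raises is precisely that `e`, and the assignment of cell phone numbers to the disposition categories themselves, are harder to determine reliably than in landline surveys.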
Sources of Nonresponse
The reasons for low response rates in RDD cell phone surveys involve the same components that account for nonresponse in RDD landline surveys -- noncontact, refusals, other noninterviews, and undetermined eligibility. However, these components play different roles and have different impacts on overall nonresponse in an RDD cell phone survey compared to an RDD landline survey.

When considering only noncontacted numbers confirmed as working (i.e., ones that ring but never have been answered by an actual person), the empirical evidence to date suggests that they make up approximately the same proportion of final dispositions in both RDD landline and RDD cell phone surveys provided the number of call attempts is sufficiently large (> 5). The tendency for cell phone owners to constantly carry their cell phones with them means that they are potentially accessible to interviewers in a much wider variety of settings and for a greater part of their waking hours than in a landline survey. As more and more people think of the cell phone as their primary phone, the noncontact component of nonresponse in RDD cell phone surveying is expected to decrease. For the same reason, however, the noncontact component may well increase in RDD landline surveys.

Refusals are a main source of nonresponse in RDD cell phone surveys as they are in RDD landline surveys, especially those that use many callbacks. However, in several mode comparison studies conducted between 2003 and 2006, the refusal rate in the RDD cell phone survey exceeded the rate in the comparable RDD landline survey by five to 20 percentage points.

Given the structure of the telephone system in the U.S., it is easy to understand why refusals are more numerous when the mode of contact is a cell phone.
Reasons for this include:
1. Many owners appear to think of the cell phone as a private and personal form of communication and have tended to share their cell number only with family and close friends, if at all. There is empirical evidence (as well as myriad anecdotal experience) that supports this assertion. In response to an open-ended question about whether they would mind being called on their cell phones by a research organization, respondents in a 2003 survey frequently mentioned invasion of privacy as a primary reason for being opposed. As one respondent replied, “my [cell] phone is for ‘personal use,’ not for annoying people to call me on.” Or as another respondent said, “It's private. I don't give the number to anybody.” Thus the response to an interviewer’s cold call may mix surprise with hostility and thus lead to an immediate, flat-out refusal. However, it is expected that the more often people use their cellular phones, the less likely they will hold this attitude. Although these respondents’ illustrative comments were made five years ago, U.S. cellular numbers still are not listed in any public directory, attesting to the pervasiveness of the feeling that the cell phone provides “private” communication.
2. The called party is charged for a cell phone call. Even though U.S. service providers now offer a number of different calling plans that provide varying levels of “free” minutes, many potential cell phone respondents will incur costs and so will be likely to refuse immediately. This problem is exacerbated by not being able to send advance mailings with some form of remuneration to the owners of sampled RDD cell phone numbers prior to calling them in order to “warm them up,” as can be done with RDD landline numbers that are matched to household addresses.
3. The variety of settings in which a cell phone owner might receive a call also generates refusals. If a potential respondent is in a restaurant, driving a car, or engaged in any activity not conducive to a telephone conversation with a stranger, the request for an interview might be met with a hasty “No” and a hang-up, before the interviewer even has a chance to mention rescheduling the call.
4. It is more difficult to convert refusals when the mode of administration is a cell phone than with a landline telephone. This is because interviewers trying to convert initial refusals are likely to reach the same person over and over again, rather than someone else within the household who may be more willing to participate (as often happens when the mode is a landline telephone).

Taking all of these factors into account, one is led to the conclusion that high refusal rates – and thus low response rates – will plague RDD surveys of persons in the U.S. contacted via their cell phone for the foreseeable future. However, as the cell phone grows in popularity, the reaction that an interviewer has invaded users’ privacy is anticipated to become less intense. Although U.S. service providers have begun to reduce the costs associated with receiving calls on a cellular phone, movement to date in that direction has been slow. Were this rate of change to accelerate, this cause of refusals should lessen. In addition, survey methodologists should be able to devise more effective refusal avoidance strategies that are better targeted at sampled cell phone owners, thereby better preventing refusals in the first moments of contact.
Nevertheless, although some of the conditions that foster high refusal rates in cell phone surveys may ameliorate over time, refusals are expected to remain substantial because RDD cell phone surveys are exposed not only to unique circumstances, but also to the same influences that are driving down response rates in standard RDD landline surveys.

Other Noninterviews
This component of nonresponse consists of two types of failure to achieve participation:
1. The intended respondent cannot physically or mentally participate in an interview, speaks a different language from the interviewer, and/or will not be available throughout the survey field period.
2. The intended respondent has other reasons for not being able to participate at the specific time(s) when interviewers call.

Undetermined Eligibility
In RDD landline telephone surveys, a proportion of the selected sample ends in an ambiguous region between definitely working and definitely not working, and if it is working, uncertainty often remains as to whether it is a residential number. At the end of the survey’s field period these numbers are given a final status of “unknown” or “undetermined” eligibility.
The size of this Undetermined Eligibility component is likely to be much larger in an RDD cell phone survey than in an RDD landline survey because cell phone numbers are not as easily identified as being definitely ineligible for the following reasons:
1. Many business cellular numbers also are used for personal communication. As a result, fewer cell numbers can be classified as ineligible because they are used solely for business or commercial purposes. As such, distinguishing between residential and commercial numbers is more problematic and less reliable in RDD cell phone surveys, and there is no easy method for determining whether a given cell number has reached an eligible respondent on a “personal” cell phone.
2. Many owners do not use their cell phones on a regular basis. They may turn them on only in emergencies or only when they want to make another outbound call. This practice leaves the working status of a great many RDD cell phone numbers in doubt at the end of the field period.
3. The turnover of cell numbers – what the telecommunications industry refers to as “churn” – is greater for cell phones than for landlines. This increases the chances that a sample number will not be properly classified by the service provider as nonworking.
4. Operator messages in the U.S. differ by provider and often are unclear and confusing. As a result, it often is very difficult for an interviewer to decide whether, in fact, the number is truly ineligible. Table 1 presents some examples of common operator messages that are highly ambiguous as to whether a cell phone number is working or not working.
Table 1. Examples of Ambiguous Cell Phone Operator Messages
Please hold until the Nextel client you are trying to reach is located.
The number or code you dialed is incorrect. Please check the number or code and try again.
The cellular phone you have called is turned off or out of the service area; please try your call again.
This number is not accepting calls at this time.
Future developments may help to reduce the unknown eligibility problem in RDD cell phone surveys. The industry is consolidating, and with fewer U.S. companies the jumble of operator messages eventually should give way to clearer and more standardized wording. Most encouraging is the expectation that the sporadic use of cell phones will decline as more individuals come to rely on the technology. With the passing of the current generation of oldest adults in the coming decades, it is anticipated that a greater proportion of those with cell phones will leave them on most of the time. For these reasons, the component of nonresponse due to an inability to determine a number’s working status is expected to decline in importance and have less impact on an RDD cell phone survey’s overall response rate.

Differential Nonresponse among Cell Phone Only Respondents
A final aspect for understanding nonresponse in cell phone samples concerns a form of differential nonresponse that occurs within cell phone surveys. In 2007, the proportion of completed interviews with cell phone only respondents in U.S. national cell phone surveys tended to be at least twice as high as the estimated percentage of cell phone only adults within the general adult population. This suggests that (a) contact rates are substantially higher for cell phone only respondents compared to cell phone respondents who also have landlines, and/or (b) refusal rates are substantially lower for cell phone only respondents than for cell phone respondents who also have landlines.
This has important implications for both the efficiency of sampling cell phone only adults in cell phone samples as well as the representativeness of unweighted samples from cell phone surveys.

Operational/Logistical Considerations Related to Unit- and Item-Nonresponse in Cell Phone Surveying
Contacting and interviewing respondents via their cell phone involves a number of operational considerations which differ from contacting and interviewing a person over a landline. These considerations apply regardless of whether an RDD frame is used or a list frame. These include place-shifting (respondents are no longer reached only in their homes, but may be located outside of the home as well) and associated concerns for respondent safety; increased respondent burden (particularly when respondents incur a financial cost for responding by cell phone); and the potential for behavioral differences in terms of how respondents use their cell phones and respond to survey requests and questions over these devices.
While research in these and related areas is on-going, the body of empirical evidence from which to craft optimal procedures for reducing nonresponse in cell phone surveys is thin; i.e., much remains unknown. With this in mind, the operational considerations and procedures outlined below are meant to serve as guideposts until more definitive findings in these areas become available.
Respondent Safety
Because of the mobile nature of cell phones, a cell phone respondent may put herself/himself at risk when speaking to a survey interviewer in ways that do not occur in landline surveys.  As such, researchers should consider having their interviewers trained to be on alert to readily mention the possibility of calling back at another (safer) time, possibly even on a landline telephone. A separate temporary disposition code might be used for these instances to allow sample managers the ability to better track and address these cases. (More discussion of respondent safety appears in the section on Legal and Ethical Issues.)

Data Quality and Item Nonresponse
Many users of cell phones appear very willing to talk in all kinds of locations, including public and semi-private places, in which they are seemingly oblivious of those around them. Nevertheless, compared to a survey respondent reached on a landline, a respondent reached on a cell phone may be more likely to consciously or unconsciously limit the candor/openness, and thus the completeness and accuracy, of her/his responses depending on the sensitivity of the research questions (e.g., health, finances, crime, and other sensitive topics; income, age, and other demographic data; etc.). As such, whenever it is appropriate and based on the nature of the topics being surveyed, researchers should consider having interviewers determine whether the respondent on a cell phone is in an environment that is conducive to providing full and accurate answers to all of the questions being asked. If this is not the case, interviewers should be trained to know when to schedule a callback.

To further promote candor, reduce item nonresponse, and avoid disclosure of sensitive information, questionnaires for cell phone surveys should be carefully evaluated so that even if the question wording is sensitive the response categories may be able to be designed to protect the privacy of the information from someone who might overhear them being spoken by a cell phone respondent.
Questionnaire Length
Although many of the current RDD cell phone surveys in the U.S. appear to have been planned with questionnaires that take an average of approximately 10 minutes to complete, interviews in a large statewide survey on health behaviors took an average of 30 minutes to complete. Thus, to date, there is no evidence that cell phone interviews always have to be extremely short (e.g., five or fewer minutes), nor for that matter do they always need to be shorter than their landline counterparts in surveys that include both those reached via a landline and those reached via a cell phone.

However, because people speaking on their cell phone more often are under special time constraints than when speaking on a landline, survey researchers should take this into account when planning a cell phone questionnaire. Thus, researchers should consider explicitly whether the length of an interview that is conducted on a cell phone should be shorter in duration than one conducted on a landline.

Calling protocols
In the RDD cell phone studies that have been conducted to date, exceptionally long field periods and a large number of call attempts were found to increase contact rates and lessen unknown eligibility rates. In RDD cell phone surveys with very short interviewing periods and a limited number of call attempts, noncontact rates and unknown eligibility rates have been found to be much greater. For example, in one experiment, when the number of call attempts was limited to four, the noncontact rate approximately doubled (Steeh, Buskirk, and Callegaro, 2007).

Although higher response rates may be achievable by making additional callbacks to cell phone respondents, the personal nature of the cell phone suggests the need for caution in this area. To reduce the potential for overburdening the cell phone respondent pool, it is
recommended that the total number of call attempts be limited to a modest number (in the range of six to 10 call attempts) in comparison to the greater number of attempts often used when surveying landline telephone numbers.

As with landline surveys, calls should be attempted at different days of the week and times of the day. Cell phone surveys carried out to date have used different calling protocols. These have included calling mainly during early evenings and on weekends when most users have free service; but some studies suggest that contact and cooperation may not be that different across the different day-parts. More research is required, however, to determine the optimal calling pattern across different days and time slots.

Furthermore, because cell phones can be in use in geopolitical areas other than where the cell phone’s area code is located (e.g., a respondent is away on business or vacation, or simply has moved to another location), calling windows may need to be modified to reduce the chances of reaching a respondent who has moved to, or is currently in, a different time zone at a time considered too early or too late for calling there. At present, it is unclear what percentage of the U.S. cell phone population may be affected by this consideration, but it is likely to grow over time.

Compared to the standard protocol of allowing a landline number to ring at least six times before coding it a “Ring No Answer (RNA),” interviewers should allow the cell phone to ring a minimum of eight times before assigning an RNA or other disposition code for that contact attempt. It often takes as many as eight rings before a cell phone’s voice mail starts up, so not waiting for the extra rings would adversely affect the performance of the sample whether in quantifying true RNAs or in the ability to leave voicemail messages.
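A minimum-ring rule like the one above can be made explicit in sample-management code. The following is an illustrative sketch only; the function and disposition names are hypothetical, not part of any standard:

```python
# Illustrative sketch: enforcing the minimum-ring rule before assigning a
# "Ring No Answer" (RNA) disposition. Names are hypothetical.

MIN_RINGS_LANDLINE = 6  # standard protocol for landline numbers
MIN_RINGS_CELL = 8      # cell voice mail may not start until ~8 rings

def disposition_for_unanswered(rings: int, is_cell: bool) -> str:
    """Return an interim disposition for an unanswered call attempt."""
    min_rings = MIN_RINGS_CELL if is_cell else MIN_RINGS_LANDLINE
    if rings < min_rings:
        # Attempt abandoned too early to count as a valid RNA.
        return "PREMATURE_HANGUP"
    return "RING_NO_ANSWER"
```

Tracking premature hangups separately keeps the RNA counts honest and preserves the chance to leave a voicemail on a later attempt.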

Initial research has shown that in the event of a missed call, cell phone users are more likely than landline users to attempt to recontact the number that appears on their Caller ID. As a result, researchers should consider the need for the telephony infrastructure of the survey research calling center to handle such inbound calls. Ideally, the phone number that displays on the cell phone’s Caller ID should be able to be redialed by the respondent and reach an inbound line on which an interview can be conducted. In turn, calling centers should be prepared and able to schedule a future date and time to call back a respondent who has called into the call center if it is not possible to conduct the interview at the time of the inbound call from the respondent. Further, at the respondent’s request, interviewers need to be able to enter an alternative telephone number (typically a landline number) into the CATI system at which to attempt the recontact.

Voicemail Messages
Leaving a voicemail message on the first call attempt to a cell phone can act as an important pre-alert of the survey request. This may be particularly important given that addresses are not currently available for matching to U.S. cell phone numbers, thereby precluding the use of advance contact letters by mail that is possible with landline surveying. In addition, researchers should decide whether interviewers should leave a callback number in this message because the outbound number that appears on the cell phone’s Caller ID may not be valid for an inbound callback.

Text Messaging
In theory, an advance text message to a cell phone number might serve the same purpose as an advance letter mailed to a landline respondent. However, legal barriers currently exist in the U.S. to sending unsolicited text messages. Before the new U.S. laws were enacted, some researchers incorporated advance text messaging into their survey designs. Although the results suggested that sending a text message did not increase cooperation rates, knowing whether the message was actually delivered to a cell phone helped to reduce the cases of unknown eligibility. If the legal landscape in the U.S. happens to change, advance text messaging may become a viable medium to increase contact and response rates in cell phone surveys.

Determining Geographic and Other Types of Eligibility
Geographic screening. Researchers need to consider the geographic implications of reaching a cell phone user in light of the target population that the survey is meant to represent. Geographic screening of those reached on their cell phone appears to be necessary in all cell phone surveys that are not national in scope.

Geographic screening often is not easy to carry out accurately. If it is not well crafted by the researchers and well implemented by the interviewers there will be many Errors of Omission – false negatives in which someone is incorrectly screened out when they are in fact geographically eligible – and Errors of Commission – false positives in which someone is incorrectly screened in when in fact they are geographically ineligible. Furthermore, screening may add to the number of refusals that occur during the survey introduction, when such screening is likely to be carried out.
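The two screening error types described above are, in effect, the false negatives and false positives of a classification problem. A minimal sketch (the labels are illustrative):

```python
# Sketch of the two geographic screening error types as classification
# outcomes; labels are illustrative, not standard disposition codes.

def screening_outcome(truly_eligible: bool, screened_in: bool) -> str:
    """Classify one screening decision against the respondent's true status."""
    if truly_eligible and not screened_in:
        return "ERROR_OF_OMISSION"    # false negative: eligible but screened out
    if not truly_eligible and screened_in:
        return "ERROR_OF_COMMISSION"  # false positive: ineligible but screened in
    return "CORRECT"
```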

Personal and/or business cell phones. Clear and consistent rules for interviewers to use to determine when a number should be assigned a disposition of “business phone” should be established. Many respondents using a company-provided cell phone typically use the phone to take both business-related and personal telephone calls (some employers allow this, whereas others do not) and typically do not volunteer that the phone is used for business purposes unless they are probed or asked directly. An a priori decision about whether such numbers should be considered as being eligible in a survey needs to be made by the researchers at the time of planning the survey, and rules should be established for use by interviewers to accurately determine whether these numbers are eligible or ineligible.

Group quarters. Survey designers need to consider whether other persons living in group quarters (e.g., dormitories, military barracks, etc.) along with a sampled cell phone owner should be made eligible sample members for studies where such respondents have traditionally been included.

Minors. Due to the number of persons under the age of majority (i.e., 18 years old in most cases, although some states set the age at 19 or 21) who use cell phones, researchers need to establish minimum age requirements for the survey, including who can serve as a household informant during the survey introduction. Scripts and disposition codes must be devised for interviewers to use whenever a person under the age of majority is reached.

Remuneration and Incentives
Because of the cost structure of cell phone billing currently in the United States, there often may be a financial burden upon the respondent for an incoming research call – one that does not occur with a landline phone. Therefore, when appropriate, it is recommended that researchers offer some form of remuneration to eliminate this cost burden to the respondent. The remuneration should be based on (a) the average charge for “out of plan” minutes across cell phone service providers and (b) the length of the interview. Decisions about remuneration should be separate from decisions about the possible use of incentives.
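As a hypothetical illustration of this remuneration rule, the per-minute rate below is an assumption for the sketch, not a documented industry average:

```python
# Hypothetical illustration of the remuneration rule: average "out of plan"
# per-minute charge times interview length. The rate is an assumed figure,
# and remuneration is separate from any incentive payment.

ASSUMED_OUT_OF_PLAN_RATE = 0.45  # dollars per minute (assumption)

def remuneration(interview_minutes: float,
                 rate: float = ASSUMED_OUT_OF_PLAN_RATE) -> float:
    """Cost-offset payment for a cell phone interview, rounded to cents."""
    return round(interview_minutes * rate, 2)
```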

Although remuneration is a sine qua non for surveys that call cell phones, researchers must keep in mind that incentives (monetary and non-monetary, as appropriate) may also be needed, as with many landline surveys. And, the type of incentives used to raise the response propensity of those sampled on their cell phone may need to be different from those that have been tested and used in landline surveys.

Refusal Conversions
As noted above, refusal rates tend to be higher among respondents reached by cell phone than among those reached by landline telephone. As such, it is recommended that until definitive research has been conducted, refusal conversion attempts be of a limited nature to reduce the potential for further agitating cell phone respondents. This is in large part a result of reaching again the same respondent who previously refused and not some other member of the sampling unit (household), as often is the case when trying to convert refusals in landline surveys.

Interviewer training
Interviewing respondents by cell phone is a more complex task than is interviewing a respondent on a landline. As noted, the calling protocols, case dispositioning, eligibility requirements, and interviewing techniques may in many instances be quite different. Therefore, researchers should ensure that interviewers are properly trained to handle these interviewing requirements and have the tools (e.g., scripts, persuaders, and other protocols) at hand to conduct a high quality interview when reaching a respondent on their cell phone.

Considerations in Calculating Response Rates in U.S. Cell Phone Surveying

Disposition Codes Used in Cell Phone Surveys

In general, the formulae for calculating landline response rates published in the AAPOR Standard Definitions can be adapted to cell phone surveys. The differences lie primarily in the interim (temporary) codes assigned prior to the final disposition and are due to the nature of the call to a cell phone. New outcomes are possible and current AAPOR codes sometimes change in meaning or prevalence, whereas some old codes can be eliminated altogether. In calls to cell phones, new interim codes are possible such as “respondent not reachable at this moment” or when a “network busy” message is encountered. Eventually when the field period is closed, codes such as these that are still in their interim status must be classified into a final disposition status.

New disposition codes are required for situations that do not arise in landline surveys. For example, unlike a landline, a cell phone can be in a geographic area or other location (e.g., inside a tunnel) without service coverage. Cell phones also are switched off more often than landline ringers are turned off in a household. Usually in these cases an operator message or a voice mail message will allow interviewers to classify the outcome into new codes that account for these circumstances. Another situation that does not arise in landline surveys stems from the fact that users may be doing almost anything when they answer their cell phone — driving a car, in a restroom, flying in a helicopter, at a basketball game, eating in a noisy restaurant, etc. The use of a new interim disposition code that identifies these outcomes would help researchers determine how large an effect this “temporary unavailability” has on the cell phone survey process.

When calling cell phones, some landline disposition codes may have different or expanded meanings. The “breakoff” code in landline surveys is a case in point. It indicates that the landline respondent herself/himself actively has terminated the interview prematurely. In a cell phone survey, on the other hand, a “breakoff” may also occur as the result of a dropped call or other technical problems and may have nothing to do with the respondent actively deciding to break off from the interview. These kinds of new meanings should be recognized as new interim disposition codes in cell phone surveys that need different handling than traditional breakoff refusals in landline surveying.
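As an illustration only, a study might extend its disposition scheme with interim codes like the following for the cell-phone-specific outcomes described above; these labels are hypothetical and not part of the AAPOR Standard Definitions:

```python
# Illustrative (non-official) interim disposition codes for outcomes unique
# to cell phone dialing; a real study would fold these into its own
# extension of the AAPOR Standard Definitions scheme.

CELL_INTERIM_DISPOSITIONS = {
    "NO_SERVICE_COVERAGE": "Phone outside coverage area (operator message)",
    "PHONE_SWITCHED_OFF": "Phone powered off (voice mail or operator message)",
    "NETWORK_BUSY": "Carrier 'network busy' message encountered",
    "NOT_REACHABLE_NOW": "Respondent not reachable at this moment",
    "TEMPORARILY_UNAVAILABLE": "Answered, but setting/activity precludes interviewing",
    "TECHNICAL_BREAKOFF": "Call dropped mid-interview (not a respondent refusal)",
}
```

At the close of the field period, each interim code would still need to be mapped to a final disposition before response rates are calculated.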

The Household Level Refusal code (i.e., a refusal that occurs before a designated respondent has been selected) is an example of a code that has a different prevalence in cell phone surveys. Due to the personal nature of the cell phone, a refusal by somebody other than the designated respondent is much less likely to occur in cell phone surveys than in landline surveys. Other examples of codes that arise less often in cell phone surveys include fax machine and busy signals.

The codes in landline surveys that often are not applicable in a cell phone survey include other household level codes, such as group quarters and household level language problems.

Calculation of Cell Phone Survey Response Rates
Two issues affect the calculation of response rates in cell phone surveys that do not arise in landline surveys.

First, operator messages tend to be less standardized for cell phones than for landline telephones and vary by company. Some of them are ambiguous or unclear, and thus open to differing interpretations (as shown in Table 1). If this problem is not resolved, the number of unknown eligible cases will remain higher than in a comparable landline survey.

Second, at present it is unclear how the Unknown Eligibility category of nonresponse should be adjusted in Formulae 3 and 4 of the AAPOR Standard Definitions when the mode of administration is a cell phone. How e (the estimated proportion of cases of unknown eligibility that are treated as eligible) is defined will have important effects upon response rates. Until this dilemma is resolved, it is recommended that researchers use AAPOR Formulae 1 and 2 for cell phone surveys.
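For reference, Response Rates 1 and 2 from the AAPOR Standard Definitions can be sketched as below; unlike Formulae 3 and 4, they require no estimate e of the eligible share of unknown-eligibility cases:

```python
# Sketch of AAPOR Response Rates 1 and 2 (Standard Definitions). Counts:
# I = complete interviews, P = partial interviews, R = refusals/breakoffs,
# NC = noncontacts, O = other noninterviews, U = cases of unknown eligibility.
# Unlike RR3/RR4, no estimate "e" of the eligible share of U is needed.

def rr1(I, P, R, NC, O, U):
    """AAPOR RR1: completes over all potentially eligible cases."""
    return I / (I + P + R + NC + O + U)

def rr2(I, P, R, NC, O, U):
    """AAPOR RR2: completes plus partials in the numerator."""
    return (I + P) / (I + P + R + NC + O + U)
```

Because all unknown-eligibility cases remain in the denominator, RR1 and RR2 are conservative: the larger Undetermined Eligibility component of cell phone surveys pulls both rates down.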

Because of these and other differences between outcome dispositions in landline and cell phone surveying, considerable caution is encouraged when comparing response rates for landline samples versus cell phone samples.

Combining Cell Phone and Landline Samples into One Response Rate.
To date, RDD surveys in the U.S. using the dual frame sample design have reported response and nonresponse rates separately for each frame. At present, it is not clear that there is a viable alternative.
Clearly, reporting a common response rate is necessary when the sample design uses the cellular frame as a supplement to a landline survey and screens for people living in households without a traditional telephone. However, none of the current formulae in AAPOR’s current Standard Definitions indicates how such a response rate should be calculated.

Considerations Regarding Legal and Ethical Issues in Cell Phone Surveys 9

Legal Restrictions on Calling Cell Phones – the TCPA
Under the federal Telephone Consumer Protection Act of 1991 (TCPA, 47 U.S.C. 227), which is enforced by the U.S. Federal Communications Commission (FCC), automatic telephone dialing systems cannot be used to contact a cell phone without the user's “prior express consent” – a content-neutral requirement that applies to all calls, including survey research calls.10  The TCPA defines “automatic telephone dialing system” as equipment that has the capacity to store or produce telephone numbers to be called using a random or sequential number generator, in conjunction with dialing such numbers. As clarified by the FCC’s 2003 report, this includes all forms of auto-dialers and predictive dialers, and applies to intra-state calls, interstate calls, and calls from outside the United States.11

9 Disclaimer: The information provided in this section is for guidance and informational purposes only. It is not intended to be a substitute for legal advice.
10 See http://www.fcc.gov/cgb/policy/telemarketing.html and http://www.law.cornell.edu/uscode/47/227.html
11 See http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-03-153A1.pdf

To ensure compliance with this federal law, in the absence of express prior consent from a sampled respondent, telephone call centers should have their interviewers manually dial cell phone numbers (i.e., where a human being physically touches the numerals on the phone to dial the number). Of note, there is also no “good faith exception” for inadvertent or accidental calls  to cell phones, so not knowing that a cell phone number is being dialed (as happens in RDD landline samples that unknowingly reach cell phones) is not an acceptable excuse for violating the federal regulations.

Although telephone sample providers have been made aware of this law, and NeuStar provides a useful service for recognizing cell phone numbers that have been “ported” from residential lines, their methods may not be a perfect solution to the problem. At the present time, CMOR is working for the benefit of the research industries to amend the TCPA to exempt research calls.12  However, in the meantime, research call centers should only use manual dialing to reach cell phone numbers unless express prior consent has been received from the respondent that it is permissible to call her/him. This could occur, for example, if a respondent is first contacted on a cell phone that was hand-dialed by an interviewer, and agrees to the scheduling of a callback to that number.

12 www.cmor.org

Legal Considerations Regarding Text Messaging and Spam
The TCPA restrictions on using an automatic telephone dialing system to call a cell phone could apply to the sending of text messages as well as to regular phone calls, and several recent appeals court cases have left the TCPA’s application to text messaging unclear. In addition, researchers who send text messages to cell phones in compliance with the TCPA (either manually or with express prior consent) could find their messages subject to the CAN-SPAM Act (16 CFR Part 316), which regulates commercial email (spam). Even though legitimate survey and opinion research is not defined by the TCPA as being “commercial” in nature, researchers are encouraged to always include opt-out notices and capability in text messages, as would be required under the CAN-SPAM Act. There are also numerous state laws regulating spam, and telephone researchers should be aware of and consider the implications of those that may apply to any cell phone surveying they may be planning to conduct in particular states.

Legal and Ethical Considerations Regarding Possible Harassment Due to the Number of Callbacks Used
There are various state-level harassment laws in the U.S. that need to be considered when determining how to place callbacks to a cell phone respondent. For example, under current Utah law, it is illegal for anyone to cause a telephone to ring “repeatedly” or “continuously” (the law is not more specific). Under Missouri law, it is considered harassment for anyone to make "repeated" telephone calls (in one case brought under the law, four call attempts to an answering machine were sufficient to constitute harassment). In Hawaii, it is illegal to repeatedly make a communication anonymously or at an extremely inconvenient hour, and in Montana one cannot use a telephone to disturb, by repeated telephone calls, the peace, quiet, or right of privacy of a person. Although a matter of interpretation, multiple callback attempts to a respondent run the risk of violating any one of these state laws.
With the advent of Caller ID, even though a cell phone respondent may not hear the phone ring every time a survey organization calls, the respondent often will have a record of how many calls have been made to her/his number from a given number. Thus, in addition to being a possible ethical violation of what most would construe as “harassment,” and regardless of whether it also is a legal violation, those planning to conduct cell phone surveys need to think carefully about how multiple callbacks may affect their final response rates if they alienate cell phone owners with “too many” (e.g., more than 10) and/or “too frequent” (e.g., several calls within a 24-hour period) callbacks.
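One practical safeguard is to encode attempt limits directly into the sample-management or dialer rules. The sketch below (in Python) is a minimal illustration of such a rule under assumed thresholds; the specific limits shown are arbitrary examples, not legal guidance, and the function name and structure are hypothetical.

```python
from datetime import datetime, timedelta

# Illustrative thresholds only; actual limits must reflect applicable law
# and the survey organization's own calling protocol.
MAX_TOTAL_ATTEMPTS = 10        # cap on "too many" callbacks overall
MAX_ATTEMPTS_PER_DAY = 2       # cap on "too frequent" calls within 24 hours

def may_dial(call_history, now=None):
    """Return True if another callback to this number is permissible.

    call_history: list of datetime objects, one per prior call attempt.
    """
    now = now or datetime.now()
    if len(call_history) >= MAX_TOTAL_ATTEMPTS:
        return False
    # Count only attempts made within the last 24 hours.
    recent = [t for t in call_history if now - t < timedelta(hours=24)]
    return len(recent) < MAX_ATTEMPTS_PER_DAY

# Example: two attempts already made this morning, so no third call today.
history = [datetime(2008, 4, 1, 9, 0), datetime(2008, 4, 1, 11, 30)]
print(may_dial(history, now=datetime(2008, 4, 1, 15, 0)))   # False
print(may_dial(history, now=datetime(2008, 4, 2, 12, 0)))   # True
```

A production system would also track refusals and scheduled appointments, but even this simple gate prevents the clustered calling patterns most likely to be perceived as harassment.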

Ethical Considerations and Time-of-Day Calling Restrictions
Federal law limits telemarketing calls to between the hours of 8:00 A.M. and 9:00 P.M. local time for the respondent being called. State laws can restrict those hours further, and some states have specific content-neutral time-of-day restrictions on the use of autodialers. Even though telephone survey researchers are not bound by these laws, CMOR recommends that researchers abide by the applicable federal and state laws regarding time of calling for the location of the respondent being contacted.

Ethical Considerations for Taking Safety and Respondent Privacy into Account
The mobile nature of cell phone technology allows for a respondent to be engaged in numerous activities and to be physically present in various locations that would not normally be expected in reaching someone on a fixed landline number. In particular, the operation of a motor vehicle or any type of potentially harmful machinery by a respondent during a research interview presents a potential hazard to the respondent and to anyone else in the general vicinity of the respondent (e.g., fellow passengers in the car).

Any researcher who conducts a survey that reaches people on a cell phone should take appropriate measures to help protect the safety of the respondent and whoever may be nearby. For example, merely asking respondents whether they are operating a motor vehicle is insufficient because the potential risks from distraction are not limited to driving. Questions about specific activities also suggest inappropriately that the researcher is in the best position to make judgments about respondents’ safety (and to accept the consequences of an incorrect judgment). Therefore, it is suggested that researchers leave the responsibility for determining safety to the respondents themselves and encourage respondents to consider their own safety by asking about it directly (e.g., “Are you in a place where you can safely talk on the phone and answer my questions?”). If respondents indicate that they cannot safely talk, contact should be quickly ended. Interviewers should not extend the contact at that time by attempting to schedule an appointment for a callback.

Some survey respondents reached on cell phones may be seemingly oblivious to other persons in their vicinity who may be listening (willingly or unwillingly) to their conversation. As such, respondents in public or semi-private places should not be required to verbalize responses that could (a) reasonably place them at risk of criminal or civil liability, (b) be damaging to their financial standing, employability, or reputation, or (c) otherwise violate their privacy. Thus, whenever it is appropriate, and based on the nature of the topics being surveyed, researchers should have interviewers determine whether respondents on cell phones are in an environment where privacy can be maintained. Alternatively, whenever appropriate and based on the nature of the topics being surveyed, researchers should design their cell phone questionnaires so that answers may be provided in a non-disclosive categorical format (e.g., answering with “A, B, C” or “1, 2, 3”) rather than voicing a more disclosive response.
It is suggested that checks on safety and privacy be made independently and at different stages of the interview. For example, questions about safety could appear early, e.g., immediately after a brief introduction. Questions about privacy could follow a description of the survey, which would permit respondents to make informed decisions about the risks of disclosure.


Considerations Regarding Measurement in Cell Phone Surveys
Cell phone surveys present special challenges not only in sampling, nonresponse, weighting, and administration, but also in measurement. The measurement challenges are twofold:
  • First, what, if any, additional survey items are required in a cell phone questionnaire – and in a parallel landline questionnaire if samples are to be merged – to provide the data needed for weighting and other key analyses? (This weighting issue is addressed in the following section on Weighting.)
  • Second, how does the unique nature of the cell phone affect the interaction between interviewer and respondent, and what impact, if any, does this have on data quality?
As with other aspects of cell phone surveys, currently available evidence regarding both of these issues is relatively sparse and mixed.

Data Quality
There have been suggestions that data quality from cell phone interviews may be lower than from comparable landline interviews. Even though many cell phone users seem perfectly willing to carry on personal conversations in public places, it is reasonable to hypothesize that some may consciously or unconsciously limit their candor or openness, and thus the accuracy of their responses, depending on the sensitivity of the research questions. For example, a person on a crowded bus answering questions for a study on sexually transmitted diseases, race-related attitudes, income, or other sensitive topics may answer those questions differently than if s/he were in the privacy of her/his home.
Furthermore, the volume and quality of voices on cell phones may make it difficult for respondents (especially those with hearing difficulties) to clearly hear and comprehend all questions, and for interviewers to clearly hear and comprehend all answers, especially when respondents are reached in noisy locations. This also suggests that cell phone interviews may take longer than comparable landline interviews for some respondents. Alternatively, concerns about cost and inconvenience might lead some respondents to hurry through the interview, which could mean that they do not carefully consider their responses before answering.

To date, no evidence has been reported to support these presumptions. On the contrary, an examination of data quality by Steeh (2005) found few differences between cell phone and landline interviews in the amount of item nonresponse, the strength of theoretically meaningful correlations among items, and overall distributions when demographic differences between the samples were controlled. Thus, in this research, the data provided by respondents using cell phones did not significantly differ from those of respondents using landline phones when the same demographic groups, such as age and race cohorts, were compared. Similarly, research conducted in the last decade by Statistics Sweden (Kuusela, Callegaro, & Vehovar, 2007) has not shown any significant difference in data quality when comparing interviews done on a landline with interviews done on a cell phone.

Brick et al. (2007) also found no differences in terms of missing data, in the length of open-ended responses, or in responses to four sensitive questions. Pew’s 2006 study found no significant differences between cell phone and landline interviews in interviewer assessments of whether respondents were distracted or doing other things while also responding to the interview; there also were no significant differences in levels of item non-response. Kennedy’s (2007) analysis of response order effects and straight-lining in dual frame studies conducted by Pew also found no conclusive evidence of measurement quality differences between landline and cell phone samples.
In sum, the empirical findings to date from research with cell phone respondents contradict the dire assumptions of poorer data quality. However, although few data quality differences between cell phone and landline surveys have been noted, much more research is needed on this topic. In the meantime, it is prudent for researchers to train their interviewers to be alert to whether a respondent on a cell phone is in an environment that is conducive to providing full and accurate answers to the questions the interviewer is asking.

Furthermore, as part of this cautionary approach to more fully understanding possible measurement errors in cell phone surveys, researchers are encouraged to ask each cell phone respondent whether or not s/he has been reached at home, in order to investigate data quality differences that may be associated with this in-home/out-of-home dichotomy.


Considerations Regarding Weighting in RDD Cell Phone Surveys
Weights are very often needed in the analysis of RDD telephone surveys. Reasons for weighting include (a) the presence of differential probabilities of selection, (b) differential propensities to respond, and (c) sampling frame coverage problems among various groups in the population.

The emergence of households that have residents with cell phone service, but no landline service (i.e., the cell-only population) affects the way weights are constructed for RDD telephone surveys in the U.S. that sample cell phone numbers. In addition to surveys where the sampling frame is made up only of cell phone numbers, surveys where weighting can be affected by the cell-only phenomenon include those where: (a) the study population includes (at least in theory) the cell-only population; or (b) multiple sample frames are used to obtain coverage of the cell-only population.

In the remainder of this section guidelines for weighting are discussed. These guidelines apply specifically to surveys where:
1. The samples for the survey are selected from RDD landline sampling frames, and/or from RDD frames of cell phone numbers; and
2. The population being studied comprises households or residents of households.

Given the current limited state of knowledge, the discussion in this section raises many questions and concerns without being able to provide full and satisfactory guidance.

Initial Questions about Weighting RDD Cell Phone Samples
Those planning RDD telephone surveys in the U.S. that will include cell phone numbers in the sample, as well as researchers planning to analyze data from such surveys, should ask (and answer to their own satisfaction) a number of questions, including: 
  • Are weights needed?
  • If weights are required, how should the approach to weighting differ for different sample designs?
  • If weights are constructed, what variables should be used for post-stratification?
  • What other issues must be dealt with in weighting?  
  • If weights are to be used, what data does the survey need to collect to facilitate weighting?
Although these are questions all cell phone survey researchers should consider, to date there are not sufficient answers to them. Nevertheless, researchers who are using data gathered in RDD surveys in the U.S. that include cell phone numbers must answer such questions as best they can before they can decide how best to analyze their data. What follows is a discussion of issues to try to help researchers make these difficult decisions, until a time when more precise and definitive information about weighting in cell phone surveys becomes available.

Factors Affecting Answers to These Questions
Answers to the above questions depend on the population being studied and on the sample design used. These considerations apply when the study population comprises households, persons living in households, or a subset of those populations.
The household population can be divided into four groups based on telephone service: (a) those with landline service only; (b) those with cell phone service only; (c) those with both landline and cell phone service; and (d) those with no telephone service at all.
Different sampling designs that affect the approach to weighting include:
  • Samples that are selected for the survey from landline and cell phone RDD frames, but screening is done so that any member of the target population has a nonzero probability of selection from only one of the frames.
  • Independent samples that are selected from RDD frames that overlap in their coverage (e.g., a landline frame and a cell phone frame) and there is no screening; thus some members of the study population (those with both cell and landline service) have a nonzero probability of selection from more than one frame.
  • Studies in which only a landline RDD frame is used, but weighting adjustments are desired to account for the fact that the frame excludes the cell-only group and those with no telephone service.
  • Studies in which the sample is selected only from a cell phone RDD frame. 
When Are Weights Required?
Weights would almost always be required if both cell and landline RDD frames are used, especially if those respondents having both types of service are interviewed from both frames.

However, there are a few instances when it may be permissible not to use weights. For example, weights might not be needed in a sample that uses only one frame and no attempt is made to generalize about those who could only be contacted via the other frame. But even in these cases, weights usually should be constructed if there are non-trivial differences in the probabilities of selection or if there is differential nonresponse across various groups of the population.

Another occasion when weights may not be required arises when advances in telecommunications technology lead to new modes of survey administration. In this case, comparing unweighted data across the old and new modes becomes a logical first step in determining how findings may differ and whether or not weighting methods, particularly for post-stratification, need to be substantially revised.
During this current period of uncertainty and experimentation in surveying cell phone numbers in the U.S., it is vitally important for researchers to describe how they constructed any weights used in their analyses or, if they decided not to weight, the basis for that decision. By contrasting results across studies that use different procedures, the survey community can begin to determine which procedures most effectively adjust for the kinds of errors that inevitably occur during the survey process and that can be addressed by weighting.

Different Approaches to Weighting for Different Sampling Designs
The approach to weighting will depend on the design. For example, consider a survey using only an RDD cell phone frame that does not seek to make inferences about individuals not having cell phone service. For this survey, one would weight to reflect any differences in probability of selection and differences in response among groups (perhaps defined by service provider or region). Post-stratification could also be used if there are appropriate population values available.

Another example would be a survey that is conducted with an RDD landline sample, but where the target population includes the cell-only and non-phone subgroups. In this case, in addition to adjusting for different probabilities of selection and response rates, the weights could incorporate factors that would weight up those with landline service interruptions and could post-stratify on factors related to inclusion/exclusion on the RDD landline frame.

As a third example, consider a dual frame survey, without overlaps. Here, an RDD frame could be used to interview the landline-only group and those with both types of service, and an RDD cell phone frame could be used for those with only cell service. In this case, weighting adjustments would be made within each frame, and each would be post-stratified so that the sum of the weights for each was proportional to its share of the overall population.
However, if the survey is not national in scope, data for such post-stratification may not be available. In that case the researchers may have to rely on post-stratifying to characteristics known to be associated with phone ownership, such as age.
As a final example, consider a survey with overlapping RDD frames. This design presents the most difficulty in weighting. In addition to other weighting factors, the weights must account for the fact that some members of the target population (e.g., those with both cell and landline service) have a chance of being selected from either frame. Approaches to weighting such samples are still being refined and include (a) linear combinations (composite or “Hartley” estimators), (b) computing probabilities of selection so as to account for multiplicity, and (c) raking or post-stratification to estimated population totals for three usage groups (cell phone only; landline only; both cell phone and landline).
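To make the compositing approach (a) above concrete, the following sketch (in Python) shows how a Hartley-style composite factor might be applied to overlap cases in an overlapping dual-frame design. The mixing parameter and the weights shown are purely illustrative assumptions, not recommended values, and the function is a simplified stand-in for a full weighting system.

```python
# Hartley-style compositing for an overlapping dual-frame design.
# Respondents reachable from only one frame keep their base weight;
# overlap respondents (both landline and cell service) have their
# weight multiplied by LAMBDA if sampled from the landline frame and
# by (1 - LAMBDA) if sampled from the cell frame, so that the overlap
# domain is not counted twice when the two samples are combined.

LAMBDA = 0.5  # illustrative mixing parameter; optimal values depend on design

def composite_weight(base_weight, frame, has_landline, has_cell):
    """Apply the composite factor. frame is 'landline' or 'cell'."""
    overlap = has_landline and has_cell
    if not overlap:
        return base_weight
    if frame == "landline":
        return base_weight * LAMBDA
    return base_weight * (1 - LAMBDA)

# A landline-frame respondent who also has a cell phone:
print(composite_weight(1200.0, "landline", True, True))  # 600.0
# A cell-frame respondent with cell service only (no overlap adjustment):
print(composite_weight(900.0, "cell", False, True))      # 900.0
```

In practice, approaches (b) and (c) mentioned above replace or supplement this single factor with multiplicity-adjusted selection probabilities or with raking to estimated totals for the usage groups.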

Other Issues Affecting Weighting RDD Cell Phone Samples
Several issues have arisen in weighting cell phone surveys, especially those that have employed an overlapping dual frame (RDD landline and RDD cell phone) approach. These issues include:
1. Differential overall response between the samples from the two frames (nonresponse in the cell phone sample usually is higher than in the landline sample).
2. Differential response for mixed households (those that have both landline and cell service) between the samples from the two frames.
3. The fact that landline phones are considered to belong to a household, whereas U.S. cell phones are generally thought to belong to individuals (while sharing of cell phones does occur in a minority of U.S. households, no reliable estimates of this proportion have been reported).
4. The amount of information available about numbers on the RDD cell phone frame (e.g., geography and exchange level demographic estimates) is substantially less than that available for numbers on the RDD landline frame.
5. A lack of reliable data with which to post-stratify the RDD cell phone frame sample in the U.S. to population parameters. This issue is especially problematic for surveys of geographic areas that are less than national in scope.

Gathering Data Needed to Determine Cell Phone Status and for Post-Stratification
In RDD telephone surveys that include cell phone numbers, it is advisable to gather data likely to differ between groups defined by telephone ownership and usage, such as age. Also, one should use variables for which reliable external estimates are available.

For weighting to be done accurately, certain data must be available about the target populations’ parameters and the survey samples’ characteristics. Thus, U.S. telephone researchers need to gather data in their questionnaire to facilitate this process by measuring those sample characteristics needed for proper weighting to be possible.

As of yet there is no consensus regarding how RDD cell phone samples should be weighted, especially when combining them with RDD landline samples. Therefore, there is also no consensus on exactly what questionnaire items are needed to support this process. To date, a mix of measures has been employed in RDD cell phone surveys for this purpose, including:
  • Has the respondent been reached on a landline or a cell phone?
  • Does the respondent have a landline telephone? (In RDD landline samples, respondents should be asked if they have a cell phone.)  
  • Is the cell phone on which the respondent was reached used/answered only or mostly by the respondent? If not, how many other eligible persons use/answer the phone?
  • Does the respondent have other cell phones? Considering all their personal telephone usage, how much do they use each of them?
  • For respondents with both a cell phone and a landline phone, what proportion of all of their incoming telephone calls are taken via each type of phone service?
  • How often is their cell phone turned on/off?
  • Is the cell phone used primarily or only for business purposes; what portion of their usage is for incoming business versus incoming personal calls?  
Factors to consider in selecting items for weighting purposes include the sample design (cell phone only, dual frame with screening for cell-only households/persons, or dual frame without screening) as well as the weighting parameters. Weighting to external estimates is most effective when items in the questionnaire replicate as closely as possible the manner in which the data were gathered in the external survey. Researchers conducting national (and in some cases regional) surveys may consider using telephone service estimates from the National Health Interview Survey (NHIS).13  The NHIS features a large, national area-probability sample that covers both telephone and non-telephone households. Data collection for the NHIS is continuous throughout the year, and parameter estimates for telephone service are published twice yearly (in December for interviews completed from January-June, and in May for interviews completed from July-December). For surveys making inference to smaller geographic areas, satisfactory parameter estimates may not be readily available. As such, researchers who are conducting non-national telephone surveys of the general population must recognize that weighting to inappropriate parameter estimates may not improve survey estimates and, in some cases, may increase error.

13 http://www.cdc.gov/nchs/nhis.htm
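As a simple illustration of post-stratifying a combined sample to external telephone-usage estimates of the kind just described, the following sketch (in Python) computes a ratio adjustment factor for each of the three usage groups. All of the population and sample shares shown are hypothetical and are not actual NHIS figures.

```python
# Post-stratification of a combined dual-frame sample to external
# telephone-usage benchmarks. All shares below are hypothetical.

population_share = {"cell_only": 0.14, "landline_only": 0.16, "both": 0.70}
# Weighted sample shares after earlier weighting steps (hypothetical):
sample_share = {"cell_only": 0.08, "landline_only": 0.22, "both": 0.70}

# Ratio adjustment: population share divided by weighted sample share.
adjustment = {g: population_share[g] / sample_share[g] for g in population_share}

def post_stratified_weight(weight, group):
    """Multiply a respondent's current weight by the group's ratio factor."""
    return weight * adjustment[group]

for group, factor in adjustment.items():
    print(group, round(factor, 3))
# Cell-only respondents are weighted up (factor 1.75) because they are
# underrepresented; landline-only respondents are weighted down (~0.727).
```

This one-dimensional adjustment generalizes to raking when usage-group shares must be balanced simultaneously with demographic margins such as age, sex, and region.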

There are concerns about the reliability of many telephone service and usage questions.
For example, “landline telephone” is not a familiar term to everyone, and some respondents may confuse cordless landline telephones with cell phones. Estimating the proportion of calls made on a cell phone versus a landline phone may be very difficult for some respondents, or at least apt to lack precision.
There also are practical concerns regarding this battery of potential items. Considerable interviewing time would be required to be able to include all of these in a questionnaire, thereby likely necessitating a reduction in the number of substantive questions. These items are also apt to be uninteresting and potentially sensitive to many respondents, thus raising the chances for item nonresponse and even breakoffs.

These are matters that must be worked out so that valid measures can be used to gather the variables needed for weighting RDD cell phone respondents in the U.S. And yet, this must be done in ways that are reasonably feasible for researchers who need to conduct telephone interviews with those reached on cell phones.


Best Practices Recommendations
As can be seen in this report, a great deal of important new research remains to be conducted in the U.S. before telephone survey researchers can conduct RDD surveys of persons reached on their cell phones with the confidence in the findings that is expected by the users of those data.

In light of this, there are few recommendations this Task Force believes can be made with confidence at this time. However, as a result of the developments discussed in this report and as an interim step in applying survey methods to cell phones in the U.S., the AAPOR Cell Phone Task Force recommends the following disclosure-related considerations:
1. All telephone surveys should disclose whether or not the sample includes only landline numbers, only cell phone numbers, or both, and how the numbers were selected from their respective frames.
2. All RDD telephone surveys with samples that contain cell phone numbers should fully disclose how any weights have been constructed and what population estimates have been used to post-stratify, recognizing that many such parameters are not available at sub-national levels.
3. RDD telephone surveys targeting subgroups in the U.S. with substantial percentages of adults who live in cell phone only households (e.g., 18 – 29 year olds; renters; and those below the poverty threshold) should sample cell phone numbers or, if this is not feasible, discuss how excluding cell phone numbers may affect the results.


References and Additional Readings
Blumberg, Stephen J. and Julian V. Luke. 2007. “Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, July – December 2006.” http://www.cdc.gov/nchs/nhis.htm.
Blumberg, Stephen J. and Julian V. Luke. 2007. “Coverage Bias in Traditional Telephone Surveys of Low-Income and Young Adults,” Public Opinion Quarterly 71: 734-749.
Brick, J. Michael, Sarah Dipko, Stanley Presser, Clyde Tucker, and Yangyang Yuan. (2006). “Nonresponse Bias in a Dual Frame Sample of Cell and Landline Numbers,” Public Opinion Quarterly 70:780-793.
Brick, J. Michael, Pat D. Brick, Sarah Dipko, Stanley Presser, Clyde Tucker and Yangyang Yuan (2007). “Cell Phone Survey Feasibility in the U.S.: Sampling and Calling Cell Numbers Versus Landline Numbers,” Public Opinion Quarterly 71: 23-39.
Brick, J. Michael, Edwards, W. Sherman, and Sunghee Lee. 2007. “Sampling Telephone Numbers and Adults, Interview Length, and Weighting in the California Health Interview Survey Cell Phone Pilot Study,” Public Opinion Quarterly 71: 793-813.
Callegaro, Mario, Charlotte Steeh, Trent D. Buskirk, Vasja Vehovar, Vesa Kuusela, and Linda Piekarski. (2007). “Fitting Disposition Codes to Mobile Phone Surveys: Experiences from Studies in Finland, Slovenia, and the United States,” Journal of the Royal Statistical Society Series A 170:647-670.
Ehlen, John, and Patrick Ehlen. 2007. “Cellular-Only Substitution in the United States as Lifestyle Adoption: Implications for Telephone Survey Coverage,” Public Opinion Quarterly 71: 717-733.
Fleeman, Anna. 2006. "Merging Cellular and Landline RDD Sample Frames: A Series of Three Cell Phone Studies." Paper presented at the Second International Conference on Telephone Survey Methodology, Miami, FL.
Fleeman, Anna and Dan Estersohn. 2006. “Geographic Controls of a Cell Phone Sample.” Paper presented at the 62nd annual conference of the American Association of Public Opinion Research, Montreal, QC.
Groves, Robert M. and Katherine McGonagle. (2001). “A Theory-Guided Interviewer Training Protocol Regarding Survey Participation.” Journal of Official Statistics, Vol. 17 Issue 2, 249- 265.
Keeter, Scott. 2006. “The Cell Phone Challenge to Survey Research.” http://people-press.org/reports/display.php3?ReportID=276. Accessed October 22, 2006.
Keeter, Scott. 2006. "The Impact of Cell Phone Noncoverage on Polling in the 2004 Presidential Election." Public Opinion Quarterly 70:88-98.
Keeter, Scott, Kennedy, Courtney, Clark, April, Tompson, Trevor, and Mike Mokrzycki. 2007. “What's Missing from National Landline RDD Surveys? The Impact of the Growing Cell-Only Population,” Public Opinion Quarterly 71: 772-792.
Kennedy, Courtney. 2007. “Evaluating the Effects of Screening for Telephone Service in Dual Frame RDD Surveys,” Public Opinion Quarterly 71: 750-771.
Kuusela, V., Callegaro, M., & Vehovar, V. (2007). “The influence of mobile telephones on telephone surveys.” In J. Lepkowski, C. Tucker, M. Brick, E. De Leeuw, L. Japec, P. J. Lavrakas, M. Link & R. Sangster (Eds.), Advances in Telephone Survey Methodology. Hoboken, NJ: Wiley. 87-112.
Mayer, Thomas S. & Eileen O’Brien. (2001). “Interviewer Refusal Aversion Training to Increase Survey Participation.” Paper presented at the Joint Statistical Meeting of the American Statistical Association, Atlanta, GA.
Lepkowski, J. M., Tucker, C., Brick, M., De Leeuw, E., Japec, L., Lavrakas, P. J., Link, M., and Sangster, R. (2007). Advances in Telephone Survey Methodology. Hoboken, NJ: Wiley.
Lavrakas, Paul, and Charles Shuttles. 2005. "Cell Phone Sampling Summit II Statements on Accounting for Cell Phones in Telephone Survey Research in the U.S." Available online as of December 17, 2007 at http://www.nielsenmedia.com/cellphonesummit/statements.html.
Lavrakas, Paul J., Shuttles, Charles D., Steeh, Charlotte, and Howard Fienberg. 2007. “The State of Surveying Cell Phone Numbers in the United States: 2007 and Beyond,” Public Opinion Quarterly 71: 840-854.
Link, M., M. Battaglia, M. Frankel, L. Osborn, and A. Mokdad. (2007). "Conducting Public Health Surveys over Cell Phones: The Behavioral Risk Factor Surveillance System (BRFSS) Experience." Paper presented at the 62nd annual conference of the American Association for Public Opinion Research, Anaheim, CA.
Link, Michael W., Battaglia, Michael P., Frankel, Martin R., Osborn, Larry, and Ali H. Mokdad. 2007. “Reaching the U.S. Cell Phone Generation: Comparison of Cell Phone Survey Results with an Ongoing Landline Telephone Survey,” Public Opinion Quarterly 71: 814-839.
Shuttles, Charles D., Jennifer S. Welch, J. Brooke Hoover, and Paul J. Lavrakas. (2003). “Countering Nonresponse Through Interviewer Training: Avoiding Refusals Training ART II.” Paper presented at the 58th annual conference of the American Association of Public Opinion Research, Nashville, TN.
Steeh, Charlotte. 2005. “Quality Assessed: Cellular Phone Surveys Versus Traditional Telephone Surveys.” Paper presented at the 60th annual conference of the American Association for Public Opinion Research, Miami, FL.
Steeh, Charlotte and Linda Piekarski. 2007. “Accommodating New Technologies: Mobile and VoIP Communication.” In Advances in Telephone Survey Methodology edited by James M. Lepkowski, Clyde Tucker, J. Michael Brick, Edith de Leeuw, Lilli Japec, Paul J. Lavrakas, Michael W. Link, and Roberta L. Sangster. New York: Wiley; 423-448.
Steeh, Charlotte, Trent Buskirk, and Mario Callegaro. 2007. "Use of Text Messages in U.S. Mobile Phone Surveys." Field Methods 19:59-75.
Strayer, D. L., & Drews, F. A. (2007). “Cell-Phone induced driver distraction.” Current Directions in Psychological Science, 16, 128-131.
Strayer, D. L., Drews, F. A., Crouch, D. J., & Johnston, W. A. (2005). “Why do cell phone conversations interfere with driving?” In R. W. Walker & D. J. Herrmann (Eds.), Cognitive Technology: Essays on the Transformation of Thought and Society (pp. 51-68). Jefferson, NC: McFarland.
Tucker, Clyde, J. Michael Brick, and Brian Meekins. 2007. "Household Telephone Service and Usage Patterns in the United States in 2004: Implications for Telephone Samples." Public Opinion Quarterly 71:3-22.


Partly adapted from SSI’s “Glossary of Research and Sampling Terms for Internet, Telephone, and Mail Surveys.” http://www.surveysampling.com/glossaryterms.pdf
1000-banks or 1000-blocks              
See “Telephone Number Components”.
100-banks or 100-blocks                 
See “Telephone Number Components”.
Area Code                                    
See “Telephone Number Components”.
Autodialer
An electronic device that can automatically dial telephone numbers to communicate between any two points in the telephone network. Once the call has been established, the autodialer can provide verbal messages or transmit digital data (such as SMS messages) to the called party. A predictive dialer is a computerized system that automatically dials batches of telephone numbers for connection to interviewers or telemarketing agents; it can also reject numbers that do not make a connection. Predictive dialers are widely used in call centers. The FCC has placed a variety of restrictions on the use of such devices, including a general prohibition on using such computerized equipment to initiate calls to wireless devices (cell phones, pagers, etc.).
See “Telcordia/Bellcore”.
Cell Phone                            
A generic term for a portable wireless electronic device used for wireless communication. Current cell phones can support many additional services, such as SMS for text messaging, email, packet switching for access to the Internet, and MMS for sending and receiving photos and video. Outside the United States these devices are commonly referred to as mobile phones.
Cellular Phone
A form of wireless communication in which telephone calls connect to a cellular network of base stations (cell sites), which are in turn interconnected to the public switched telephone network. Most cellular phones operate in the 824-894 MHz frequency range. See “Cell Phone” and “Wireless”.
Dual Frame Sampling           
Occurs when a sample is selected from two potentially overlapping
frames. For example, a sample of listed telephone numbers supplemented with a RDD sample. The overlap, units that appear in both frames, must be identified and accounted for.
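As a hedged illustration (the frame names and telephone numbers below are invented for this sketch, not taken from the report), the overlap between two frames can be identified with simple set operations:

```python
# Illustrative sketch only: identifying the overlap between two
# sampling frames in a dual frame design. Numbers are invented.
listed_frame = {"2039291234", "2039295678", "2039260000"}  # listed-number frame
rdd_frame = {"2039295678", "2039267777", "2039260000"}     # RDD frame

overlap = listed_frame & rdd_frame        # units appearing in both frames
listed_only = listed_frame - rdd_frame    # units reachable via one frame only
rdd_only = rdd_frame - listed_frame

# Units in `overlap` have two chances of selection and must be
# down-weighted (or deduplicated) when the two samples are combined.
print(sorted(overlap))  # ['2039260000', '2039295678']
```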
Exchange
See “Telephone Number Components”.
Landline
A telephone line where communications travel through a solid medium, either metal (copper) wire or fiber optic cable. Synonymous with “wireline”.
Mobile Phone                        
Mobile is the common term used outside of the United States and Canada to refer to wireless services. Global System for Mobile Communications (GSM) phones are able to operate in three or four different frequency bands. GSM phones require SIM (Subscriber Identity Module) cards. These removable, interchangeable cards store the service-subscriber key used to identify a mobile phone and allow users to change phones by simply switching the SIM card from one mobile phone to another, or to change service providers on a single phone by switching SIM cards. Unlike most U.S. wireless services, GSM phones work almost anywhere in the world. See “Cellular” and “Personal Communications Service (PCS)”.
Numbering Plan Area (NPA)
See “Telephone Number Components”.
Personal Communications Service (PCS)
The name of the wireless service that uses the 1900-MHz radio band for digital mobile phone services in Canada and the United States. PCS services include both voice and advanced two-way data capabilities that are generally available on small, mobile multifunction devices. The FCC and other wireless industry representatives often refer to these services as "Mobile Telephone Services" and "Mobile Data Services." Many broadband PCS licensees offer these services in competition with existing cellular licensees.

POTS
Old Bellcore/Telcordia acronym for “Plain Old Telephone Service,” or telephone service carried over landlines as opposed to the airwaves (wireless).
Predictive Dialer                    
See “Autodialer”.
Prefix
See “Telephone Number Components”.
Random Digit Dialing (RDD)
A method of reducing sampling frame coverage error that involves dialing randomly generated numbers for a telephone survey, instead of relying on telephone directories or other lists of numbers that may exclude certain types of consumers.
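A minimal sketch of the idea, assuming hypothetical area-code/prefix combinations; this illustrates suffix randomization only, not the report's recommended sampling procedure:

```python
import random

# Hedged sketch (not the report's procedure): generate RDD numbers by
# appending a random 4-digit suffix to known area-code + prefix
# combinations. Randomizing the suffix reaches unlisted numbers that
# a directory-based list would miss. The prefixes are illustrative.
def rdd_sample(prefixes, n, seed=0):
    rng = random.Random(seed)  # seeded for a reproducible example
    return ["%s%04d" % (rng.choice(prefixes), rng.randrange(10000))
            for _ in range(n)]

sample = rdd_sample(["203929", "203926"], 5)  # five 10-digit numbers
```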
Suffix
See “Telephone Number Components”.
Telcordia/Bellcore
Formerly Bellcore, Telcordia is a telecommunications research and development (R&D) company based in the United States. Bellcore was a consortium established by the Regional Bell Operating Companies and the 22 local Bell Operating Companies upon their separation from AT&T in the 1984 divestiture of the Bell System.
Bellcore provided joint R&D, standards setting, and centralized government point-of-contact functions. Today Telcordia provides core software and services that communications service providers rely on to design, operate, and support their networks and to deliver and administer telecommunications services. Telcordia databases provide a wide variety of data associated with all elements of the switching network. Their Terminating Point Master database contains information on all prefixes and most 1000-blocks in use in the North American Numbering Plan. There is one record for each prefix with a block-id of A, referred to as an A-record; this record has information for the prefix as a whole and its parent operating company. In areas where 1000-block pooling has been mandated, there are additional records for each exchange, one for each 1000-block in use. The block-id is a number that designates the 1000-block (e.g., block-id 1 = 1000-block 1, covering all suffix numbers in the range 1000-1999).
Telephone Number Components

North American Numbering Plan (NANP) is the integrated telephone numbering plan covering the United States and its territories, Canada, Bermuda, and 16 Caribbean nations. It is a system of 10-digit numbers composed of a three-digit area code, a three-digit prefix, and a four-digit suffix.
Area Code is the term associated with the first three digits of a 10-digit telephone number; it allows communications networks to direct telephone calls to particular regions of the network, where they are further routed to local networks. It is also known as the NPA or Numbering Plan Area. An area code can cover an entire state, a city, or part of a city. In certain areas of the plan, multiple area codes can service the same area (overlays).
Prefix is the term associated with the second set of three digits of a 10-digit telephone number. This set of numbers allows communications networks to direct calls to more local areas within the larger area code. Each prefix has been assigned to a single Telephone Operating Company, a company that has been licensed by the FCC to provide telecommunications services over the Public Switched Telephone Network. Every prefix has 10,000 possible phone number combinations (0000-9999).
Suffix is the term associated with the final four digits of a 10-digit telephone number. This set of numbers allows communications networks to direct calls to the switch associated with the end user. The suffix can be further segmented into blocks or banks of consecutive numbers.
1000-blocks or 1000-banks are blocks of 1,000 consecutive suffix numbers starting with the same digit (e.g., 0000-0999). Within a prefix, 1000-blocks can be assigned to telephone operating companies other than the company responsible for the prefix.
100-blocks or 100-banks are blocks of 100 consecutive suffix numbers starting with the same two digits (e.g., 1100-1199). Analysis of listed telephone numbers in 100-blocks is used to create list-assisted telephone frames.
Exchange is a term that is frequently used in place of the term prefix, but an exchange is actually the geographic area serviced by a prefix or set of prefixes. For example, 203-929 and 203-926 are two of the many prefixes that service the Huntington, CT exchange area. Prefixes are numbers, but exchanges are usually associated with a place name such as Huntington, CT. Exchanges usually have a single building where all the wires in all the prefixes come together and from which calls are directed to and from users in those prefixes. A set of geographic coordinates associated with this building has traditionally been used to determine calling areas and the cost of making a phone call (local vs. long distance). For this reason, exchanges are sometimes referred to as Billing Centers, Rate Centers, or Wire Centers. In prefixes that have multiple service providers and different types of service (POTS + cellular + broadband), individual 1000-blocks may have place names and rate center coordinates that are different from those associated with the prefix.
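The components defined above can be illustrated by decomposing a sample 10-digit number; the function name and example number are hypothetical, not from the report:

```python
# Hedged illustration of NANP number components. The function name
# and example number are invented for this sketch.
def components(number):
    assert len(number) == 10 and number.isdigit()
    return {
        "area_code (NPA)": number[0:3],
        "prefix": number[3:6],
        "suffix": number[6:10],
        "1000-block": number[6],     # first suffix digit
        "100-block": number[6:8],    # first two suffix digits
    }

c = components("2039291234")
# area_code (NPA) '203', prefix '929', suffix '1234',
# 1000-block '1', 100-block '12'
```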
Text Messaging
A telecommunications protocol that allows the person-to-person sending of "short" (160 characters or fewer) text messages. It is available on most digital mobile phones and on some personal digital assistants with wireless telecommunications. The individual messages that are sent are called text messages, SMSes, texts, or txts.
VOIP
Voice Over Internet Protocol. VOIP providers essentially reroute phone calls over the Internet. VOIP service (cable, DSL, etc.) is still primarily landline service, and VOIP numbers are normally assigned in landline prefixes. VOIP companies provide a special modem connected to the Internet into which subscribers plug a regular landline phone. Some wireless carriers offer VOIP using a specially equipped cell phone assigned a number from their set of cellular prefixes. Thus VOIP is not a separate mode but can be incorporated in either service.
Subscribers can keep their existing phone number and switch it to VOIP. Under certain circumstances they may be able to keep their VOIP number when moving to a different area code, and they can obtain an in-bound telephone number from a different area code through the use of what is commonly referred to as a virtual phone number. Thus there is a potential loss in geographic precision, a problem that is also characteristic of cell phone numbers. At the moment there are no indications that subscribers treat their VOIP phone differently than they would treat a landline or cellular phone. For this reason VOIP numbers can be dialed as regular landline or cell numbers.

Wireless
A telephone service where communications travel through the airwaves rather than over wire or fiber optic cable. This term is regularly used in the telecommunications industry, particularly by government agencies, when referring to non-landline telephone service, and includes cellular, PCS, mobile, and paging services. See “Cellular” and “Cell Phone”.
Wireline
Synonymous with “landline”. This term is regularly used in the telecommunications industry, particularly by government agencies, when referring to landline telephone service.