American Association for Public Opinion Research (AAPOR)
The leading association of public opinion and survey research professionals


Mobile Technologies for Conducting, Augmenting and Potentially Replacing Surveys: Report of the AAPOR Task Force on Emerging Technologies in Public Opinion Research
 
April 25, 2014
 
Report Authors:
 
Michael W. Link, Co-Chair, Nielsen
Joe Murphy, Co-Chair, RTI International
Michael F. Schober, New School for Social Research
Trent D. Buskirk, Marketing Systems Group
Jennifer Hunter Childs, U.S. Census Bureau
Casey Langer Tesfaye, American Institute of Physics
 
Additional Task Force Members:
 
Mario Callegaro, Google
Jon Cohen, SurveyMonkey
Elizabeth Dean, RTI International
Paul Harwood, Twitter

Josh Pasek, University of Michigan
Michael Stern, NORC
 
Acknowledgements: We thank Kathy Ashenfelter for her work on an earlier draft of this report.
 
Table of Contents
 
Executive Summary  
 
1.0 Background
 
1.1 AAPOR Council Charge and Report Focus  
1.2 AAPOR Reports on Related Topics  
 
2.0 Mobile Technologies and Survey Research
 
2.1 Mobile Online Surveys
2.2 Experiments with Mobile Surveys  
2.3 SMS and MMS Technologies  
2.4 Mobile Survey Design Considerations  
 
3.0 Beyond Surveys - Other Potential Data Collection Features
 
3.1 Location/Geopositioning (GPS)  
3.2 Scanning / QR and Barcode Readers  
3.3 Visual Data Capture  
3.4 Bluetooth-Enabled Devices and Related Technologies  
3.5 Mobile Applications as Infrastructure for Multimode Data Collection  
 
4.0 Privacy Considerations
 
 
5.0 Future Research
 
 
6.0 Conclusion
 
 
References
 

Executive Summary
 
Public opinion research is entering a new era, one in which traditional survey research may play a less dominant role. The proliferation of new technologies, such as mobile devices and social media platforms, is changing the societal landscape across which public opinion researchers operate. The ways in which people both access and share information about opinions, attitudes, and behaviors have gone through perhaps a greater transformation in the last decade than at any previous point in history, and this trend appears likely to continue. The rapid adoption of smartphones and the ubiquity of social media are interconnected trends that may provide researchers with new data collection tools and alternative sources of information to augment or, in some cases, provide alternatives to more traditional data collection methods. However, this brave new world is not without its share of issues and pitfalls – technological, statistical, methodological, and ethical.

As the leading association of public opinion and survey research professionals, AAPOR is uniquely situated to examine and assess the potential impact of these “emerging technologies” on the broader discipline and industry of opinion research. In September 2012, AAPOR Council approved the formation of the Emerging Technologies Task Force with the goal of focusing on two critical areas: smartphones as data collection vehicles and social media as platform and information source. The purposes of the task force are to:
  • define and delineate the scope and landscape of each area;
  • describe the potential impact in terms of quality, efficiency, timeliness and analytic reach;
  • discuss opportunities and challenges based on available research;
  • delineate some of the key legal and ethical considerations; and
  • detail the gaps in our understanding and propose avenues of future research.
The report here examines the potential impact of mobile technologies on public opinion research – as a vehicle for facilitating some aspect of the survey research process (e.g., recruitment, questionnaire administration, reducing burden, etc.) and/or augmenting or replacing traditional survey research methods (e.g., location data, visual data, and the like).1

1 Note that a companion report, entitled “Social Media in Public Opinion and Survey Research” (available at www.aapor.org), provides an overview of social media technologies, recognizing that some mobile technologies involve social networking and that social media are often accessed via mobile devices.
 
USE OF MOBILE DEVICES
The emergence of mobile devices -- with a host of integrated features including voice, photography, video, text, email, GPS, apps, and others -- has opened the door to a new generation of measurement tools for those who study public opinion, attitudes and behaviors as well as other sociological phenomena. From a mobile device ownership perspective, estimates as of mid-2013 show that 91% of adults in the United States own a cellular telephone, with 61% of adults indicating they own a “smartphone” (Smith, 2013). Additionally, 35% of Americans aged 16 or older own a tablet computer (like an iPad or Galaxy). Sixty-three percent of adult cell phone owners report using their phones to go online, with 34% of these cell internet users saying they go online mostly using their cell phone (Duggan and Smith, 2013).

In terms of coverage and use, however, mobile technology adoption is not uniform across the population. Smartphone ownership, for example, tends to be highest among those with incomes above $75,000 (78%), the college educated (70%), and younger adults (~80% of those aged 18-34 own a smartphone; Smith, 2013). Additionally, different types of people prefer or purchase different types of phones, with Android models being more popular among younger adults and African-Americans and iPhone and Blackberry being more prevalent among the college educated and those with higher incomes (Smith, 2013). Because these phone models run on different operating systems, these differences have an impact on how surveys are displayed on a mobile device, the types of features they support, and the ways in which people interact with the phone. In other words, the mobile world is a messy one from a measurement perspective, with differential coverage and usage across the population and various platforms being accessed by these individuals.
  
MOBILE SURVEYS
One thing that does seem clear is that the use of mobile phones for survey research and related data collection is not simply an extension of previous methodologies, but combines elements of traditional computer-assisted interviewing (CAI) systems, online data collection, and additional, new elements. For instance, smartphones and tablets can support the administration of surveys in a number of ways -- online/web surveys, application (or “app”)-based surveys, voice interviews, or interviews via text messaging. All of these modes can operate in ways similar to their traditional phone/PC/laptop-based counterparts, but they have their own challenges on mobile devices, for example with regard to screen size and usability, variability of display or functions across different operating systems, as well as the ability to collect data in different modes and formats (such as text, email, visuals, GPS, Bluetooth-enabled devices, etc.) within a single survey.

The current state of knowledge about the dynamics of mobile surveys is less advanced than is needed for a full understanding; different studies have used different populations of respondents and have deployed mobile surveys with different features, so understanding exactly what may be comparable and generalizable is an evolving picture. In addition, the fact that mobile adoption and technological experiences are changing so quickly makes it challenging to know how a finding from three years ago would apply to the exact same population today, much less a different population. The report offers, therefore, a set of suggestions (as opposed to a more firmly established set of “best practices”) to guide researchers in the conduct of mobile surveys based on lessons learned as well as studies deployed on current technology platforms:
  
  • Match the Tools and Task to the Respondents: Given that people differ in the types of technologies they adopt and their familiarity and ability to use these technologies, researchers need to carefully consider which technologies to deploy with their population of interest (Link and Buskirk, 2012). Some studies have shown, for example, that persons aged 50 and older, particularly those with little or no regular experience using a smartphone, have greater difficulty utilizing certain types of mobile data collection apps (Link, Lai and Bristol, 2013). In making a decision on the best technology or mode to use, researchers are encouraged to consider (a) the data that need to be captured, (b) fit with the target population’s skills and ability to respond via this mode, and (c) the best way to optimize presentation of the survey content.
  • Follow Established Guidelines for Contacting Cell Phones: When considering the collection of survey data via a mobile device, many of the concerns are the same as those related to telephone interviews conducted via cell phones. These include, but are not limited to: ensuring that the respondent is in a safe location (e.g., not driving), that they are able to speak or utilize the data entry features of mobile data collection privately (for confidentiality reasons), and that the respondent is in the area and time zone expected at the time of sampling.
  • Recognize That If You Are Conducting Online Surveys, You Are Conducting Mobile Surveys: A non-ignorable and growing percentage of respondents are now accessing online surveys via their mobile browsers (with estimates ranging from 8-23% depending on the study), resulting in higher abandonment rates and potentially greater measurement error among these mobile respondents. Mobile optimization of surveys is recommended, as is the utilization of paradata -- in particular user agent strings -- which allow researchers to know the type of platform and browser being used by the respondent and to direct her or him to the most appropriate version of the survey.
  • For Survey Length/Layout/Format, Keep It Short and Simple: While there is no clear “cut off” when it comes to length, the adage “shorter is better” applies to mobile surveys for a number of reasons: (1) screen size and keyboard/touchpad considerations can make it more difficult to complete a survey (at least comfortably and without error); (2) respondents are used to making regular, but brief uses of their smartphones (e.g., texting, looking up directions, scrolling through apps) and so shorter surveys fit more naturally with the way in which the devices are normally utilized; and (3) when using browser-based surveys, connectivity can be an issue when respondents encounter mobile “dead spots.” Additionally, given the relatively small screen size of mobile phones, researchers cannot easily utilize many of the question formatting styles often seen in computer-based online surveys, such as grid questions or long lists of response options. Limited screen size also requires compromise and judgment when using logos, progress bars, disclaimers, and help links in mobile surveys.
  • Understand the Limits and Nuances of Mobile as a Multimode Platform: Smartphones contain a number of features that can be utilized for survey data collection, including voice to support more traditional CATI interviewing; SMS texting for survey invitations, communication with respondents, and short surveys; online surveys accessed via a mobile browser; and app-based surveys (Link and Buskirk, 2012). Smartphones also support an array of other data collection techniques, such as GPS/location, barcode/QR code scanning, visual data capture, and Bluetooth device communication. Researchers need to educate themselves about both the technological aspects of these features, as well as the benefits, challenges, and potential errors in using these various features for collecting respondent data.
  • Remember that Pretesting is Essential! -- As with any survey or data collection tool, pretesting of the instruments used is a must. This includes everything from the user interface, to the functionality of the instrument or app, to the quality and completeness of the data exported for analysis -- and testing these elements across various operating systems and smartphone models. Although there is a great deal of functionality that is consistent across smartphones, there is enough inconsistency to require extensive pretesting across multiple platforms. Smartphone emulators can help with this testing.

BEYOND SURVEYS
While much of the research to date on the use of mobile technologies for data collection has focused on administering surveys via mobile devices, there is a wide array of applications and features available on these devices that can augment and in some cases even replace survey data. In many respects, smartphones and tablets can be considered “multimode” platforms because they facilitate more than one form of data collection. The report examines five key technologies that are currently in use by researchers to extend or replace certain aspects of survey data collection:
 
Location / Geopositioning (GPS): Information about the physical location of a smartphone presumably also pertains to the respondent. GPS is the technology most commonly used by researchers to identify a respondent’s location via a mobile device and track his/her movements or travel. The data captured may include specific places, routes taken from one location to another, distances travelled, and timing of travel. Before GPS, such studies had typically been conducted using self-reports by respondents via a series of recall questions or via an activity, time-use, or transportation diary. Responses to self-reported recall studies are subject, however, to a number of potential biases, including faulty memory, refusal to provide an accurate response, survey context and format, and social desirability effects. In contrast, GPS on a mobile device can provide more complete, accurate and timely data than self-reported methods. GPS is also being tested with field staff as part of quality control efforts to ensure interviewers are going to the correct sampled addresses and taking optimal travel routes, with varying degrees of success (Olson and Wagner, 2013).
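To make the distance component concrete, here is a minimal sketch of how total distance travelled might be computed from a sequence of GPS fixes using the haversine great-circle formula; the coordinates and names are illustrative, not drawn from any particular study.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometers between two GPS fixes."""
        r = 6371.0  # mean Earth radius in km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical track of (latitude, longitude) fixes logged by a respondent's phone.
    track = [(38.8977, -77.0365), (38.8895, -77.0353), (38.8893, -77.0502)]

    # Total distance travelled is the sum of the legs between consecutive fixes.
    total_km = sum(haversine_km(*a, *b) for a, b in zip(track, track[1:]))
    print(f"Distance travelled: {total_km:.2f} km")

In practice, raw GPS traces typically also require cleaning (dropping low-accuracy fixes, smoothing jitter) before such summaries are meaningful.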
 
Scanning / Quick Response (QR) & Barcode Readers: Barcodes and related technologies have been available and utilized by survey researchers for quite some time using specialized scanning devices, initially for inventory and logistical record-keeping (such as paper form check-in) but more recently for use in respondent recruitment and even measurement systems. With the prevalence of QR and barcode reader apps for smartphones, these codes can be used for a broader array of respondent-enabled data collection activities, such as collecting information on consumer goods or other items containing a barcode or directing respondents to a URL or website for additional study information, study registration, or even an online survey.
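As a simple illustration of the recruitment use, the sketch below generates a scannable code pointing to a study registration page. It assumes the third-party Python qrcode package (installed with Pillow support), and the URL is hypothetical.

    import qrcode  # third-party package: pip install "qrcode[pil]"

    # Hypothetical study URL; scanning the printed code opens the registration page.
    url = "https://example.org/mobile-study/register"

    img = qrcode.make(url)        # returns a PIL image containing the QR code
    img.save("study_invite.png")  # embed in mail invitations, posters, flyers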
 
Visual Data Capture: Capturing photos or video to accompany and enhance survey findings is not a new concept; however, the digital technology revolution has made the capture of visuals far easier, quicker and less expensive. Because mobile phones are constantly carried, camera phones allow respondents to capture visual data at any time. Additionally, the rapid growth of social media platforms such as Instagram, Facebook and Twitter has led to dramatic changes in societal behavior, and users are increasingly comfortable capturing, editing and sharing photos and videos. Collecting visual data in conjunction with survey information can serve several useful functions: (a) adding context to survey data; (b) providing information or referents that can be coded; (c) providing a potential means of improving respondent engagement with a survey; and (d) serving as a training tool for respondents participating in long-term or more complex data collection efforts.
 
Bluetooth-Enabled Devices and Related Technologies: Smartphones can extend or augment survey data collection in their ability to wirelessly connect to an array of external devices, transmitting data of various types back and forth. The most commonly used technology standard for this kind of high-speed data transmission is Bluetooth. Smartphones are increasingly used in health and medical studies as a conduit for receiving biohealth information from portable medical devices (e.g., blood pressure, glucose and pulse oximeter monitors, weight scales) and mobile sensors (e.g., physical activity; accelerometer counts, heart rate, respiration rate, pulse pressure via chest or armbands, wireless electrodes; Gregoski et al., 2012). Sensing technologies integrated into smartphones have also been used to reduce measurement error in studies of health and environmental exposure.
 
Mobile Applications as Infrastructure for Multimode Data Collection: Mobile software applications, or “apps,” are now an integral component of every smartphone. In contrast to using a browser on a mobile device (the “mobile web”), apps can typically take greater advantage of the native capabilities of a smartphone, such as the camera, microphone, GPS, scanner, etc., and pull them together into a single interface. In essence, mobile apps can serve as the interface and skeletal infrastructure for a multimode data collection device. The use of multimode data collection apps is not simply the next stage in the evolution of CAI, but rather a species unto itself, with elements of CAI but also a new set of user expectations. When we consider that much of the population now has considerable experience with mobile apps, this experience can be expected to shape their expectations for how an app should operate, including ease of use, intuitive interface, speed, usefulness or utility, and use of native smartphone features. Researchers need to be aware of these societal trends and expectations and develop data collection tools that are in line with (or not too far out of line from) these emerging norms.
 
PRIVACY
Researchers cannot delve into the world of mobile technologies without an understanding of some of the privacy implications these technologies may have for both respondents and the researchers themselves. While public behaviors and attitudes about data privacy may be complex and at times contradictory, the path for researchers is clear: we need to ensure the protection of our respondents’ private data through every phase of our studies -- and beyond. Protecting privacy is not a simple, one-time process, but a complex, on-going endeavor. Even when a study is completed, there may be data or residual information that requires protecting. The clear path for researchers is to design and implement studies that continually protect respondent information and to be aware of the evolving norms and concerns in the geographic areas and among the populations they are studying.

The difficulty in regulating privacy today is the speed of technological innovation, which makes large volumes of information easier to access, but also makes understanding how data is being shared and how to control it more difficult for members of the population. It is the researchers’ responsibility to protect the information collected from respondents and to inform respondents about the potential risks of the data collection effort and uses of the data.  At times this begins with researchers’ developing a better understanding of the complexities of the data collection task and the risks associated with the technologies used to complete those tasks.

Use of smartphones for data collection raises other specific issues when researchers utilize some of the additional functionality of the mobile devices, for example, the use of GPS to identify a respondent’s location and track his or her movements. Location is a form of personally identifying information because it is a part of the respondent’s physical context. Likewise, the collection of visual data via mobile devices raises some new challenges for researchers, such as the risk of inadvertent exposure of personally identifiable information; location identification via geotags embedded in many digital picture files; or the chance that others, who have not given consent for the study, are captured in the photo -- either directly or in the background. In the end, as with any study, researchers should follow the philosophy of “Do No Harm,” developing study designs, protocols, and technologies that ensure respondents will not be harmed in any way or adversely affected as a result of participating (knowingly or unknowingly) in the research. Our research (and business) depends on the good will and trust of the public -- it is every researcher’s obligation to protect that trust.
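To make the geotag risk concrete, a minimal sketch of one defensive intake step is shown below: it flags respondent-submitted photos that carry an embedded GPS geotag and saves a copy with all EXIF metadata dropped before storage. It assumes the Pillow imaging library; the file names are placeholders.

    from PIL import Image  # Pillow imaging library: pip install Pillow

    GPS_IFD_TAG = 34853  # standard EXIF tag ID for the "GPSInfo" (geotag) block

    def strip_metadata(in_path, out_path):
        """Warn if a photo carries a GPS geotag, then save a copy without EXIF data."""
        img = Image.open(in_path)
        if GPS_IFD_TAG in img.getexif():
            print(f"{in_path}: embedded GPS geotag found; stripping before storage")
        # Rebuilding the image from pixel data alone drops the EXIF block entirely.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(out_path)

    strip_metadata("respondent_photo.jpg", "respondent_photo_clean.jpg")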
 
FUTURE RESEARCH
Based on our review of the current state of the field, there are some areas that should be highlighted for future research:
 
  • Despite the many concerns related to adopting mobile technologies for data collection, many of the longstanding principles researchers need to consider are likely to persist (Schober and Conrad, 2008). These include minimizing coverage error, sampling error, and measurement error, as well as specific areas like reducing the likelihood of respondents’ least-effort and satisficing strategies, promoting accurate comprehension of the data collection task, and weighing alternative single-mode and multimode data collection designs. In other words, many of the concerns in today’s environment are the same as in prior years and will require ongoing inquiry.
  • Focusing on the widespread utility of mobile devices for data collection, there is still a question as to whether mobile is a niche methodology. It does appear to be a requirement for covering the increase in people taking online surveys via mobile devices and for specialty panels, but does it offer modes of collection robust enough for a general population survey? This remains to be seen.
  • One area of great promise with mobile is the ability to capture data “in-the-moment,” including brief surveys via text, mobile web or app, pictures or videos, scanning information, GPS and the like. This has been an area of some research already, but with somewhat mixed results. Several key questions are still unanswered, such as: Does capturing survey data at the time of a certain behavior or thought result in better quality data than those obtained via a recall survey? If so, at what time interval is in-the-moment better than recall -- hours, days, or weeks after the event of interest? And does in-the-moment capture actually lead to greater non-response if viewed as more of a burden or disruption by the respondent?
  • There is also a need for more assessments of auxiliary data collection capabilities -- GPS, scanning, visual data, and wireless devices connected to mobile devices, to name a few. While much is known about the mechanics of these various technologies, there is little published about the uses of these tools as data collection devices to augment or replace surveys or specific survey items. Such studies are needed to assess respondent cooperation and compliance, data quality, and potential sources of error.
  • There is a need to develop best practices based on a growing number of methodologies being used, yet there are still few clear findings to guide such an effort. Among the studies that exist, there are mixed findings on many issues. This is due in part to different study designs, but also due to changes in mobile technology over time as well as “societal learning” and growing comfort with these devices and their many features. It is important to keep in mind that rapid changes in the technology itself may confound evaluation of findings from separate studies appearing within a 1-2 year time frame.
  • Finally, the field requires a better understanding of the growing concerns related to privacy and security of data transfers with mobile technologies. These are needed not only to protect respondents but also to craft more understandable and effective consent procedures, statements of risks, and similar documentation.
 
We are in an era of rapid and continuous change that shows no signs of abating. Mobile technologies provide not only opportunities and challenges for researchers, but are also changing the very attitudes, opinions, behaviors and expectations of those we study. These new specialized tools also have their own “rules,” many of which researchers are still trying to figure out. Use is very dependent (as with any measurement tool) on what we want to know and need to measure -- learning to apply the right technology to the problem at hand. In utilizing these new approaches, educate yourself! Then share your findings and lessons with the field -- that is how we all learn and progress in this new era.

REPORT

1.0 Background
 
Public opinion research is entering a new era, one in which traditional survey research may play a less dominant role. In-person, telephone, mail, and web surveys have thus far been the major tools for capturing and measuring opinions, behaviors and attitudes, but they may no longer be the only options, and the field may be undergoing a more radical transformation. Major transitions in the field are rare, but not new. For the early part of its history, the conduct of opinion research was very much a manual effort. Surveys were largely conducted either face-to-face or via mailing lists. Questions and responses were recorded on paper forms, which were then key-entered after the fact. Fielding periods, therefore, tended to be longer and the potential for error (particularly human error) was high. In the mid-1970s, the personal computer revolution ushered in the Computer Assisted Interviewing (CAI) era. Computer software provided researchers with a number of critical advantages that transformed the methods of the discipline: longer, more complex questionnaires, built-in range checks and other quality control features, automatic delivery and tracking of samples, and near immediate access to data for analysis. The CAI era changed the way nearly every mode of survey was conducted (mail, field, telephone) in one manner or another and also introduced new ways of collecting survey data, such as via the Internet and audio computer-assisted self-interviewing (ACASI). Today, the proliferation of new technologies, in particular mobile devices and social media platforms, is bringing about a similar transformation in public opinion research.
 
Rapid advancements in communications and database technologies are changing the societal landscape across which public opinion and survey researchers operate. In particular, the ways in which people both access and share information about attitudes, opinions, and behaviors have gone through perhaps a greater transformation in the last decade than at any previous point in history, and this trend appears likely to continue. The rapid adoption of smartphones and the ubiquity of social media are interconnected trends that may provide researchers with new data collection tools and alternative sources of information to augment or, in some cases, provide alternatives to more traditional data collection methods. For instance, mobile app-based tools can provide “in-the-moment” data throughout the day, including location and “trigger-based” survey data, respondent location via the Global Positioning System (GPS), and collection of visual data, while social networking systems such as Facebook and Twitter are potential platforms for primary data collection as well as rich sources of opinion data for secondary analysis. However, this brave new world is not without its share of issues and pitfalls – technological, statistical, methodological, and ethical.
 
1.1 AAPOR Council Charge and Report Focus
 
As the leading association of public opinion and survey research professionals, AAPOR is uniquely situated to examine and assess the potential impact of these “emerging technologies” on the broader discipline and industry of opinion research. In September 2012, AAPOR Council approved the formation of a task force to assess the opportunities and challenges emerging mobile and social media technologies might have on the fields of public opinion and survey research.

The AAPOR Emerging Technologies Task Force was first convened in October 2012 with the goal of focusing on two interconnected areas: smartphones as data collection vehicles and social media as platform and information source. These areas appear “ripe” for investigation, given that (1) each has widespread visibility and recognition within the industry as important new areas of development, (2) each area is already having an effect in many quarters of the survey discipline and related fields, and (3) there is sufficient initial empirical information within each area to allow us to begin assessing the relative merits and drawbacks of these potential approaches. The purposes of the task force are as follows:
  • define and delineate the scope and landscape of each area;
  • describe the potential impact in terms of quality, efficiency, timeliness and analytic reach;
  • discuss potential opportunities and challenges based on the empirical research available to date;
  • delineate some of the key legal and ethical considerations; and
  • detail the gaps in our understanding and propose avenues of future research.  
The task force did not delve deeply into detailed operational “how to” lessons, unless they were germane to assessing the research in these areas. Future task forces may choose to explore these types of details more fully as operational procedures become more mature.
 
This report focuses on the role of mobile technology in the collection and understanding of opinions, attitudes and behaviors. Note that a companion report, entitled “Social Media in Public Opinion and Survey Research” (available at www.aapor.org), provides an overview of social media technologies, recognizing that some mobile technologies involve social networking and that social media are often accessed via mobile devices. The report here examines the potential impact of mobile technologies on survey research – as a vehicle for facilitating some aspect of the survey research process (e.g., sampling, recruitment, questionnaire administration, reducing burden, etc.) vs. augmenting or replacing traditional survey research methods (e.g., location data, visual data, analyses of social exchanges, and the like).

In terms of organization, the remainder of the report has four basic sections: (1) conducting surveys using mobile devices like smartphones and tablets; (2) use of mobile features (GPS, scanning, visuals, Bluetooth, and apps) to supplement survey research through the collection of auxiliary data; (3) ethical and legal considerations for research in this area; and, (4) directions for research that may further our understanding of the uses of these emerging technologies in public opinion research.

This report is designed to inform those who study public opinions, attitudes and/or behaviors or have an interest in public opinion research, including those involved in the collection and/or analysis of data as well as policymakers, members of the media, and the general public. The report should be viewed as a “living document” in that it represents the state of the discipline at a particular point in time. Given the incredible speed of change in this area, how quickly new technologies are being developed, and the level of on-going research, both theoretical and practical, that is currently underway, we fully anticipate that the report will require updating or even more extensive revision in the future.
  
1.2 AAPOR Reports on Related Topics
 
This report overlaps some of the ground covered by previous AAPOR Task Force reports, most notably:
  • Cell Phone Task Force Report (2010)
  • Opt-in Online Panel Task Force Report (2010)
  • Non-probability Sampling Task Force Report (2013)
Where possible, we have attempted to reduce any redundancies with these prior efforts, except in places where there is either new information or where it is critical to the understanding of issues and points raised in this report. We encourage those interested in these other areas to view the other reports for more details. Each is available on the AAPOR website at www.aapor.org.
 
2.0 Mobile Technologies and Survey Research
The widespread availability and rapid adoption of mobile technologies2 (feature phones, smartphones, tablet computers, and a range of other new devices) has broadened both the opportunities and the challenges for collecting opinion, attitude and behavioral information but has also changed the ways in which people acquire information and behave in many instances. Estimates as of mid-2013 show that 91% of adults in the United States own a cellular telephone, with 61% of adults indicating they own a “smartphone” (Smith, 2013). Additionally, 35% of Americans aged 16 or older own a tablet computer (like an iPad or Galaxy) and one-in-four say they own an e-reader, such as a Kindle or Nook (Rainie and Smith, 2013). Sixty-three percent of adult cell phone owners report using their phones to go online, with 34% of these cell internet users saying they go online mostly using their cell phone (Duggan and Smith, 2013).

In terms of coverage and use, however, mobile technology adoption is not uniform across the population. Smartphone ownership, for example, tends to be highest among those with incomes above $75,000 (78%), the college educated (70%), and younger adults (~80% of those aged 18-34 own a smartphone; Smith, 2013). Additionally, different types of people prefer or purchase different types of phones, with Android models being more popular among younger adults and African-Americans and iPhone and Blackberry being more prevalent among the college educated and those with higher incomes (Smith, 2013). Because these phone models run on different operating systems, these differences have an impact on how surveys are displayed on a mobile device, the types of features they support, and the ways in which people interact with the phone. In other words, the mobile world is a messy one from a measurement perspective, with differential coverage and usage across the population and various platforms being accessed by these individuals.

The emergence of mobile devices -- with a host of integrated features including voice, photography, video, text, email, GPS, apps, and others -- has opened the door to a new generation of measurement tools for those who study public opinion, attitudes and behaviors as well as other sociological phenomena (Raento, Oulasvirta, and Eagle, 2009). For example, mobile devices have been used for such varied research activities as:
  • collecting in-the-moment surveys, which have the potential to reduce recall bias;
  • capturing complex behaviors, such as consumer expenditures, location and timing;
  • detailing locations and routes used in travel and transportation studies;
  • providing enhanced educational assessment tools for studies of students;
  • recording health journals in a patient’s own voice;
  • mapping environmental hazards and potential exposures; and
  • allowing direct measures of physical activity and health measures.
The use of these devices to collect information for research purposes is, however, still in its nascent stage. Standards or common “best practices” for conducting research using mobile devices are still in development (Weber et al., 2008). As detailed in the report below, there is some early work to suggest that more traditional survey administration over mobile phones, whereby written questions are presented to respondents in a web browser or survey app, follows many of the same heuristics that apply to questionnaire design for other survey modes (Tourangeau, Couper, and Conrad, 2004). And yet, there is also evidence that these new mobile-based modes have their own “rules” as well.

One thing that does seem clear is that the use of mobile phones for survey research and related data collection is not simply an extension of previous methodologies, but combines elements of traditional CAI systems, online data collection, and additional, new elements. For instance, smartphones and tablets can support the administration of surveys in a number of ways -- online/web surveys, application (or “app”)-based surveys, voice interviews, or interviews via text messaging. All of these modes can operate in ways similar to their traditional phone/PC/laptop-based counterparts, but they have their own challenges on mobile devices, for example with regard to screen size and usability, variability of display or functions across different operating systems, as well as the ability to collect data in different modes and formats (such as text, email, visuals, GPS, Bluetooth-enabled devices, etc.) within a single survey.
 
Mobile devices vary significantly when viewed as data collection platforms. There is great variability in terms of the form-factor or outer casing/shape, the size of the phone and accompanying screen, resolution, operating system, features supported (voice quality, cameras, video, speed of connectivity), touch screen versus tactile keyboard entry, and type of scripting allowed (e.g., Adobe Flash, JavaScript, etc.). Depending on data and voice plans and vendors, as well as the respondent’s physical location, network connectivity can vary in ways that have the potential to affect data collection, far beyond questions about connectivity for landline telephones or desktop computers.

The fact that respondents’ circumstances and environments have the potential to vary so much more than in previous data collection modes adds another layer of complexity. Understanding these similarities and differences with more conventional methodologies is critical for the proper use and leveraging of mobile devices for surveys and related data collection.

2 When we refer here to “mobile technologies” or devices we include “smartphones” and/or “tablets” as well as “feature phones.” Feature phones generally refer to devices that are lower-end mobile phones compared to higher-end smartphones. The differences are primarily in the operating systems (Apple iOS for iPhones, Google’s Android OS, HP’s webOS, and Microsoft’s Windows Phone were top competitors at the time of this report), processing speed, and capabilities. Tablets, in contrast, tend to be somewhat larger, flatter and operate more as portable computers equipped with touch screens as the primary input device and running a modified desktop OS. There are also a host of new devices that fall somewhere between smartphones and tablets in terms of size and capabilities.
 
2.1 Mobile Online Surveys
 
Conducting surveys of the general public via mobile devices is not as easy a proposition as it might first appear. Attempts to actively encourage respondents to move from laptop/desktop-based Internet to mobile devices for survey completion have not met with notable success (Callegaro and Macer, 2011; Link, Lai, and Bristol, 2013). For example, Millar and Dillman (2012) offered three test groups of students different modes for completing a survey: traditional web, web-based smartphone, and choice of web or mail. They found that very few students actually elected to take the survey on their smartphones. Likewise, in a survey of the general public conducted with the Gallup Panel, McGeeney and Marlar (2013) found that adding mobile to a mail and web-based survey did not significantly increase response rates or change the demographic composition of the resulting respondent pool. Their study suggests, therefore, that individuals may not be ready to transition to mobile survey responses en masse, but rather may do so based on their own preferences and situational context. However, there is a growing trend of survey respondents completing, or attempting to complete, online surveys on mobile devices, whether or not the survey administrator wants this to happen (Buskirk, 2013).
 
The current state of knowledge about the dynamics of mobile surveys is less advanced than is needed for a complete theory; different studies have used different populations of respondents and have deployed mobile surveys with different features, so understanding exactly what may be comparable and generalizable is an evolving picture. In addition, the fact that mobile adoption and technological experiences are changing so quickly makes it challenging to know how a finding from three years ago would apply to the exact same population today, much less a different population. Nonetheless, we see value in summarizing what has been observed thus far.

Exploring some of the factors related to participation in a mobile survey, Bosnjak, Gottfried, and Graff (2010) found that enjoyment was a key factor motivating mobile survey participation. Exploring other potential factors, Walton, Buskirk and Wells (2013) reported that participation in a mobile survey (deployed via a downloadable app) involves some traditional survey considerations (incentives and survey length) as well as new concerns (the amount of personal information required at registration and the requirement that an app be downloaded in order to take the survey). The research team also found that, in general, respondents had a higher preference for computer and tablet surveys compared to smartphone and paper-and-pencil surveys (Buskirk, Walton, and Wells, 2013). This finding may be mediated by a respondent’s survey preferences and prior computer-based survey experience. Conrad and colleagues (2013) conducted an experiment on mode choice comparing use of text messaging versus voice interviewing via a mobile phone, with each mode offered via either a human interviewer or an automated interviewing system (a two-by-two design). They found that allowing mode choice on mobile produced less rounding, less straightlining, fewer breakoffs and greater respondent satisfaction with the interviewing process.
 
Break-offs, that is, dropping out of a survey before it is completed, seem to be a greater issue for mobile modes than for their computer-based counterparts. At a more fundamental definitional level, in textual modes it is less clear than in a voice survey when to consider a non-response a break-off. Thus far the evidence is that break-offs are more of a concern in mobile surveys compared to surveys taken on a laptop/desktop (Bosnjak, Poggio, and Funke, 2013; McGeeney and Marlar, 2013; Buskirk and Andrus, 2012b). In an experiment comparing web, mobile-web and tablet users, Wells, Bailey and Link (2012) reported that 5.3% of mobile web respondents dropped out of the survey, compared to 0.9% of PC web users. Interestingly, break-off rates for tablet users (2.9%) were closer to PC web than mobile. This finding is similar in pattern to findings reported by other researchers (Buskirk and Andrus, 2012a; Peterson, 2012; Guidry, 2012).
 
Conversely, despite the challenges of using mobile as a primary mode for survey administration, a growing percentage of respondents are accessing surveys that were designed for administration on desktop or laptop computers on their mobile devices -- so-called “unintended mobile respondents” (Peterson, 2012). The percentage can vary considerably across studies. One report found that the proportion of respondents responding via mobile web rose from less than 4% in early 2011 to more than 8% in 2012 (Comer and Saunders, 2012), a finding similar to that reported by others (Bosnjak, Poggio, and Funke, 2013). Another study reported that approximately 23% of respondents who own a smartphone completed a survey in an online experiment via mobile rather than traditional online platforms, despite the researchers’ efforts to instruct and re-direct respondents to a desktop or laptop system (Wells, Bailey & Link, 2012a). Other research has also highlighted the steady rise in unintended mobile respondents among online survey panels (Buskirk, 2013). The important lesson here is: if you conduct online surveys, you are already in the “mobile space,” and this could have an impact on the quality of the data collected.

The evidence is that most organizations conducting online surveys are not yet prepared to deal with the challenge of unintended mobile respondents. A 2012 study of 230 research companies in 36 countries that conduct online surveys found that a majority either had no policy about optimizing surveys for mobile browsers (30%) or allowed studies to be conducted without modifying the surveys (32%); just 15% indicated that they actually made modifications to the online survey to make it easier for mobile respondents (Macer, 2012). Callegaro (2010) notes that there are four basic ways in which researchers can deal with this issue: (1) make no changes to the online survey and simply deal with the potentially increased nonresponse and measurement errors; (2) block mobile respondents from taking the survey (using a splash page to redirect them); (3) optimize the online survey so it displays correctly on a mobile platform; or (4) create an online version that is accessible via any device. Each of these approaches has certain benefits and drawbacks.

Making no changes to the online survey is the path of least resistance and is often the default position, as the researchers simply allow the survey to display on mobile devices in whatever manner each individual operating system provides. There are no additional monetary costs, and the research will likely pick up at least a few more respondents via this channel than if mobile browser users were blocked. However, most surveys not optimized for mobile require some effort by the respondent simply to view the survey, such as screen pinching, zooming, and scrolling (Buskirk and Andrus, 2012a). This approach requires a persistent Internet connection, which can be difficult in some locations or if the respondent is on the move. Also, many features, such as images, can be displayed disproportionately in terms of size. Difficulties in viewing media content may extend to videos as well (Mendelson, Gibson and Romano Bergstrom, 2013). These issues increase the likelihood of measurement error or item non-response as respondents press incorrect buttons, become frustrated and skip questions, or worse, opt out of the survey altogether. Callegaro (2010) showed in several studies that drop-off rates due to this increased burden can vary considerably, reaching as high as 25-70%.

Blocking or warning mobile respondents, by either preventing a mobile browser from accessing the survey or displaying a screen that directs the user to a personal computer or laptop to complete the survey, is another method of dealing with unintended mobile respondents. Paradata in the form of “user agent strings” (that is, encoded information about a respondent’s device, such as the type [computer, phone, tablet, etc.] and browser) can be captured by the researcher when a mobile device tries to access an online survey (Callegaro, 2010). This information can be used to route mobile users to a “splash screen” or notification with further instructions for completing the survey in a different manner. Blocking mobile browsers from online surveys is relatively easy and inexpensive to implement. The disadvantage is that it can increase non-response -- markedly. For example, one study found that 80% of panelists who encountered such a re-direct did not complete the survey (Buskirk and Andrus, 2012a). Moreover, because younger adults are more likely to own smartphones and try to access online surveys via these devices, blocking can lead to differential non-response, further exacerbating the general problem of obtaining responses from younger cohorts. McClain, Crawford, and Dugan (2012) also explored this respondent behavior and reported that nearly 58% of those who did switch when reaching a re-direction splash page did so before entering the actual questionnaire; however, the median time between encountering the page and starting the survey was 17.6 hours, indicating that quite a few respondents did not immediately complete the survey but rather did so at a later time.
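As a rough sketch of how a captured user agent string might drive this kind of routing, the code below applies a simple keyword heuristic to classify the device and return an appropriate survey URL. The patterns are illustrative only -- production systems generally rely on maintained device-detection libraries -- and the URLs are placeholders.

    import re

    # Keyword heuristics; check tablet patterns first since iPad strings contain "Mobile".
    TABLET = re.compile(r"iPad|Tablet", re.I)
    MOBILE = re.compile(r"Mobile|Android|iPhone|iPod|Windows Phone", re.I)

    def route_respondent(user_agent: str) -> str:
        """Return the survey URL best suited to the device named in the user agent."""
        if TABLET.search(user_agent):
            return "https://example.org/survey/desktop"  # tablets handle full layouts
        if MOBILE.search(user_agent):
            return "https://example.org/survey/mobile"   # mobile-optimized version
        return "https://example.org/survey/desktop"

    ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) "
          "AppleWebKit/537.51.1 (KHTML, like Gecko) Mobile/11A465")
    print(route_respondent(ua))  # -> https://example.org/survey/mobile

The same classification can instead route mobile users to a splash screen, or simply be logged as paradata for later analysis of device effects.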
 
Optimizing online surveys so that they display more favorably on the smaller screens of mobile devices is a more proactive way of dealing with this issue, and it is becoming more commonplace. When done correctly, mobile survey optimization can ensure that graphics and images are in their proper proportion; reduce the need for horizontal scrolling; provide more control over the number of questions presented per “page;” allow the survey to utilize some of the native or more familiar features often associated with smartphones; and optimize the question layout to make it easier for the respondent to read and respond. Several online survey packages (such as SurveyMonkey and QuestionPro) have built-in options for publishing mobile versions of online surveys, making the conversion task a bit easier. But we note that these types of survey packages often provide a “one size fits all” approach to optimization, which in practice is actually a “one size fits many” approach, as the results will vary when rendered on different mobile operating systems (Buskirk, 2013). Stapleton (2011) reported that use of an optimized mobile survey reduced the rate of breakoffs among survey participants. Optimization does not, however, solve all issues -- such as the need for a persistent Internet connection, up and down scrolling, or the fact that there may still be some variability across operating systems in terms of how a screen is displayed and the functionality of the built-in device features. It also requires additional time and development costs.

Developing a survey-based application that resides on the mobile device itself is another alternative for conducting mobile surveys. This solution relies on an operating system-specific application to push and upload content. Like the mobile-optimized version of a web survey, survey app-based solutions provide researchers with greater control over the display of a questionnaire (such as orientation and the need for scrolling or zooming) as well as more options for communicating with respondents. Images and video content in a questionnaire can be displayed in a more reliable manner. A major benefit is that, because the app resides locally on the phone, it does not require a persistent Internet connection for the respondent to complete the survey. The approach also facilitates the use of other smartphone features, such as use of GPS for location and the capture of pictures, videos or audio by the respondent. The development of a robust data collection app for survey use is not, however, an easy undertaking. It often requires multiple versions, each geared towards the specific operating systems used by the respondents. This also has the effect of limiting which respondents can utilize the app, as researchers are often forced by time and cost considerations to develop apps for either a single operating system or just a small number of operating systems. Because the distribution of operating systems on in-use mobile devices contains a mix of the latest operating system version along with prior versions that can date back two or three years, app development for survey research purposes often has to use older versions of the mobile operating systems as a “least common denominator” for development (Link and Buskirk, 2012; Buskirk, 2013). However the app is developed, offering it as a survey method requires the respondent to download and install the app -- another step in the survey process that can lead to unit nonresponse (Link, Lai, & Bristol, 2012). As an alternative, Buskirk and Andrus (2012a) developed an “app-like” approach, which utilized a web-based survey instrument but was programmed with the look and feel of an on-phone app. This approach avoids some of the pitfalls of local app solutions, in particular the need for multiple versions as well as the download requirement, but it does require a persistent Internet connection.
 
2.2 Experiments with Mobile Surveys
 
Extensive development and testing of mobile surveys have been underway for the past several years. Much of the initial work focused on determining whether and how classic mode effects, question response option effects, and nonresponse effects observed in other modes replicate on mobile devices with smaller screen size, different information displays, and different respondent navigation and response behaviors. Some findings from studies to date include:
  • Early research suggested that limited screen size and unfamiliarity with mobile data entry could increase measurement error (Fuchs, 2008). The exact point of contact on the screen can vary based on the pressure and size of the finger used, orientation, and technical specifications of the screen interface. Combined with the smaller screen, there would appear to be a greater likelihood that respondents could press one response when in fact they meant to press a neighboring response option.
  • Other research examining low versus high frequency scales found that mobile respondents behave similarly to how they behave with other modes (Peytchev and Hill 2010; Wells, Bailey & Link 2012a). Also, as in other modes, larger text boxes (versus smaller) tend to lead respondents to provide more characters in open-ended responses (Wells, Bailey & Link, 2012a).
  • Response-order effects, for example, the increased likelihood of selecting a response option that is earlier in a list of potential responses, can potentially be heightened on mobile devices due to differences in the visibility of response options (Peytchev and Hill, 2010). However, when lists are kept shorter, there is little evidence of primacy effects (Wells, Bailey & Link, 2012a). Similarly, some researchers have noted evidence of a greater likelihood of non-differentiated (straight-lined) responses (particularly down the left side of a grid) in mobile versus PC versions of a survey (McClain, Crawford, and Dugan, 2012). However, other studies have shown no differences in response-order effects in mobile surveys; for example, using an access panel in Russia, Mavletova (2013) reported finding no stronger primacy effects in the mobile web survey mode and that mobile web was associated with a similar level of socially undesirable and non-substantive responses as compared to PC online.
  • Images on a mobile device may be presented disproportionately to screen size; however, this does not seem to have a negative effect on responses provided (Peytchev and Hill, 2010). Noting this result, Buskirk and Andrus (2012c) conducted an experiment in which a series of app icon images were provided on a single screen to iPhone and computer respondents. All images were optimized for display on mobile devices and there were no significant differences noted in completion times across the two modes.
  • Regarding formatting, the evidence thus far shows that the visual design of an online survey on a mobile screen may have quite different effects (on navigation and completion) than it does on a desktop. For example, Stapleton (2011) reported higher response and fewer breakoffs in a mobile survey when simpler radio buttons were used to record responses rather than drop-down boxes. That same study also found that paging, or putting a single question per page, did not seem to affect breakoff rates.
  • Different mobile survey designs rely on network connectivity in different ways, which can affect completion time and perceived respondent burden. If a design requires frequent network contact to send high-bandwidth data (such as new questions, graphics, responses), this will often take more time. If a design minimizes network time, this may speed the interaction and reduce burden yet limit what can be shared with respondents. Adding to the complexity, respondents on mobile devices are likely to be mobile (i.e., not in a stationary location) and often multitasking, which changes the dynamics of their survey participation. For example, mobile surveys are often found to take longer to complete than identical surveys completed via PC (Saunders, Chrzan, and Luck, 2012; McGeeney and Marlar, 2013). However, one study reported the opposite finding with a survey that limited the number of questions to two per screen for mobile and four for desk/laptop browsers (Buskirk and Andrus, 2014).
In terms of responses to open-ended questions, results appear to have changed over time. In one of the earliest studies (Peytchev and Hill, 2010), researchers found respondents were less likely to provide short open-ended responses when given the option not to. They concluded that open-ended responses in mobile-based studies may be more problematic than in other modes; however, they acknowledged that this finding may be an artifact of the study design, which provided mobile phones to individuals who may not have been familiar or comfortable with their use. Two other studies conducted several years later, however, found that respondents do not appear to shy away from offering open-ended responses, provided the responses are brief (Wells, Bailey & Link, 2012a; Buskirk et al., 2011). While these differences in findings may be due to differences in the experimental designs or the text-inputting interfaces in the types of phones tested, it is important to note that the “text messaging revolution” took place during this intervening period as well, with a large proportion of the adult public learning and utilizing text messaging on a regular basis. This “societal learning” could help to explain the change in behavior among respondents over this fairly short (less than five-year) timeframe. It also highlights another important point: as technology changes, so too do public behaviors. As mobile technology continues to evolve, therefore, we should expect that the behaviors of the general public will also follow suit, and this can have a significant impact on how researchers design data collection efforts and the tools they choose to utilize.

In addition to more traditional point-in-time online surveys, researchers have also begun to use mobile devices in place of activity diaries, whereby respondents record a series of activities over a given period of time. Because users typically have their phones with them throughout the day, these devices can be utilized to capture data “in-the-moment,” reflecting both respondent behaviors and attitudes (Bailey et al., 2011; Lai et al., 2010; Scherpenzeel, Morren, et al., 2012; Graham and Cobb, 2013; Runyan et al., 2013). Link (2013) reported high compliance across a number of indicators (survey completion, photo requests, respondent mood, activity reporting) in a four-week repeated-measures study of media and consumer activities during the 2010 World Cup games. Likewise, Scagnelli and colleagues (2012) reported good compliance with a smartphone-based study in which respondents were asked to log information about snacks and other quick-consumable purchases by completing a brief survey, taking a photo of the items purchased, scanning the UPC code, and checking in (capturing GPS coordinates) at the place of purchase.

The evidence thus far on data quality for “in-the-moment” vs. retrospective reporting is, however, mixed. For example, Shea and colleagues (2013) compared response rates and self-reported accuracy across four different conditions -- two capturing “in-the-moment” data via SMS or mobile, and two in which reports were made at the end of the day, one via mobile and the other via a more traditional web survey. They found participation to be highest for the web end-of-day condition, followed by the SMS in-the-moment group. Accuracy, however, appeared highest for the two end-of-day groups compared to the in-the-moment conditions. They attributed some of the more negative in-the-moment findings to a combination of users’ lack of familiarity with the methodology and technical issues. Conversely, Graham and Cobb (2013) found that compliance declined over time among a group using a 7-day “in-the-moment” mobile diary compared to those using a more traditional “previous day” reporting methodology; however, they found that the in-the-moment mobile diary tended to produce greater numbers of reports and, therefore, was presumably less prone to nonresponse or activity omissions. Likewise, Johnson and Shea (2012) noted issues with “in-the-moment” data collection, such as being disruptive to a respondent’s daily patterns and, hence, potentially biasing the participant pool toward those who are less busy. They also noted lower reporting levels because, at times, it was not convenient for the respondent to make an accurate entry. They recommended utilizing a hybrid of “in-the-moment” collection with the capability for retrospective entry -- an approach utilized by Link, Lai and Bristol (2013), which resulted in greater compliance than standard PC web entry.

It is entirely possible that the quality of in-the-moment vs. retrospective reporting will vary across different content domains and participant populations, and that there will be complex interactions with participant motivations and incentives. We see this area as ripe for further exploration.
 
2.3 SMS and MMS Technologies
 
Use of mobile phones for data collection also allows researchers to make use of texting features. Short Message Service (SMS) utilizes standardized protocols to allow the exchange of short messages to and from mobile devices. It is the most widely used mobile data application in the world and works on either a feature phone or a smartphone. Multimedia Messaging Service (MMS) allows users to send and receive messages with multimedia content (images, videos, audio); with this service, both the sender and the receiver must have MMS capability. Web interfaces and email can now often be used to send messages to SMS- and MMS-capable devices, providing a greater range of ways for researchers and respondents to communicate and interact via mobile devices.

Researchers have begun to use SMS and MMS in different ways, including:
 
  • Reminding respondents to reply to a mail survey. In one early study, the technique had only a marginal impact on improving response (Virtanen, Sirkia, & Wurmele 2005), but the use of text messaging has changed substantially in the intervening years;
  • Contacting panelists whose cell phones were turned off or did not have voice mail activated and demonstrating some success in getting respondents to call back (Callegaro 2002);
  • Delivering short self-administered surveys (e.g., Down & Drake 2003; Schober et al., 2012) -- for instance, many new social and market research firms are relying heavily on text messaging surveys for large-scale data collection;
  • Informing researchers about the working status of a mobile phone to improve efficiency of contacting (Steeh, Buskirk, & Callegaro 2007; Buskirk, Callegaro and Rao, 2010).
In one of the more comprehensive comparisons of text versus voice interviewing (utilizing both automated and human versions of each), Schober and colleagues (2012) reported that while text surveys may take longer to administer, they can lead to higher completion rates, greater respondent satisfaction, and improved data quality (fewer rounded answers, less straightlining, and more disclosure of socially undesirable behaviors). This may be because text messaging interviews allow respondents to answer when it is most convenient for them, with less pressure to respond immediately (and thus potentially more time to answer thoughtfully), and because the social presence of the interviewer can be weaker in text than in voice.
 
Researchers have also used SMS as a way of collecting “in-the-moment” information, with the hope of reducing recall bias. In one such study, researchers used SMS to collect data on physical activity and exercise for an activity diary over a 5-day period (Brenner & DeLamater 2012). When compared to administrative records for facilities use, the SMS data were found to be a valid and reliable way of collecting such information. Similarly, Andrews, Bennett & Drennan (2011) used a combined web and SMS survey to collect repeated measures of consumers’ emotional experiences using mobile phones in everyday life. They found the approach to be a valid and reliable means of collecting consumer insights that would have been difficult or impossible to capture with standard recall surveys. In another study, Kuntsche & Robert (2009) found that SMS questions could be used to assess alcoholic drink consumption over time more effectively than a morning-after Internet-based recall study and that sending the text question -- even during known drinking times -- did not change the reports received. Anhøj and Møldrup (2004) used SMS as a means of generating diary-type information from asthma sufferers, reporting that the approach produced a better response rate than relying on a more traditional Internet-based mode. In short, the use of the technology as a means of collecting short “in-the-moment” reports shows promise -- if used in the proper context with the right types of respondents.
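
To make the mechanics concrete, a minimal sketch of dispatching such an in-the-moment SMS prompt is shown below. It assumes the third-party Twilio messaging service purely for illustration; the credentials, phone numbers, and question wording are hypothetical placeholders, and comparable SMS gateways would work similarly.

```python
# Minimal sketch of dispatching a short in-the-moment SMS prompt via the
# third-party Twilio REST client (pip install twilio). All credentials,
# numbers, and question wording below are hypothetical placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxx"  # hypothetical credential
AUTH_TOKEN = "your_auth_token"          # hypothetical credential

client = Client(ACCOUNT_SID, AUTH_TOKEN)

def send_prompt(respondent_number: str) -> str:
    """Send one short survey question; returns the provider's message id."""
    message = client.messages.create(
        body=("Quick question: did you exercise in the past hour? "
              "Reply YES or NO. Reply STOP to opt out."),
        from_="+15551230000",           # study's dedicated number (placeholder)
        to=respondent_number,
    )
    return message.sid

# send_prompt("+15557654321")
```

Note the opt-out language in the message body, consistent with the regulatory guidance discussed in Section 4.0.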
 
2.4 Mobile Survey Design Considerations
 
Compared with mode studies, published works and conference presentations on the more “nuts and bolts” aspects of using mobile devices for data collection are less prevalent. Additionally, because the technology in this area is quickly changing and the public’s behaviors continually evolve with each wave of new mobile technology, it is difficult to provide a list of “best practices” that will stand the test of time. Instead, we offer suggestions based on lessons learned from both practical experience (of the report authors and others) and studies deployed on current technology platforms. Readers should be aware of this limitation and interpret these as guidance rather than hard-and-fast rules.

(1) Match the Tools and Task to the Respondents: Given that people differ in the types of technologies they adopt and their familiarity and ability to use these technologies, researchers need to carefully consider which technologies to deploy with their population of interest (Link and Buskirk, 2012). Some studies have shown, for example, that persons aged 50 and older, particularly those with little or no regular experience using a smartphone, have greater difficulty utilizing certain types of mobile data collection apps (Link, Lai and Bristol, 2013).
 
Moreover, in the mobile world, “difficulty” is a more nuanced concept than simply “burden defined as survey length.” It can include disruptiveness and the burden of having to provide a response based on physical location or situational context; in other words, it is also about how conveniently, and with how little disruption, participating in the survey fits with the respondent’s habits and other activities. In making a decision on the best technology or mode to use, researchers are encouraged to consider (a) the data that need to be captured, (b) the fit with the target population’s skills and ability to respond via this mode, and (c) the best way to optimize presentation of the survey content.
 
(2) Follow Guidelines Established for Contacting Cell Phones: When considering the collection of survey data via a mobile device, many of the concerns are the same as those related to data collection via other modes, particularly telephone interviews conducted via cell phones. These include, but are not limited to: ensuring that the respondent is in a safe location (e.g., not driving), that they are able to speak (or utilize the data entry features of mobile data collection) privately (for confidentiality reasons), and that the respondent is in the area and time zone expected at the time of sampling. The principle is simple: researchers should not encourage respondents to complete surveys or other data collection tasks if respondents are in an unsafe location or engaged in activities where responding to a survey may put them at risk of harm. (For more information on contacting respondents via cell phones, please see the AAPOR Cell Phone Task Force Report, available at www.aapor.org).
 
(3) Mobile is a Platform Supporting Multiple Modes: Although referred to as a “phone,” smartphones contain a number of features that can be utilized for survey data collection. These include voice to support more traditional CATI interviewing (it is a phone!); SMS texting for survey invitations, communication with respondents (e.g., to set up an in-person interview), and short surveys; online surveys accessed via a mobile browser; and app-based surveys (Link and Buskirk, 2012). As noted in subsequent sections of this report, smartphones also support an array of other data collection techniques, such as GPS/location, barcode/QR code scanning, visual data capture, and Bluetooth device communication. Researchers need to educate themselves about the technological aspects of these features, as well as the benefits, challenges, and potential errors in using them to collect respondent data.
 
(4) If You Are Conducting Online Surveys, You Are Conducting Mobile Surveys: As noted in a previous section, a non-ignorable and growing percentage of respondents now access online surveys via their mobile browsers (Morgan Stanley, 2010; Return Path, 2013). The impact of this rise is clear: abandonment rates among online respondents who access a non-mobile-optimized survey via their mobile device are high -- in two recent independent studies, as high as 25% (Buskirk, 2013). Mobile optimization of surveys is certainly recommended and is becoming easier to accomplish using a number of off-the-shelf online software products that provide publishing options for mobile browsers. The collection of paradata -- in particular, user agent strings -- allows researchers to know the type of platform and browser being used by the respondent and to direct her or him to the most appropriate version of the survey. Optimization should decrease the likelihood of both breakoffs (nonresponse) and measurement error.
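
As a simple illustration of using user agent paradata for routing, the sketch below applies a rough regular-expression heuristic; production systems typically rely on a maintained device-detection database, and the survey URLs here are hypothetical.

```python
import re

# Rough mobile-detection heuristic based on common user agent tokens;
# production systems typically use a maintained device database instead.
MOBILE_PATTERN = re.compile(r"Mobi|Android|iPhone|iPad|Windows Phone", re.I)

def survey_url_for(user_agent: str) -> str:
    """Route the respondent to the mobile-optimized or standard survey."""
    if MOBILE_PATTERN.search(user_agent or ""):
        return "https://example.org/survey/mobile"    # hypothetical URL
    return "https://example.org/survey/desktop"       # hypothetical URL

# Example: an iPhone user agent string is routed to the mobile version.
ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51"
print(survey_url_for(ua))  # -> https://example.org/survey/mobile
```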
 
(5) Form/Format of Survey Invitations - Brief and Focused: Emails and SMS texts are often used to provide invitations to or information about mobile browser surveys. To make these effective, researchers need to keep the messaging simple and straightforward (Link and Buskirk, 2012). Place the key information (e.g., survey sponsor, incentive) at the beginning of the email subject line. Place the survey link as early as possible in email communications to avoid scrolling on the mobile phone. Minimize the length of the survey’s web address and the number of special characters, to make it easier for a respondent to type it into a web browser if they cannot or choose not to click the link directly (Buskirk and Andrus, 2012a). Also note that as more emails are opened on mobile devices relative to desktops and laptops, the form of the email invitation may also need to be optimized for viewing on mobile devices. For example, a start button coded with an embedded link may work in regular email, but the rendering might require pinching and zooming to properly access the start button when the email is opened on a mobile device (Buskirk, 2013).
 
(6) Survey Length/Layout - Short and Simple: The issue of survey length (the amount of time and potential burden it takes a respondent to complete a survey) is one that cuts across survey modes. While there is no clear “cut off” when it comes to length, the adage “shorter is better” applies to mobile surveys for a number of reasons. First, screen size and keyboard/touchpad considerations make it more difficult to complete a survey (at least comfortably and without error) than on larger devices like personal computers, laptops, and even tablets. Second, respondents are used to making regular but brief use of their smartphones (e.g., texting, looking up directions, scrolling through apps), so shorter surveys fit more naturally with the way these devices are normally utilized. Third, when using browser-based surveys, connectivity can be an issue when respondents who are actively mobile at the time of the survey encounter “dead spots.” Loss of connectivity can lead to loss of data and potentially loss of respondent interest or follow-through in completing the interview. Keeping mobile surveys brief increases the likelihood of capturing better and more complete data from a respondent (Okazaki, 2007; Tarkus, 2009).

Optimizing an online survey for completion in a mobile browser is something of an art, but several notable conventions can make it easier for respondents to complete. The overall layout should minimize the need for scrolling (either horizontally or vertically) to the extent possible. The number of questions per screen should generally be two or fewer to minimize scrolling. Survey layout should also minimize the need for pinching or zooming (some devices do not support zooming at all).
 
(7) Question Formats - More Limited Than Other Modes: Given the small screen size of mobile phones, researchers cannot easily utilize many of the question formatting styles often seen in computer-based online surveys, such as grid questions or long lists of response options. On a mobile device, sets of questions asked online as a grid may need to be reformatted as a series of single questions, or perhaps two questions per screen (a brief sketch of this reformatting appears below). McClain and Crawford (2013) found that breaking grid sets of questions into individual questions on a mobile device appears to increase the likelihood of respondents reporting sensitive behaviors, thereby representing a quality improvement. Response options should be organized vertically to avoid horizontal scrolling. Allow adequate space between response options, particularly when touchscreens (iOS, Android, etc.) can be used, because options listed too close together may make it physically difficult for respondents to select the proper option. Ensure that response options are clearly visible (i.e., all on the initial screen) or, when longer lists are necessary, make it clear that the respondent needs to scroll down for additional options. Additionally, note that some of the more sophisticated ways of recording a response in traditional online surveys (such as slidebars, drag-and-drop approaches, or drop-down boxes) may be much more difficult for respondents to utilize on a mobile device and should be avoided. Finally, for open-ended questions, take conventions from texting and provide response boxes of approximately 140 characters. Also keep in mind that voice-to-text options are becoming more widely available on mobile devices; this option may facilitate open-ended answer entry when the expected responses are sentences rather than words. Providing instructions for using this option may be helpful for respondents.
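
The grid reformatting described above can be illustrated with a short sketch that flattens a grid into self-contained single questions and then pages them at no more than two per screen; the question content is hypothetical.

```python
# Hypothetical sketch: flatten a grid question into single items, then
# page the result at no more than two questions per mobile screen.
GRID = {
    "stem": "How satisfied are you with each of the following?",
    "items": ["Your phone", "Your carrier", "Your data plan"],
    "options": ["Very satisfied", "Somewhat satisfied",
                "Somewhat dissatisfied", "Very dissatisfied"],
}

def flatten_grid(grid):
    """One self-contained question per grid row, options listed vertically."""
    return [{"text": f"{grid['stem']} -- {item}", "options": grid["options"]}
            for item in grid["items"]]

def paginate(questions, per_screen=2):
    """Split a question list into screens of at most `per_screen` items."""
    return [questions[i:i + per_screen]
            for i in range(0, len(questions), per_screen)]

for n, screen in enumerate(paginate(flatten_grid(GRID)), start=1):
    print(f"Screen {n}:", [q["text"] for q in screen])
```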
 
(8) Direction for Screen Navigation Differs on Mobile: Limited screen size on mobile devices requires compromise and judgment when using logos, progress bars, disclaimers, and help links in mobile surveys. Tools such as progress bars should be used sparingly and only if deemed to have a substantial impact on improving respondent cooperation throughout the survey. When used, researchers should consider abbreviated or more limited versions of such bars. Placement of next and back buttons can cause confusion because the conventions and placement vary across commercial apps on different mobile operating systems. Typically, these buttons should be placed in accordance with the conventions of the operating system used on the phone.
 
(9) Use of Multimedia Requires Careful Consideration: Smartphones support a number of forms of media -- audio, video, pictures -- which can be used for recruitment, respondent training, or components of the survey itself, as well as for data elements captured by respondents as part of the survey task (for instance, taking a picture of a place or object in conjunction with a series of questions about that place or object). When using graphic images, researchers should consider the appropriate proportions for the expected screen resolutions of mobile devices. A major problem with online surveys that are not optimized for display on mobile devices is that pictures can be distorted or displayed disproportionately on the screen. Video has some additional challenges. For instance, Flash content cannot currently be accessed on devices using the iOS operating system (iPhones, iPads). For all mobile phones, it is often easier (and less error-prone) to embed a link to a video on YouTube or a similar external video site, because mobile-resident video players like QuickTime and Windows Media Player can run inconsistently or more slowly than desired for data collection. Additionally, when uploading or having the respondent send pictures or videos, be aware that this has implications for both speed (related to bandwidth and connectivity) and cost to the respondent’s data plan (if using the respondent’s own device).
 
(10) Pretesting is Essential! As with any survey or data collection tool, pretesting of the instruments used is a must. This includes everything from the user interface, to the functionality of the instrument or app, to the quality and completeness of the data exported for analysis -- and testing these elements across various operating systems and smartphone models. Although a great deal of functionality is consistent across smartphones, there is enough inconsistency to require extensive pretesting across multiple platforms. Smartphone emulators, such as “Device Anywhere,” can help with this testing, particularly when comparing a survey across multiple mobile platforms -- many of these emulators are available online.
 
3.0 BEYOND SURVEYS - OTHER POTENTIAL DATA COLLECTION FEATURES
While much of the research to date on the use of mobile technologies for data collection has focused on administering surveys via mobile devices, there is a wide array of applications and features available on these devices that can augment and, in some cases, even replace survey data. In many respects, smartphones and tablets can be considered “multimode” platforms because they facilitate more than one form of data collection. In this section we examine five key technologies currently in use by researchers to extend or replace certain aspects of survey data collection: Location/Geopositioning, Barcodes/QR Codes, Visual Media, Bluetooth-Enabled Devices, and Data Collection Applications.3

3 Although there are a number of other mobile applications and features which are or could be deployed for data collection, many of these have not yet reached a level of maturation (too few empirical studies, niche use, or little current impact within the field of survey research) to warrant inclusion in this report. Researchers are encouraged, however, to continuously monitor the changing smartphone landscape for new features that could further enhance data collection efforts.
 
3.1 Location / Geopositioning (GPS)
 
Information about the physical location of a smartphone presumably also pertains to the respondent carrying it. Location information is typically recorded as longitude and latitude, and in some cases elevation as well. These data may indicate a fixed point or a series of location points and, hence, the locations visited by a respondent, the route they took between locations, and their speed of travel. GPS is probably the technology most commonly used by researchers to identify a respondent’s location via a mobile device and track his/her movements or travel.4 First introduced in cellphones in the late 1990s, GPS uses a series of satellites that send location and timing data directly to the phone. If the phone can pick up signals from three satellites, it can display its location on a two-dimensional map; with four satellite signals, GPS can also show elevation. This dependence on multiple satellites leads to some limitations. For instance, accurate identification of a person’s location can fail if the individual is indoors, in hilly terrain or valleys, or in “urban canyons” where tall buildings can prevent or interfere with the direct capture of satellite signals. For these and other reasons, coordinates can vary from the person’s true location by several hundred to several thousand feet (Boals and Kilger, 2013; Yin et al., 2014). Whether this is a concern depends on the level of accuracy required for a specific research project.
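
When assessing whether GPS fixes are accurate enough for a given project, one simple first check is the distance between successive fixes. The sketch below applies the standard haversine formula; the 500-meter flagging threshold and coordinates are arbitrary illustrations, not recommended values.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_jumps(fixes, max_m=500):
    """Flag consecutive fixes farther apart than `max_m` meters
    (an arbitrary illustrative threshold) as possible GPS error."""
    return [i for i in range(1, len(fixes))
            if haversine_m(*fixes[i - 1], *fixes[i]) > max_m]

# Example: the final fix jumps roughly 1 km and is flagged.
fixes = [(38.900, -77.040), (38.901, -77.040), (38.910, -77.040)]
print(flag_jumps(fixes))  # -> [2]
```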

Location has long been of interest to researchers who seek to understand consumer shopping behavior, transportation and driving patterns, public safety, environmental exposures, and health and physical activity. The data captured may include specific places, routes taken from one location to another, distances travelled, and the timing of travel (Lotan, Musicant, and Grimberg, 2014). Before GPS, such studies had typically been conducted using self-reports by respondents, via either a series of recall questions (for example, “What route do you typically take from home to work?”) or an activity, time-use, or transportation diary (Carrion et al., 2014; Maruyama, Mizokami, and Hato, 2014). Responses to self-reported recall studies are subject to a number of potential biases, including faulty memory, refusal to provide an accurate response, survey context and format, and social desirability effects (e.g., McClendon and O’Brien, 1988; Tourangeau, Rips, and Rasinski, 2000).
 
Diaries, whereby individuals are asked to record specific attributes of their activities (such as starting location, ending location, key landmarks passed during the journey, and the timing of and reason for the trip), can reduce the problem of recall, assuming the respondent enters the information concurrently or very soon after the event being captured. However, diaries can also be quite burdensome, particularly if respondents are asked to record information over an extended period of time. This can lead to under-reporting of activities that are not part of a normal routine (such as taking an alternative route due to a temporary detour or traffic pattern) or over-reporting of more routine activities (for instance, a respondent who does not fill out the diary in a timely manner may mistakenly record a trip to the grocery store because he/she typically goes to that store on a regular basis, even though no such trip occurred during the data collection period).

Electronic location capture via GPS on a mobile device can provide more complete, accurate, and timely data than self-reported methods. In its simplest form, survey participants can generate location data simply by walking or driving around an area with a GPS-enabled mobile device (Jones, Drury, and McBeath, 2011; Ythier, Walker and Bierlaine, 2013). Some of the earliest work in this area focused on transport studies, seeking to use GPS-enabled devices to replace or supplement conventional activity diaries (Wolf et al., 2001). This information can then be linked to responses to a survey taken either concurrently with the GPS coordinate capture or afterwards, to provide contextual covariates. Researchers have used GPS-enabled devices and surveys to gather information and location details across a range of issues and subjects, including health, consumer, and crime studies (e.g., Dwolatzky et al., 2006; Byass et al., 2008; Jones, Drury and McBeath, 2011; Abdulazim et al., 2013).
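
As one (hypothetical) way to perform such linkage, the sketch below matches each survey response to the nearest-in-time GPS fix using the pandas library; the timestamps, coordinates, and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical GPS trace and survey responses, each with timestamps.
gps = pd.DataFrame({
    "time": pd.to_datetime(["2014-04-01 09:00", "2014-04-01 09:05",
                            "2014-04-01 09:12"]),
    "lat": [38.900, 38.902, 38.905],
    "lon": [-77.040, -77.041, -77.043],
})
surveys = pd.DataFrame({
    "time": pd.to_datetime(["2014-04-01 09:06"]),
    "answer": ["Commuting"],
})

# Attach the nearest GPS fix (within 10 minutes) to each survey response.
linked = pd.merge_asof(surveys.sort_values("time"), gps.sort_values("time"),
                       on="time", direction="nearest",
                       tolerance=pd.Timedelta("10min"))
print(linked)  # the 09:06 response is paired with the 09:05 fix
```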
 
GPS is also being tested with field staff as part of quality control efforts to ensure interviewers are going to the correct sampled addresses and taking optimal travel routes, with varying degrees of success (Olson and Wagner, 2013). Initial studies found that interviewers do not call on sampled housing units in the order that they appear in the assigned segment. Rather, they tend to travel throughout the segment, skipping over sampled housing units, and retracing their path during a single visit to the sampled segment. The researchers also found GPS data to be difficult to interpret at times due to the lack of precision of the measures.

Respondents’ willingness to provide access to geolocation information may differ depending on the point in time as well as the culture (Cottrill, 2014). Participants in the 2007-2008 French National Travel Survey were asked about their willingness to accept a GPS device to monitor their travel: just under 30% said yes without condition, and an additional 5% said they would be willing as long as they could turn off the GPS when they chose to. A few years later, a survey conducted in the Czech Republic found that only 8% of those asked would be willing to participate in a travel survey using a GPS device (Biler, Senk and Winklerova, 2013). Meanwhile, the public’s experience with sharing location information with friends and family is growing, and the proliferation of GPS and location-related data that are now a fixture of smartphones may be changing public attitudes on this topic. This is, therefore, a topic that should continue to be monitored, and researchers must ensure adequate protection of any location information captured from a respondent as part of a study.
  
4 Cell phone carriers can also triangulate location using a person’s proximity to a series of cellular towers. This approach is not the focus here, as this form of location collection is almost exclusively proprietary and the data are typically difficult for researchers outside the cell phone company to obtain. GPS, in contrast, provides information readily available to researchers.
 
3.2 Scanning / QR & Barcode Readers:
 
Quick Response (QR) Codes and barcodes are optically machine-readable representations of information which can be scanned and decoded using a smartphone app (Mendelson, Lackey and Turner, 2012). They are typically represented as lines of varying widths (barcodes) or as a two-dimensional matrix (QR Codes), and allow for the organization and efficient transmission of large amounts of data (Shin, Jung and Chang 2012). Barcodes and related technologies have been available and utilized by survey researchers for quite some time, initially for inventory and logistical record-keeping (such as paper form check-in) but more recently for use in respondent recruitment and even measurement systems. These codes can be used for data collection activities, such as collecting information on consumer goods or other items containing a barcode or directing respondents to a URL or website for additional study information, study registration, or even an online survey.
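
For example, a mailed invitation might carry a QR code encoding a short survey URL. A minimal sketch using the open-source qrcode Python package (one of several libraries that could be used; the URL is hypothetical) is shown below.

```python
# Minimal sketch: encode a (hypothetical) short survey URL as a QR code
# image, using the open-source `qrcode` package (pip install qrcode[pil]).
import qrcode

SURVEY_URL = "https://example.org/s/ABC123"  # short and simple by design

img = qrcode.make(SURVEY_URL)            # returns a PIL image object
img.save("survey_invitation_qr.png")     # ready to place on mailed materials
```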

In some instances, barcodes can be used to allow respondents to scan in responses to survey items, such as entering information about produce or other traditionally non-barcoded items in studies of nutrition, health, or household economics by scanning a code sheet containing barcodes for these items (Schon et al., 2012; Scagnelli, 2012). Beyond the familiar use in supermarkets and retail stores, survey researchers are also finding uses for barcodes and readers in tracking information from returned mail questionnaires and in directing respondents to specific URLs or websites (for example, for study information or online surveys).

Although all mobile operating systems are capable of reading barcodes or QR codes through native software or free downloadable apps, use of these technologies is not universal. Mendelson and Romano Bergstrom (2013) found that older adults were only 13% as likely as younger adults to have used QR codes. Thus barcodes and QR codes, like other new technologies, show a differential use pattern of which researchers need to be aware. Additionally, Gluck (2012) tested the utility of including a QR code on mailed materials as a means of directing respondents to an online survey and found that few respondents took the time to scan the code and complete the survey in this manner.
 
3.3 Visual Data Capture
 
Capturing photos or video to accompany and enhance survey findings is not a new concept; however, the digital technology revolution has made the capture of visuals far easier, quicker, and less expensive. Because mobile phones are carried almost constantly, camera phones allow respondents to capture visual data at any time. Additionally, the rapid growth of social media platforms such as Instagram, Facebook, and Twitter, as well as newer platforms such as Snapchat and Kik, has led to dramatic changes in societal behavior: users are increasingly comfortable capturing, editing, and sharing photos and videos nearly instantaneously using a variety of photo apps, and are increasingly communicating via photos that they enhance with their own captions. While there are likely many reasons for this phenomenon, users are probably particularly drawn to the (1) speed (it is often easier to take and post a photograph than to describe an event or activity in text) and (2) efficiency in relaying complex information (following the adage that “a picture is worth a thousand words”).

The rapid increase in people’s adoption of photo sharing technology is impressive. An October 2013 survey by the Pew Research Center found that 54% of adult internet users in the US post original photos or videos online that they have taken themselves (Duggan, 2013). There are clear age distinctions in these behaviors, with 79% of younger adults aged 18 to 29 reporting they post photos they have taken, compared to 56% of those aged 30 to 49, 37% of those aged 50 to 64, and 19% of those aged 65 and older. Younger adults are also much more likely to use mobile apps for social network sites that specialize in the presentation of visuals. Among all cell phone owners, 18% indicated they use Instagram, compared to 43% among those aged 18 to 29. Approximately one in four Instagram users indicate that they post photos and/or videos multiple times per day. These trends are similar with Snapchat, which does not permanently archive photos for users but rather automatically deletes messages soon after they are received, allowing users greater perceived privacy. Just under 10% of all cell phone owners use Snapchat, compared to more than one in four of those aged 18 to 29.

Collecting visual data in conjunction with survey information can serve several useful functions: (a) adding context to survey data; (b) providing information or referents that can be coded; (c) providing a potential means of improving respondent engagement with a survey; and (d) serving as a training tool for respondents participating in long-term or more complex data collection efforts.
 
Adding context to survey data. What has been called “participatory photography” is increasingly common in many areas of social science for gathering qualitative or contextual information to supplement other data collection, such as surveys or in-depth interviews (e.g., Wang, Burris, and Ping 1996; Gotschi, Delve, and Freyer 2009). In the simplest uses, it can help to bring survey data “to life,” providing visual references of activities, places or issues asked about in a survey. For instance, a study of the 2010 World Cup Tournament in South Africa asked participants to take pictures of what they were focused on at the time they completed a survey (Link, 2013). The photos provided a qualitative context for the empirical survey data, showing whether respondents who said they were currently “watching the games” were doing so with friends at home, in a crowded bar, or alone in an apartment. As another example, Jones, Drury, and McBeath (2011) report using a combination of participatory photography and a simple
mobile survey to have respondents capture positive and negative aspects of the neighborhood in which they live. Respondents themselves took photos of various features of the neighborhood and they were then asked to rate each feature on a positive-negative scale.

Providing additional information. Photos and videos can also be used to allow respondents to provide details of locations, activities, or phenomena that cannot be easily or adequately described through an open-ended question or a series of closed-ended questions. For example, if respondents provide photos of potential environmental hazards, researchers can in essence view what the respondents were seeing, better understand the responses, code aspects of the photos, and analyze these coded data along with the survey results. Visual data used in this way therefore become a way of extending the information being collected and analyzed as part of the survey. This can include “geotagging,” the technique of associating (or “tagging”) media (such as photos, video, or audio) with a specific location denoting where the picture was taken (in coordinates), thus allowing the photo to be mapped. Since real-time geotagging provides the real-time location of the person operating the device, it is possible to track where that person has been from the data they publish.
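
To illustrate how geotags can be read programmatically -- and how much location information a single photo may carry -- the sketch below extracts embedded GPS EXIF tags with the Pillow imaging library. Whether such tags exist depends on the capturing device and its settings; the file name is hypothetical.

```python
# Sketch: extract embedded GPS coordinates ("geotags") from a photo's EXIF
# data with Pillow (pip install Pillow). Tags exist only if the capturing
# device recorded them; the file name below is a hypothetical placeholder.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def geotags(path):
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            # Map numeric GPS sub-tag ids to readable names.
            return {GPSTAGS.get(k, k): v for k, v in value.items()}
    return {}

print(geotags("respondent_photo.jpg"))  # e.g., GPSLatitude, GPSLongitude
```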

Improving respondent engagement. Asking respondents to take photos as part of the survey process can also help to engage respondents with the data collection process and potentially to persist in completing the study. Debrief interviews from one study in which respondents were asked to take photos of what they were focused on at the time of the survey request (which occurred up to five times per day) over several weeks indicated that taking the photos was one of the most enjoyable aspects of the study and was a factor in respondents’ continued participation (Lai et al., 2010). Likewise, the South Africa World Cup study found a high level of compliance with requests to take photos and provide brief captions over the 35 days of the study (Link, 2013).

Training respondents remotely. Photo and video capabilities on mobile devices also have another potential use: training of respondents. Respondents can now click on and play videos that walk them through the data collection process they are being asked to participate in. This can save the time and money of having interviewers contact and train respondents in how to complete study tasks, particularly if the tasks involve more than simply completing a point-in-time survey. It also allows respondents to view (or re-view) the training at a time and place of their choosing.

Whatever the task, keep in mind that photo image size (in pixels) varies widely across smartphone models. The size of the photo affects the uploading time and the amount of data consumed from the user’s data plan. Researchers should therefore attempt to determine the lowest level of image quality required for photo processing and, where possible, provide users options to reduce the size of their images before uploading, as shown in the sketch below. This helps minimize the burden on the respondent’s data plan.
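
One way to implement such size reduction is sketched below using the Pillow imaging library; the maximum edge length and JPEG quality are illustrative values, not recommendations, and the file names are hypothetical.

```python
# Sketch: cap a photo's longest edge and recompress it before upload to
# limit data-plan consumption (Pillow). Sizes and quality are illustrative.
from PIL import Image

def shrink_for_upload(src, dst, max_edge=1280, quality=70):
    img = Image.open(src).convert("RGB")  # drop alpha so JPEG save works
    img.thumbnail((max_edge, max_edge))   # shrinks in place, keeps aspect ratio
    img.save(dst, format="JPEG", quality=quality)

shrink_for_upload("respondent_photo.jpg", "respondent_photo_small.jpg")
```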
  
3.4 Bluetooth-Enabled Devices and Related Technologies
 
Smartphones can extend or augment survey data collection through their ability to wirelessly connect to an array of external devices, transmitting data of various types back and forth. The most common and extensively used technology standard for this kind of high-speed smartphone-to-external-device data transmission is Bluetooth. Smartphones are increasingly used in health and medical studies as a conduit for receiving biohealth information from portable medical devices (e.g., blood pressure, glucose, and pulse oximeter monitors, weight scales) and mobile sensors (e.g., physical activity and accelerometer counts, heart rate, respiration rate, pulse pressure via chest or armbands, wireless electrodes; Gregoski et al., 2012). Once the mobile device receives the pertinent information, the data are processed and encrypted, and the data packets are transferred to some form of localized or web-based server for processing. The process allows study participants to easily self-monitor various health parameters and provides information to researchers.
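
The relay pattern described above -- receive a reading, encrypt it, stage it for upload -- can be sketched as follows. This minimal illustration uses symmetric Fernet encryption from the cryptography package; the reading, device name, and upload endpoint are hypothetical, and a real deployment would also need key provisioning and authentication.

```python
# Sketch of the relay step: encrypt one sensor reading and stage the packet
# for upload, using the `cryptography` package's Fernet recipe. The reading
# and endpoint are hypothetical; real deployments need key management too.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, provisioned per device
cipher = Fernet(key)

reading = {"device": "bp-monitor-01", "systolic": 118, "diastolic": 76,
           "timestamp": "2014-04-01T09:06:00Z"}
packet = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# The packet would then be POSTed to the study's collection server, e.g.:
#   requests.post("https://example.org/ingest", data=packet)
print(len(packet), "byte encrypted packet staged for upload")
```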

Sensing technologies integrated into smartphones have also been used to reduce measurement error in studies of health and environmental exposure. Furberg and colleagues (2007) detailed the technology and logistics required to conduct in-home monitoring of environmental exposures, data that were later combined with survey responses. De Nazelle and colleagues (2013) obtained information on physical activity and geographic location that they linked to space-time air pollution data. They found that this approach could substantially alter exposure estimates; for instance, on average, travel activities accounted for 6% of people’s time but 24% of their daily inhaled NO2. They concluded that, given the growing number of users, this technology could provide an unobtrusive means of enhancing epidemiologic exposure data and similar information at low cost.

Use of smartphones to relay data from Bluetooth-enabled monitoring devices has several advantages over desktop/laptop computers, including higher population penetration, increased privacy, lower purchase cost, easier transport, and overall greater convenience of use. Nonetheless, there are some potential drawbacks and challenges. Some monitoring devices can be expensive, and maintaining their functionality can bring additional costs. There is the potential for unexpected loss of the wireless sensor connection, increased battery drain on the smartphone from maintaining the device connection, and the burden of responsibility for keeping track of the external device (Boulos et al., 2011). Further, some devices require the user to wear multiple sensors, which can be uncomfortable or impede movement.
 
3.5 Mobile Applications as Infrastructure for Multimode Data Collection
 
Mobile software applications, or “apps,” are now an integral component of every smartphone. Like many other mobile-related technologies, app usage is typically higher among younger adults than among older ones (Purcell, 2010). In contrast to using a browser on a mobile device (the “mobile web”), apps can typically take greater advantage of the native capabilities of a smartphone, such as the camera, microphone, GPS, and scanner, and pull them together into a single interface. In essence, mobile apps can serve as the interface and skeletal infrastructure for a multimode data collection device.

The use of multimode data collection apps is not simply the next stage in the evolution of computer-assisted interviewing (CAI), but rather a species unto itself, with elements of CAI but also a new set of user expectations. As Link, Lai and Vanno (2012) note, being involved with a typical CAI survey is a rare event for most respondents; however, much of the population now has considerable experience with mobile apps. This experience can be expected to shape their expectations for how an app should operate, including ease of use, intuitive interface, speed, usefulness or utility (including whether it is entertaining or otherwise engaging), and use of native smartphone features. Researchers need to be aware of these societal trends and expectations and develop data collection tools that are in line with (or at least not too far out of line from) these emerging norms.

In addition to simple survey apps, a number of studies are now utilizing smartphone apps to facilitate multimode data collection. As one example of the extent of multimodality that a smartphone can allow, a Nielsen data collection app for recording television viewing included a registration survey, a two-week activity diary to capture all instances of television viewing, an in-app tutorial, and in-the-moment “trigger” surveys -- that is, short surveys administered based on a trigger or event, such as time of day, watching a particular type of show, or personal characteristics of the respondent (Lai, Link and Vanno, 2012; Lai, Link, and Bristol, 2012; Vanno, Lai and Link, 2012; Link, Lai and Bristol, 2013; Bristol, Lai and Link, 2013). The app also included different “gamification” elements in an attempt to further engage respondents in the data collection process. These elements included the use of points and levels, the ability to connect and push content to social media platforms (Facebook and Twitter), and virtual badges, which proved somewhat effective with particular demographic groups (younger adult Asians and Hispanics). Similarly, an app to help capture “in-the-moment” snack purchases at quick-marts used a range of smartphone features, including short surveys, barcode scanning, GPS tracking, and taking pictures of goods purchased (Scagnelli et al., 2012).
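
The “trigger” logic described above can be thought of as a set of predicates evaluated against incoming events, with matching rules launching a short survey. A minimal sketch of that idea follows; the rule names and event fields are hypothetical.

```python
# Hypothetical sketch of in-the-moment "trigger" survey logic: each rule is
# a predicate over an incoming event; matching rules launch a short survey.
from datetime import datetime

RULES = [
    ("evening_viewing", lambda e: e["type"] == "tv_viewing"
                                  and e["time"].hour >= 20),
    ("sports_genre",    lambda e: e.get("genre") == "sports"),
]

def surveys_to_trigger(event):
    """Return the names of all trigger surveys the event qualifies for."""
    return [name for name, predicate in RULES if predicate(event)]

event = {"type": "tv_viewing", "genre": "sports",
         "time": datetime(2014, 4, 1, 21, 15)}
print(surveys_to_trigger(event))  # -> ['evening_viewing', 'sports_genre']
```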

While data collection apps can include many attractive features, they can be time consuming and expensive to develop and test. The advantages of drawing on multiple features of a smartphone for a given data collection effort may therefore not be cost-effective for one-time (point-in-time) surveys.

4.0 PRIVACY CONSIDERATIONS
 
Researchers cannot delve into the world of mobile and other new technologies without an understanding of some of the ethical and legal implications these technologies may have for both respondents and the researchers themselves. Headlines are increasingly filled with reports of unintentional (and sometimes intentional) privacy violations, systems that reveal respondents’ personally identifying information, or data collection efforts that cross the boundaries of collective public comfort. When asked, the public does tend to express considerable concern about the privacy of their personal information (Boyles, 2013; Rainie, 2013). And yet, other headlines reflect increasing interest in and adoption of services that collect and preserve personal data, such as “Three-quarters of Smartphone Owners Use Location-Based Services” (Zickuhr, 2012) and “Mobile Payments May Replace Cash, Credit-Cards by 2020!” (Kelly, 2012). This leaves researchers facing something of a “Privacy Paradox”: people may indeed be concerned about data privacy, and yet they continue to engage in behaviors with their mobile devices that put their data at risk (Link, 2012).

While public behaviors and attitudes about data privacy may be complex and at times contradictory, the path for researchers is clear: we need to ensure the protection of our respondents’ private data through every phase of our studies -- and beyond. Protecting privacy is not a simple, one-time process, but a complex, ongoing endeavor. Even when a study is completed, there may be data or residual information that requires protection. Researchers should design and implement studies that continually protect respondent information and should be aware of the evolving norms and concerns in the geographic areas and among the populations they study. European norms on data privacy and concerns about surveillance, for example, differ substantially from US norms, and they can vary from country to country.

The difficulty in regulating privacy today stems from the speed of technological innovation, which makes large volumes of information easier to access but also makes it more difficult for members of the public to understand how their data are being shared and how to control them. It is the researcher’s responsibility to protect the information collected from respondents and to inform respondents about the potential risks of the data collection effort and the uses of the data. At times this begins with researchers developing a better understanding of the complexities of the data collection task and the risks associated with the technologies used to complete it.

Another set of ethical considerations has to do with data storage and how respondents are to be informed about the privacy, security, and accessibility of the responses they provide. Many respondents are likely unaware that the responses they provide may be differently accessible to third parties (mobile providers, law enforcement, hackers) depending on the mode of responding, although general awareness of security concerns is probably rising. Data sent via a mobile web browser will be differently accessible than voice data, which will be differently accessible than text messages, depending on the security features of the mobile providers involved. Responses on a mobile web survey can be visible to an onlooker who can see the screen, as might an entire text interview thread (questions and responses), depending on how a messaging app is configured. Mobile voice interactions also have the potential to be hacked or wiretapped, although that seems rarer. Researchers need to consider when respondents should be informed about the security risks to their data and any potential that their identities might be recoverable -- even if the risks are no greater than at any other time they use the mobile web or text -- and how to weigh the desirability of fully informing respondents of risks against the potential effects on participation and coverage.

Researchers also need to consider whether a mobile interview (web, text, voice) puts their respondents at greater environmental risk than a desktop or landline interview; for example, interacting with a smartphone screen via web, app, or text while walking or driving has well-documented dangers. Even if a “safe-to-talk” or “safe-to-text” query protects researchers from legal responsibility for any harm to a respondent, researchers should consider whether the population they are sampling has any special behavioral propensities that put them at unusually high risk.

Use of smartphones for data collection raises other issues when researchers utilize some of the additional functionality of the mobile devices -- for example, the use of GPS to identify a respondent’s location and track their movements. Location is a form of personally identifying information because it is a part of the respondent’s physical context. When combined with even a small amount of additional data, a number of other attributes, including, ultimately, identity, can be inferred (a simple sketch of such inference follows the list below). Potentially identifying characteristics include, but are not limited to:
  
  • Home, work address, personal itineraries
  • Activities, habits, emotions and psychology
  • Co-location – presence of other people / inferred social context
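
As a concrete illustration of how easily such inference can occur, the deliberately crude sketch below guesses a likely “home” location simply by taking the most common rounded nighttime coordinate in a (hypothetical) location trace -- one reason raw location traces must be treated as identifying data.

```python
# Deliberately crude sketch: infer a likely "home" location from a trace by
# taking the most common rounded coordinate among nighttime fixes.
from collections import Counter
from datetime import datetime

trace = [  # hypothetical (timestamp, latitude, longitude) fixes
    (datetime(2014, 4, 1, 2, 10), 38.9001, -77.0402),
    (datetime(2014, 4, 1, 3, 40), 38.9003, -77.0401),
    (datetime(2014, 4, 1, 14, 5), 38.8899, -77.0501),
    (datetime(2014, 4, 2, 1, 55), 38.9002, -77.0403),
]

night = [(round(lat, 3), round(lon, 3))   # roughly 100 m grid cells
         for t, lat, lon in trace if t.hour < 6 or t.hour >= 22]
home, count = Counter(night).most_common(1)[0]
print(f"Inferred home cell: {home} ({count} nighttime fixes)")
```
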
Likewise, the collection of visual data via mobile devices raises some new challenges for researchers. First is the risk of exposure. There is a chance that if a respondent takes a photo that includes an address, street name or some other information (for instance, if certain types of administrative records are photographed), the identity and location of the respondent could be determined. Second, there is a chance that others, who have not given consent for the study, are captured in the photo -- either directly or in the background – and that they could also be identified. Third, there are privacy laws that govern the taking and distribution of photographs in many areas in the US and these laws can vary widely from state to state. The legal issues can be complex because while most localities tend to allow what can be seen from public view to be photographed, what counts as a “public place” can be ambiguous, potentially placing the respondents, and possibly the researchers, at legal risk.

In the use of these devices, researchers also need to make sure they comply with national and local privacy legislation and requirements, which often include issues of respondent notification, consent, data security, and information access (short- and long-term). Internationally, there are also rules governing the transmission of data across borders. This can be a daunting task for any researcher, so collaborating with or following organizations that specialize in this area -- such as the Center for Democracy and Technology, the International Association of Privacy Professionals, the Marketing Research Association, and the Future of Privacy Forum -- is good advice. In the absence of clear legal direction, researchers need to self-regulate, adapting survey screeners and research documentation to accommodate the portability and flexibility of the devices used for research so as not to erode the protection of human subjects.

Researchers also need to be prepared to encounter children in ways they might not have previously (i.e., in ways that make it difficult to identify them). Ever younger children now have access to, or are “owners” of, mobile devices. Depending on the mode of recruitment -- cold calling cell phone numbers, emailing, or texting -- researchers will inevitably reach persons under age 18. Researchers need rules and standard operating procedures for how to identify an underage person and proceed in such instances -- with the default being to exit the data collection immediately.
 
In addition to concerns about privacy, there are laws and regulations that researchers need to consider. In particular, with regard to the use of texting as an initial contacting mode, the AAPOR Cell Phone Task Force reports that:
 
The TCPA restrictions on using an automatic telephone dialing system to call a cell phone could apply to the sending of text messages as well as regular telephone calls. However, several appeals court cases have recently left the TCPA‘s application unclear. In addition, researchers that send text messages to cell phones in compliance with the TCPA (either manually or with expressed prior consent) could find their messages subject to the CAN-SPAM Act (16 CFR Part 316), which regulates commercial e-mail (spam).
 
Even though legitimate survey and opinion research is not defined by the TCPA as being commercial in nature, researchers are encouraged to always include opt-out notices and capability in text messages, as would be required under the CAN-SPAM Act. There also are numerous state laws regulating bulk e-mail and spam, and unsolicited telephone calling, of which researchers should be aware. (pg.75)
 
We recommend careful attention to relevant laws and regulations surrounding text messaging as well as those governing other smartphone features before any research is implemented in this area.
 
In the end, as with any study, researchers should follow the philosophy of “Do No Harm,” developing study designs, protocols, and technologies that ensure respondents will not be harmed in any way or adversely affected as a result of participating (knowingly or unknowingly) in the research. Our research (and business) depends on the good will and trust of the public -- it is every researcher’s obligation to protect that trust. 
 
5.0 FUTURE RESEARCH
 
We are in the midst of a communication technology revolution that is rapidly altering not only the landscape for how researchers collect information about people’s attitudes, opinions, and behaviors, but also the society we study. Changes in the way the public communicates and obtains information can have very real effects on the phenomena we seek to measure. In that respect, the fields of study related to mobile devices are vast and varied. Based on our review of the current state of the field, some areas should be highlighted for future research -- although this is by no means an exhaustive list of potentially fruitful areas:
 
  • Despite the many concerns related to adopting mobile technologies for data collection, many of the longstanding principles researchers need to consider are likely to persist (Schober and Conrad, 2008). These include minimizing coverage, sampling, and measurement error, as well as specific areas like reducing the likelihood of respondents’ least-effort and satisficing strategies, promoting accurate comprehension of the data collection task, and seriously considering alternative single-mode and multimode data collection designs. In other words, many of the concerns in today’s environment are the same as in prior years and will require ongoing inquiry.
  • Focusing on the widespread utility of mobile devices for data collection, there is still a question as to whether mobile is a niche methodology. Supporting mobile does appear to be a requirement, given the increase in people taking online surveys via mobile devices and its use in specialty panels, but does it offer modes of collection robust enough for a general population survey? This remains to be seen.
  • One area of great promise with mobile is the ability to capture data “in-the-moment,” including brief surveys via text, mobile web, or app; pictures or videos; scanning information; GPS; and the like. This has been an area of some research already, but with somewhat mixed results. Several key questions are still unanswered, such as: “Does capturing survey data at the time of a certain behavior or thought result in better quality data than those obtained via a recall survey?”; “If so, at what time interval is in-the-moment better than recall -- hours, days, or weeks after the event of interest?”; “Does in-the-moment capture actually lead to greater nonresponse if viewed as more of a burden or disruption by the respondent?”; and “How complex can in-the-moment data capture be when used for repeated measures over time -- survey only, survey plus visuals, survey plus visuals and scanning, etc.?” This is an area with great potential and some encouraging results, yet no final verdict on long-term utility.
  • There is a need to develop best practices based on the growing number of methodologies being used, yet there are still few clear findings to guide such an effort. Among the studies that exist, there are mixed findings on many issues. This is due in part to different study designs, but also to changes in mobile technology over time and to “societal learning” and growing comfort with these devices and their many features. It is important to keep in mind that rapid changes in the technology itself may confound evaluation of findings from separate studies appearing even within a 6- to 12-month time frame.
  • There is also a need for more assessment of auxiliary data collection capabilities -- GPS, scanning, visual data, and wireless devices connected to mobile. While much is known about the mechanics of these various technological tools, little has been published about their use as data collection devices to augment or replace surveys or specific survey items. Such studies are needed in terms of respondent cooperation and compliance, data quality, and potential sources of error.
  • Finally, the field requires a better understanding of the growing concerns related to privacy and the security of data transfers with mobile technologies. These are needed not only to protect respondents but also to craft more understandable and effective consent procedures, statements of risk, and similar documentation.

6.0  CONCLUSION
 
We are in an era of rapid and continuous change -- and this is the “new normal!” Emerging technologies not only present opportunities and challenges for researchers; they are also changing the very attitudes, opinions, behaviors, and expectations of those we study. Technology change is driving social change, which means researchers need to stay attuned to current trends if they wish to be successful. These new specialized tools also have their own “rules,” many of which researchers are still working out. These approaches may work well with some sets of respondents, but not as well with others. As with any measurement tool, appropriate use depends on what we want to know and need to measure -- learning to apply the right technology to the problem at hand. In utilizing these new approaches, educate yourself! Then share your findings and lessons with the field -- that is how we all progress and learn in this new era.
 
REFERENCES
 
Abdulazim, J., H. Abdelgawad, K. Nurul Habib, and B. Abdulhai (2013). “Using Smartphone Sensor Technologies to Automate Collection of Travel Data.” Travel Behavior (2): 44-52.
 
Andrews, L., R. Bennett, and J. Drennan (2011). “Capturing Affective Experiences Using the SMS Experience Sampling (SMS-ES) Method.” International Journal of Market Research, 53: 479-506.
 
Anhøj, J., and C. Møldrup (2004). “Feasibility of collecting diary data from asthma patients through mobile phones and SMS (short message service): Response rate analysis and focus group evaluation from a pilot study.” Journal of Medical Internet Research 6:e42.
 
Armoogum, J., S. Roux, and T.H.T. Pham (2013). “Total Nonresponse of a GPS-based Travel Survey.” Paper presented at the Conference on New Techniques and Technologies for Statistics, Brussels.
 
Bailey, J., M. Link, E.N. Bensky, L. Vanno, J. Lai, K. Benezra, and H. Makowska (2011). “Can Your Smartphone Do This: A New Methodology for Advancing Digital Ethnography.” In Proceedings of the American Statistical Association, Survey Research Methods Section (pp. 5762-5773). Alexandria, VA: American Statistical Association.
 
Biler, S., P. Senk, and L. Winkleroba (2013). “Willingness of Individuals to Participate in a Travel Behavior Survey Using GPS Devices.” Paper presented at the Conference on New Techniques and Technologies for Statistics, Brussels.
 
Boals, T. and M. Kilger (2013). “The Mechanics of GPS Geo-Location for Mobile Devices - Their Potential for Measurement Error and Some Illustrative Data.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Bosnjak, M., G. Metzger, and L. Gräf. (2010). “Understanding the Willingness to Participate in Mobile Surveys: Exploring the Role of Utilitarian, Affective, Hedonic, Social, Self-Expressive, and Trust-Related Factors,” Social Science Computer Review 28: 350-370.
 
Bosnjak, M., T. Poggio, and F. Funke (2013). “Online Survey Participation via Mobile Devices: Findings from Seven Access Panel Studies.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Boulos, M., S. Wheeler, C. Tavares, and R. Jones (2011). “How Smartphones are Changing the Face of Mobile and Participatory Healthcare: An Overview with Example from eCAALYX.” BioMedical Engineering Online, 10:24. Available at: http://www.biomedical-engineering-online.com/content/10/1/24
 
Boyles, J. (2012). “Privacy and Data Management on Mobile Devices.” Report from the Pew Research Center’s Internet and American Life Project (Sept 5, 2012). Available online at: http://www.pewinternet.org/Press-Releases/2012/Mobile-Privacy.aspx
  
Bristol, K., J. Lai, and M. Link (2013). “Usability of App Features and Tutorials.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Brenner, P. and J. DeLamater (2012). “Using SMS Text Messaging to Collect Time Use Data.” Paper presented at the annual conference of the American Association for Public Opinion Research, Orlando, FL.
 
Buskirk, T.D., M. Callegaro, and K. Rao (2010). “‘N the Network’? Using Internet Resources for Predicting Cell Phone Number Status.” Social Science Computer Review: Special Issue on Mobile Surveys, Vol. 28, No. 3, 271-286.
 
Buskirk, T.D., M. Gaynor, C. Andrus, and C. Gorrell (2011). “An App a Day Could Keep the Doctor Away: Comparing Mode Effects for an iPhone Survey Related to Health App Use.” Paper presented at the annual conference of the American Association for Public Opinion Research, Phoenix, AZ.
 
Buskirk, T.D. and C. Andrus (2012a). “Smart Surveys for Smartphones: Exploring Various Approaches for Conducting Online Mobile Surveys via Smartphones.” Survey Practice. Available at: http://surveypractice.wordpress.com/2012/02/21/smart-surveys-for-smart-phones/.
 
Buskirk, T.D., and C. Andrus (2012b). “Online Surveys Aren’t Just for Computers Anymore! Exploring Potential Mode Effects Between Smartphone vs. Computer-Based Online Surveys.” Paper presented at the annual conference of the American Association for Public Opinion Research, Orlando, FL.
 
Buskirk, T.D. and C. Andrus (2012c). “How Often Do You Use the App with a Bird on It? Exploring Differences in Survey Completion Times, Primacy Effects and App Icon Recognition between Smartphone and Computer Survey Modes.” Paper presented at the RC33 Eighth International Conference on Research Methodology, Sydney, Australia.
 
Buskirk, T.D. and C. Andrus (2014). “Making Mobile Browser Surveys Smarter: Results from a Randomized Experiment Comparing Online Surveys Completed via Computer or Smartphone.” Forthcoming in Field Methods, Vol. 26, No. 4.
 
Buskirk, T.D., L. Walton, and T. Wells (2013). “To Complete by Smartphone or by Tablet or by Computer or by Paper & Pencil – That is the Question: Exploring Factors Associated with Respondent Mode Choice for Multimode Surveys.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Buskirk, T.D. (2013). “Smarter Smartphone Surveys 201: Data Collection Methods and Survey Design Considerations.” Webinar presented for the American Association for Public Opinion Research. Available at: http://www.aapor.org/source/education/webinar_recordings.cfm#.UsmPYNrnYfg
 
Byass, P., S. Hounton, M. Ouedraogo, H. Some, I. Diallo, E. Fottrell, A. Emmelin, and N. Meda (2008). “Direct data capture using hand-held computers in rural Burkina Faso: Experiences, benefits, and lessons learned.” Tropical Medicine & International Health 13:25-30.
 
Callegaro, M. (2002). The cellular phone situation in Italy: Coverage, Frames & Billing Systems.  Roundtable Presentation at the Joint WAPOR/AAPOR 57th Annual Conference of the American Association for Public Opinion Research, St. Petersburg Beach, FL.
 
Callegaro, M. (2010). “Do You Know Which Device Your Respondent Has Used to Take Your Online Survey?” Survey Practice. Available at: http://surveypractice.wordpress.com/2010/12/08/device-respondent-has-used/.
 
Callegaro, M. and T. Macer (2011). “Designing Surveys for Mobile Devices: Pocket-Sized Surveys That Yield Powerful Results.” Short course presented at the annual meeting of the American Association for Public Opinion Research, Phoenix, AZ.
 
Carrion, C., F. Pereira, R. Ball, et al. (2014). “Evaluating FMS: A Preliminary Comparison with a Traditional Travel Survey.” Paper presented at the Transportation Research Board Annual Meeting, Washington, DC.
 
Comer, P. and T. Saunders (2012). “Technical Impact of Mobile Devices.” Presentation at the Council of American Survey Research Organizations Technology Conference, New York, NY.
 
Conrad, F., M. Schober, C. Zhang, H. Yan, L. Vickers, M. Johnston, A. Hupp, L. Hemingway, S. Fail (2013). “Mode choice on an iPhone increases survey data quality.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Cottrill, C. (2014). “Considering Smartphones: User Attitudes Towards Privacy and Trust in
Location-Aware Applications.” Paper presented at the Transportation Research Board 2014 Annual Meeting, Washington, DC.
 
Crawford, S., C. McClain, S. O'Brien, and T. Nelson (2013). “Examining the Feasibility of SMS as a Contact Mode for a College Student Survey.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
De Nazelle, A., E. Seto, D. Donaire-Gonzalez, et al. (2013). “Improving Estimates of Air Pollution Exposure Through Ubiquitous Sensing Technologies.” Environmental Pollution 176: 92-99.
 
Down, J. and S. Drake (2003). “SMS Polling: A Methodological Review.” Paper presented at the Fourth ASC International Conference on Survey and Statistical Computing, Warwick, UK, Sept 17-19.
 
Driscoll, H., J. Dayton, and A. Foushee (2013). “The iPad® Computer-Assisted Personal Interview system - A Revolution for In-Person Data Capture?” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Duggan, M. (2013). “Photo and Video Sharing Grow Online.” Report from the Pew Research Center’s Internet and American Life Project (Oct 28, 2013). Report available at: www.pewinternet.org/reports/2013/photos-and-videos.aspx.
 
Duggan, M. and A. Smith (2013). “Cell Internet Use 2013.” Report from the Pew Research Center’s Internet and American Life Project (Sept 16, 2013). Report available at: www.pewinternet.org/reports/2013/Cell-Internet.aspx.
 
Dwolatzky, B., E. Trengove, H. Struthers, J. McIntyre, and N. Martinson (2006). “Linking the global positioning system (GPS) to a personal digital assistant (PDA) to support tuberculosis control in South Africa: A pilot study.” International Journal of Health Geographics 5:34.
 
Fuchs, M., (2008). “Mobile Web Surveys: A Preliminary Discussion of Methodological Implications”. In F. Conrad & M. Schober (Eds.), Envisioning the Survey Interview of the Future (pp. 77-94). Hoboken, NJ: Wiley.
 
Furberg, R., D. Schulman, A. Zhang, P. Kizakevich, R. Whitmore, S. Duncan, and J. Levinsohn (2007). “Mobile and Bluetooth Wireless Technologies in Longitudinal Surveys of Human Exposure-Related Behavior.” Paper presented at the 2007 FedCASIC Conference, Washington, DC.
 
Gluck, A. (2012). “Do Surveys That Are Completed on Mobile Devices Differ From Surveys Completed Online, Over the Phone or via Mail?” Paper presented at the annual conference of the Midwest Association for Public Opinion Research. Chicago, IL.
 
Gotschi, E., R. Delve, and B. Freyer (2009). “Participatory photography as a qualitative approach to obtaining insights into farmer groups,” Field Methods 21:290-308.
 
Graham, P., and C. Cobb (2013). “Comparison of Instantaneous Mobile Time Use Data Collection Methods to Traditional Time Diary Methods.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Gregoski, M., M. Mueller, A. Vertegel, et al. (2012). “Development and Validation of a Smartphone Heart Rate Acquisition Application for Health Promotion and Wellness Telehealth Applications.” International Journal of Telemedicine and Applications, doi:10.1155/2012/696324.   http://www.hindawi.com/journals/ijta/2012/696324/
 
Guidry, K. (2012). “Response Quality and Demographic Characteristics of Respondents Using a Mobile Device on a Web-Based Survey.” Paper presented at the annual conference of the American Association for Public Opinion Research. Orlando, FL.
 
Johnson, E.P., C. Shea, M. Roberts, and W. Hadlock (2013). “Matching Data Collection Method to Purpose: In the Moment Data Collection with Mobile Devices for Occasioned Based Analysis.” Survey Practice, 6(1).
 
Jones, P., R. Drury, and J. McBeath (2011). “Using GPS-Enabled Mobile Computing to Augment Qualitative Interviewing: Two Case Studies.” Field Methods, 23: 173-187.
 
Kelly, S. Murphy (2012). “Mobile Payments May Replace Cash, Credit Cards by 2020.” Mashable (e-journal, Apr 17, 2012). Available at: http://mashable.com/2012/04/17/mobile-payments-2020/
 
Kuntsche, E. and B. Robert (2009). “Short Message Service (SMS) Technology in Alcohol Research – A Feasibility Study.” Alcohol & Alcoholism 44: 423-428.
 
Lai, J., L. Vanno, M. Link, J. Pearson, H. Makowska, K. Benezra, and M. Green (2010). “Life360: Usability of Mobile Devices for Time Use Surveys.” Survey Practice, February: www.surveypractice.org.
 
Link, M.W. and T.D. Buskirk (2012). “The Role of New Technologies in Powering, Augmenting, or Replacing Traditional Surveys.” Short course presented at the annual meeting of the American Association for Public Opinion Research, Orlando, FL.
 
Link, M. (2012). “Survey Enhancements and Data Privacy in a Mobile, Social World.” Plenary Presentation at the International Field Directors and Technologies Conference, Orlando, FL, May 21.
 
Link, M., J. Lai, and L. Vanno (2012). “Smartphone Applications: The Next (and Most Important?) Evolution in Data Collection.” Paper presented at the 67th Annual Conference of the American Association for Public Opinion Research, Orlando, FL, May 17-20.
 
Link, M. (2013). “Measuring Compliance in Mobile Longitudinal Repeated-Measures Design Study.” Survey Practice.
 
Link, M., J. Lai, and K. Bristol (2013). “Accessibility or Simplicity? How Respondents Engage with a Multiportal (Mobile, Tablet, Online) Methodology for Data Collection.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Lotan, T., O. Musicant, and E. Grimberg (2014). “Can Young Drivers Be Motivated to Use Smartphone-Based Driving Feedback?” Paper presented at the Transportation Research Board Annual Meeting, Washington, DC.
 
Macer, T. (2012). “Developments and Impact of Smart Technology.” International Journal of Market Research, 54: 567-570.
 
Manchin, R. and F. De Keulenaer (2013). “Envisioning the 'survey' of the future: the role of smartphones and tablets in face-to-face interviewing.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Maruyama, T., S. Mizokami, and E. Hato (2014). “A Smartphone-based Travel Survey Trial Conducted in Kumamoto, Japan: An Examination of Voluntary Participants’ Attributes.” Paper presented at the Transportation Research Board Annual Meeting, Washington, DC.
 
Mavletova, A. (2013). “Data Quality in PC and Mobile Web Surveys.” Social Science Computer Review, 31: 725-743.
 
McClain, C., S. Crawford, and J. Dugan (2012). “Use of Mobile Devices to Access Computer-Optimized Web Instruments: Implications for Respondent Behavior and Data Quality.” Paper presented at the annual conference of the American Association for Public Opinion Research, Orlando, FL.
 
McGeeney, K., and J. Marlar (2013). “Mobile Browser Web Surveys: Testing Response Rates, Data Quality and Best Practices.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
McClain, C., and S. Crawford (2013). “Grid Formats, Data Quality, and Mobile Device Use: A Questionnaire Design Approach.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
McClendon, M.J., and D.J. O’Brien (1988). “Question-order effects on the determinants of subjective well-being.” Public Opinion Quarterly 52:351-364.
 
Mendelson, J., J.L. Gibson, and J. Romano Bergstrom (2013). “Effects of Displaying Videos on Measurement in a Web Survey.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Mendelson, J., and J. Romano Bergstrom (2013). “Age Differences in the Knowledge and Usage of QR Codes.”  Proceedings of the 7th International Conference, UAHCI 2013, Las Vegas, NV, USA, pp. 156-161.
 
Mendelson, J., M. Lackey, and S. Turner (2013). “What Is That Thing? Knowledge and Usage of QR Codes.” In New Frontiers: Smart Data Collection – Innovations in the Use of Smartphones. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Orlando, FL.
 
Millar, M., and D. Dillman (2012). “Encouraging Survey Response via Smartphones: Effects on Respondents’ Use of Mobile Devices and Survey Response Rates.” Survey Practice, 5(3).
 
Morgan Stanley (2010). “Internet Trends.” Report available at: http://www.morganstanley.com/institutional/techresearch/pdfs/Internet_Trends_041210.pdf
 
Okazaki, S. (2007). “Assessing Mobile-Based Online Surveys.” International Journal of Market Research, 49(5): 651-675.
 
Olson, K., and J. Wagner (2013). “A Field Experiment Using GPS Devices to Measure Interviewer Travel Behavior.” Paper presented at the annual conference of the American Association for Public Opinion Research, Boston, MA.
 
Peterson, G. (2012). “Unintended Mobile Respondents.” Paper presented at the Council of American Survey Research Organizations Technology Conference, New York, NY.
 
Petras, A., S. Duan, and O. Dan (2013). “Cross-Platform Measurement: User Experience with a Smartphone and Web Self-Reported Data Collection Application.” Paper presented at the annual conference of the American Association for Public Opinion Research, Boston, MA.
 
Peytchev, A., and C. Hill. (2010). “Experiments in Mobile Web Survey Design: Similarities to Other Modes and Unique Considerations,” Social Science Computer Review 28: 319-335.
 
Purcell, K. (2010). “The Rise of Apps Culture.” Report from the Pew Research Center’s Internet and American Life Project (Sept 15, 2010). Available online at: http://www.pewinternet.org/Reports/2010/The-Rise-of-Apps-Culture.aspx.
 
Raento, M., A. Oulasvirta, and N. Eagle (2009). “Smartphones: An Emerging Tool for Social Scientists.” Sociological Methods & Research, 37: 426-454.
 
Rainie, L. (2013). “Anonymity, Privacy, and Security Online.” Report from the Pew Research Center’s Internet and American Life Project (Sept 5, 2013). Available online at: http://www.pewinternet.org/Press-Releases/2013/Anonymity-Privacy-and-Security-Online.aspx
 
Rainie, L., and A. Smith (2013). “Tablet and E-Reader Ownership Update.” Report from the Pew Research Center’s Internet and American Life Project (Oct 18, 2013). Available online at: http://www.pewinternet.org/Reports/2013/Tablets-and-ereaders.aspx
 
Reiter, T., A. Kraver, E. Stadler, C. Geyer, and M. Fallendorf (2012). “Usability of Tablet Computers in Travel Surveys.” Paper presented at the Transportation Research Board Annual Meeting, Washington, DC.
 
Return Path (2013). “Email Mostly Mobile.” E-Report available at: http://www.returnpath.com/wp-content/uploads/resource/email-mostly-mobile/Return-Path-Email-Mostly-Mobile1.jpg
 
Roe, D., Y. Zhang, and M. Keating (2013). “Piloting a Mobile Data Collection Application: SurveyPulseTM, by RTI International.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Runyan, J., T. Steenbergh, C. Bainbridge, D. Daughtry, L. Oke, and B. Fry (2013). “A Smartphone Ecological Momentary Assessment/Intervention App for Collecting Real-time Data and Promoting Self-Awareness.” PLoS ONE 8(8): e71325.
 
Saunders, T., K. Chrzan, and K. Luck (2012). “Scale Orientation, Number of Scale Points and Grids in Mobile Web Surveys.” Paper presented at the annual conference of the American Association for Public Opinion Research, Orlando, FL.
 
Scagnelli, J., J. Bailey, M. Link, H. Makowska, and K. Benezra (2012). “On the Run: In the Moment Smartphone Data Collection.” Paper presented at the annual conference of the American Association for Public Opinion Research, Orlando, FL.
 
Scherpenzeel, A.C., M. Morren, N. Sonck, and H. Fernee (2012). “Time Use Data Collection Using Smartphones: Results of a Pilot Study Among Experienced and Inexperienced Users.” Paper presented at the annual conference of the American Association for Public Opinion Research, Orlando, FL.
 
Schober, M.F., & Conrad, F.G. (2008). Survey interviews and new communication technologies. In F.G. Conrad & M.F. Schober (Eds.), Envisioning the survey interview of the future (pp. 1-30). New York: Wiley.
 
Schober, M., F. Conrad, C. Antoun, A. Bowers, A. Hupp, H. Yan (2013). “Conversational Interaction and Survey Data Quality in SMS Text Interviews.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Schon, D., M. Klinger, S. Kopf, W. Effelsberg (2012). “MobileQuiz - A Lecture Survey Tool Using Smartphones and QR Tags.” International Journal of Digital Information and Wireless Communications, 2: 231-244.
 
Shea, C., M. Roberts, E. Johnson, and W. Hadlock (2013). “Matching Data Collection Method to Purpose: In the Moment Data Collection with Mobile Devices for Occasioned-Based Analysis.” Survey Practice, 6(1).
 
Shin, D., J. Jung, and B.-Hee Chang (2012). “The Psychology Behind QR Codes: User
Experience Perspective.” Computers in Human Behavior, 28: 1417-1426.
 
Smith, A. (2013). “Smartphone Ownership.” Report from the Pew Research Center’s Internet and American Life Project (Jun 5, 2013). Report available at: www.pewinternet.org/reports/2013/Smartphone-Ownership-2013.aspx.
 
Stapleton, C. (2012). “Understanding Smartphone Usage to Take Web Surveys: A Cross-Country Analysis.” Paper presented at the annual meeting of the American Association for Public Opinion Research, Orlando, FL.
 
Stapleton, C. (2011). “The Smart(phone) Way to Collect Survey Data.” Paper presented at the annual meeting of the American Association for Public Opinion Research, Phoenix, AZ.
 
Steeh, C., T.D. Buskirk, and M. Callegaro (2007). “Using Text Messages in U.S. Mobile Phone Surveys.” Field Methods, 19: 59-75.
 
Tarkus, A. (2009). Usability of mobile surveys. In E. Maxl, N. Döring, & A. Wallisch (Eds.), Mobile market research (pp. 134-160). Herbert Von Halem Verlag.
 
Tourangeau, R., L.J. Rips, and K. Rasinski (2000). The psychology of survey response. Cambridge, UK: Cambridge University Press.
 
Tourangeau, R., M.P. Couper, and F.G. Conrad (2004). “Spacing, Position, and Order: Interpretive Heuristics for Visual Features of Survey Questions.” Public Opinion Quarterly, 68: 368-393.
 
Vanno, L., J. Lai, and M. Link (2012). “Assessing Data Quality and Respondent Compliance in a Smartphone App Survey.” Paper presented at the annual conference of the American Association for Public Opinion Research, Orlando, FL.
 
Virtanen, V., T. Sirkia, and V. Jokiranta (2007). “Reducing Nonresponse by SMS Reminders in Mail Surveys.” Social Science Computer Review, 25: 384-395.
 
Walton, L., T. Buskirk, and T. Wells (2013). “Smarter Online Panels for Smartphone Users: Exploring Factors Associated with Mobile Panel Participation.” Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.
 
Wang, C., M.A. Burris, and X.Y. Ping (1996). “Chinese village women as visual
anthropologists: A participatory approach to reaching policymakers.” Social Science & Medicine 42:1391-1400.
 
Weber, M., M. Denk, K. Oberecker, C. Strauss, and C. Stummer (2008). “Panel Surveys Go Mobile.” International Journal of Mobile Communications, 6: 88-107.
 
Wells, T., J. Bailey, and M. Link (2012a). “A Direct Comparison of Mobile vs. Online Survey Modes.” Paper presented at the annual conference of the American Association for Public Opinion Research, Orlando, FL.
 
Wells, T., J. Bailey, and M. Link (2012b). “Filling the Void: Gaining a Better Understanding of Tablet-Based Surveys.” Survey Practice 6(1).
 
Wolf, J., R. Guensler, and W. Bachman (2001). “Elimination of the travel diary: Experiment to derive trip purpose from global positioning system travel data.” Transportation Research Record: Journal of the Transportation Research Board 1768:124-134.
 
Yin, E., P. Li, J. Fang, and T. Qiu (2014). “Evaluation of Vehicle Positioning Accuracy by Using GPS-Enabled Smartphones.” Paper presented at the Transportation Research Board Annual Meeting, Washington, DC.
 
Ythier, J., J. Walker, and M. Bierlaire (2013). “The Influence of Social Contacts and Communication Use on Travel Behavior: A Smartphone-Based Study.” Paper presented at the Transportation Research Board Annual Meeting, Washington, DC.

Zickuhr, K. (2012). “Three-Quarters of Smartphone Owners Use Location-based Services.” Report from the Pew Research Center’s Internet and American Life Project (May 11, 2012). Available online at: http://www.pewinternet.org/Reports/2012/Location-based-services.aspx