Meet your Users in Situ: Data Collection from within Apps in Large-Scale Deployments

Nikolaos Batalas, Javier Quevedo-Fernandez, Jean-Bernard Martens, Panos Markopoulos
Copyright: © 2015 | Pages: 16
DOI: 10.4018/IJHCR.2015070102

Abstract

Increasingly, ‘app-store’ releases of software are used as a vehicle for large-scale user trials ‘in the wild’. Several opportunities and methodological challenges arise from having little or no access to users other than through the application itself. So far, researchers have needed to hardcode survey items into the software application under study, which is laborious and error prone. This paper discusses how these problems are addressed using TEMPEST, a platform for longitudinal in situ data collection. The authors illustrate the use of TEMPEST to study the deployment and real-world use of a tablet application called idAnimate, which has been designed to support the creation of simple animations as design representations during the creative design process. The authors discuss how the tool has supported the gathering of data in over 4000 installations, from both a development and a research perspective, and relate their experiences to current research perspectives on large-scale app trials.

Introduction

Usability and user experience testing typically requires the recruitment of test participants who represent as closely as possible the intended users of the system being evaluated. Whether evaluating systems in lab or field conditions, evaluators attempt to ensure the realism of their tests within the constraints imposed by the test setup. In this way, they aim to assess how users experience a system and to identify the problems that would occur in actual use. But such tests can be limited in their representation of actual use. For example, in the short time of testing, participants will usually not have the opportunity to develop their own use strategies or to become expert users of the application (Henze, Pielot, Poppinga, Schinke, & Boll, 2011). It is for this reason that field studies, where users are exposed to the system for longer periods and in actual settings, have been heralded as the gold standard for evaluations and evaluative user research (Carter, Mankoff, Klemmer, & Matthews, 2008).

However, even field studies are by their nature limited in duration and sample size, due to factors such as cost or the availability of personnel. These limitations make it difficult to address larger populations of test participants, and sampling bias may be unavoidable (Dufau et al., 2011). Last but not least, reactivity may also occur even with the most careful evaluators (Brown, Reeves, & Sherwood, 2011), with participants adjusting their behaviour during trials to meet what they interpret to be the expectations of the researchers. All the above arguments suggest that, from the users’ perspective, field trials are still fundamentally different from actual installation and usage of a system on their own initiative and for their own purposes.

Distribution channels modeled as app stores open up new opportunities to reach large numbers of users and to gather insights from actual use, lending ecological validity to results. Such evaluation of app-store-released software takes place on user-owned hardware, and relieves researchers from the burden of providing and supporting devices along with the application software (McMillan, Morrison, Brown, Hall, & Chalmers, 2010). Interestingly, deploying software through app stores also allows researchers to release applications as probes in a wider research context. In such cases, having their application put to actual use in the wild allows researchers not just to investigate how a particular application is used, but also to achieve a better understanding of how users function in a wider environment, of which the particular software application is only a part.

Using an application distributed at a large scale can inform the further development of apps, by functioning both as a conversation piece and as the medium through which a dialogue between developer (or researcher) and user occurs (Kranz, Murmann, & Michahelles, 2013). However, techniques for surveying users during wide deployments in actual use are still open to exploration. Best practices have yet to be established, and researchers are confronted with challenges at many levels. Cramer, Rost, Belloni, & Bentley (2010) identified the following challenges for researchers:

  • I. Collecting the data they wish for from the targeted user group, as compared with traditional research methods.

  • II. Coping with a diversity of platforms and devices, and overcoming the relevant technical challenges.

  • III. Deciding at what stage of development a research prototype can be released, to maximize the value of user feedback.

  • IV. Taking care of development costs, server operating costs, and technical support issues.

  • V. Ensuring an ethical research approach.

In this paper, we discuss a generic, platform-independent approach for gathering application usage data, as well as explicit user survey data, during large-scale deployments, over sustained periods, and within the actual context of use.
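
As a minimal illustration of what such in-app data collection can look like (a hypothetical sketch; the endpoint URL, payload shape, and function names are our own assumptions, not TEMPEST's actual API), the snippet below buffers usage events and survey responses on the device and ships them to a collection server in batches:

```typescript
// Hypothetical sketch of platform-independent in-app data collection.
// The endpoint and payload shape are illustrative assumptions.

type StudyEvent = {
  installationId: string;          // anonymous per-installation identifier
  timestamp: string;               // ISO 8601 time of the event
  kind: "usage" | "survey";        // implicit usage log vs. explicit survey answer
  name: string;                    // e.g. "animation_created" or a survey item id
  payload: Record<string, unknown>;
};

// Buffer events locally and flush them in batches.
const buffer: StudyEvent[] = [];

function getInstallationId(): string {
  // In a real app this would be a persisted random id; fixed here for brevity.
  return "demo-installation";
}

function logEvent(kind: "usage" | "survey", name: string,
                  payload: Record<string, unknown>): void {
  buffer.push({
    installationId: getInstallationId(),
    timestamp: new Date().toISOString(),
    kind,
    name,
    payload,
  });
}

async function flush(): Promise<void> {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, buffer.length);
  await fetch("https://study.example.org/events", {   // assumed collection server
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  });
}

// Example: one implicit usage event and one explicit survey answer.
logEvent("usage", "animation_created", { durationMs: 5200 });
logEvent("survey", "ease_of_use", { rating: 4 });
flush();
```

Buffering locally before flushing matters in practice, because data collection on user-owned devices has to survive intermittent connectivity.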

With regard to selectively collecting the data aimed for (I), we advocate progressively refining the queries posed to users, taking into account the data already assembled; this is currently not facilitated by standard app-store features or application development frameworks.
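
To make this concrete, consider the following sketch of such progressive refinement (a hypothetical illustration with invented item names, not TEMPEST's actual interface), in which the next survey question is chosen based on the answers already collected:

```typescript
// Hypothetical sketch of progressively refined querying: the next survey
// item depends on the answers gathered so far. Item ids and the branching
// rules are illustrative assumptions.

type Answers = Map<string, string>;

interface SurveyItem {
  id: string;
  prompt: string;
  // Only ask this item when the predicate over prior answers holds.
  askWhen: (answers: Answers) => boolean;
}

const items: SurveyItem[] = [
  {
    id: "used_animation",
    prompt: "Did you create an animation this week?",
    askWhen: () => true,                                   // always ask first
  },
  {
    id: "animation_purpose",
    prompt: "What did you use the animation for?",
    askWhen: (a) => a.get("used_animation") === "yes",     // refine on a "yes"
  },
  {
    id: "barrier",
    prompt: "What kept you from creating one?",
    askWhen: (a) => a.get("used_animation") === "no",      // refine on a "no"
  },
];

// Return the first unanswered item whose condition is satisfied.
function nextItem(answers: Answers): SurveyItem | undefined {
  return items.find((item) => !answers.has(item.id) && item.askWhen(answers));
}

// Example: after a "yes", the follow-up probes actual use.
const answers: Answers = new Map([["used_animation", "yes"]]);
console.log(nextItem(answers)?.prompt);   // "What did you use the animation for?"
```

The same mechanism extends naturally to usage data: a predicate could, for instance, trigger a question only after the logs show a feature being used for the first time.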
