Large Scale User Trials: Research Challenges and Adaptive Evaluation

Scott Sherwood, Stuart Reeves, Julie Maitland, Alistair Morrison, Matthew Chalmers
DOI: 10.4018/978-1-60960-499-8.ch008

Abstract

The authors present a reflection on a series of studies of ubiquitous computing systems in which the process of evaluation evolved over time to account for the increasing difficulties inherent in assessing systems ‘in the wild’. Ubiquitous systems are typically designed to be embedded in users’ everyday lives; however, without knowing the ways in which people will appropriate the systems for use, it is often infeasible to identify a predetermined set of evaluation criteria that will capture the process of integration and appropriation. Based on the authors’ experiences, which became successively more distributed in time and space, they suggest that evaluation should become adaptive in order to more effectively study the emergent uses of ubiquitous computing systems over time.

Introduction

When working with ubiquitous computing (Ubicomp) systems, challenges and rewards arise from moving from the relative safety of the usability lab into the uncontrolled environment of everyday life. For example, unpredicted contexts of use and environmental features such as intermittent network connectivity may challenge traditional evaluation methods, and yet we gain the mobility, contextuality and appropriation that let users take full advantage of new mobile devices. As Carter and Mankoff (2007) put it, “Ubicomp systems [are] more difficult to evaluate than desktop applications. This difficulty is due to issues like scale and a tendency to apply Ubicomp in ongoing, daily life settings unlike task and work oriented desktop systems.” Many of these challenges have already been faced by researchers studying the use (rather than usability) of Ubicomp technologies in the wild. Observational techniques founded in ethnography may be well suited in principle but in practice are often hampered because of the difficulty of actually observing users’ activities. Small devices such as mobile phones and PDAs can easily be occluded from view, and people’s use may be intimately related to and influenced by the activity of others far away (Crabtree et al., 2006).

In this chapter, we reflect on our studies of four mobile multiplayer games: Treasure (Barkhuus et al., 2005), Feeding Yoshi (Bell et al., 2006), and Ego and Hungry Yoshi (McMillan et al., 2010), and two everyday awareness applications: Shakra (Maitland et al., 2006) and Connecto (Barkhuus et al., 2008). The development of these systems has spanned the last seven years, with user experience design and evaluation techniques evolving over this time. We show a progression from early trials lasting around a quarter of an hour and taking place within a specific confined area, to trials months or years in length (indeed, often without a specified end date) that explore users’ integration of technology into their everyday lives. Studying system use over longer periods of time and in less constrained settings provides greater opportunity for witnessing unanticipated behaviour as users take ownership of the system, but can leave the evaluator more detached from the trial. Additionally, while many have studied the effects that uncertainty in positioning accuracy and network connectivity have on the user experience (e.g., Crabtree et al., 2004), the impact these factors have on evaluators is not usually explicitly acknowledged.

Here we discuss the strategies that we, as evaluators, employed to discover participants’ reactions to and experiences with these systems. The studies are presented chronologically, as the challenges faced in one study often influenced the design and evaluation of subsequent systems. We suggest methods for keeping evaluators informed of activity during a trial that may run over an extended period and across a wide geographical area, and we argue that such information is crucial to adapting an ongoing evaluation through evaluators’ continual involvement with it or, in more extreme cases, immersion in it. Such adaptation can inform and improve both ongoing and post-hoc analysis. To conclude the chapter, we discuss the temporal and geographic scale of each study as contributing factors to the complexity of running such studies, and of gathering and interpreting evaluation data.
