Instrumented Usability Analysis for Mobile Devices

Andrew Crossan, Roderick Murray-Smith, Stephen Brewster, Bojan Musizza
DOI: 10.4018/978-1-60960-499-8.ch001

Abstract

Instrumented usability analysis involves the use of sensors during a usability study to provide observations from which the evaluator can infer details of the context of use, specific activities, or disturbances. This is particularly useful for the evaluation of mobile and wearable devices, which are currently difficult to test realistically without constraining users in unnatural ways. To illustrate the benefits of such an approach, we present a study of touch-screen selection of on-screen targets, whilst walking and sitting, using a PocketPC instrumented with an accelerometer. From the accelerometer data the user’s gait behaviour is inferred, allowing us to link targeting performance to gait phase angle and showing that there were phase regions with significantly lower error and variability. The article provides examples of how data acquired via sensors give quantitatively measurable information about the detailed interactions taking place when mobile, allowing designers to test and revise design decisions based on realistic user activity.
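One way to derive a continuous gait phase angle from accelerometer data is to take the analytic signal of the band-passed vertical acceleration; this is only an illustrative sketch, not necessarily the method used in the study, and the sampling rate, band-pass range, and function name are assumptions.

```python
# Hypothetical sketch: estimating a gait phase angle from vertical acceleration.
# Sampling rate and band-pass limits are illustrative assumptions, not values
# taken from the study described above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gait_phase(vertical_accel, fs=100.0, low_hz=0.5, high_hz=3.0):
    """Return an instantaneous gait phase angle (radians) per sample.

    vertical_accel : 1-D array of vertical acceleration recorded while walking.
    fs             : sampling rate in Hz (assumed).
    """
    # Band-pass around typical step frequencies to isolate the gait oscillation.
    b, a = butter(2, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    gait = filtfilt(b, a, vertical_accel)
    # The angle of the analytic signal gives a continuous phase estimate,
    # so each screen tap can be labelled with the gait phase at which it occurred.
    return np.angle(hilbert(gait))
```

Tap timestamps could then be binned by the phase angle at which they occurred, allowing per-phase error rates and variability to be computed in the manner the abstract describes.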

Introduction

Mobile and wearable devices are becoming increasingly important in our daily lives, and there is correspondingly intense activity in the design of interaction for these devices. It is obviously very important to be able to evaluate their usability, but by their very nature these devices are intended for use in mobile settings, not for use by someone seated in a usability lab.

As described by Kjeldskov and Stage (2004), there is a wealth of guidelines for running laboratory-based usability studies, but such studies lack realism for mobile devices. To test mobile devices in mobile settings, however, we are required to use field-based evaluations, which are far from straightforward to implement. Kjeldskov and Stage’s review of the literature points out three difficulties: (1) it is difficult to define a study that captures the use-scenario, (2) it is hard to use many established evaluation techniques, and (3) field evaluations complicate data collection and limit experimental control. Researchers have proposed additional measures, such as distance walked and percentage preferred walking speed, to assess usability, combined with qualitative questions and manual recording of walking pace (Brewster, 2002; Petrie, Furner, & Strothotte, 1998; Pirhonen, Brewster, & Holguin, 2002). Mizobuchi, Chignell, and Newton (2005) examine the effect of key size on handheld devices used while walking.

Barnard, Yi, Jacko, and Sears (2005) review the differences between desktop and mobile computing, and observe that, for researchers aiming to isolate the effects of motion from other contaminants, the idea of such uncontrolled studies can be daunting. Control is critical for empirical data-collection methods that employ the scientific method.

Roto et al. (2004) discuss the use of quasi-experimentation, based on the best possible control over nuisance variables, coupled with recordings of the user, their interaction with the device, and the surrounding environment. The innovation in their recordings was the use of multiple cameras worn around the user’s body and attached above the screen of the mobile device. This makes the recording process obtrusive and might change both the user’s behaviour and that of people in the environment around them. It is also time-consuming to analyse after the experiment. This recording arrangement has been used successfully by Oulasvirta, Tamminen, Roto, and Kuorelahti (2005) to investigate the fragmentation of attention in mobile interaction.


Instrumented Usability Analysis

Here, we define ‘instrumented usability analysis’ as the use of sensors during a usability study to provide observations from which the evaluator can infer details of the context of use, specific activities, or disturbances.

Sensors such as accelerometers, magnetometers, and GPS receivers have been added to mobile devices and are now mass-produced in mobile phones. They have been included to inform the user (about location or the number of steps taken) or to give the user novel input mechanisms, such as gesture recognition or input for game playing.

There are many examples of both prototype and commercially available sensors and sensor packs for motion or context sensing. Fishkin, Jiang, Philipose, and Roy (2004) describe a system for detecting interactions with RFID technology and suggest it can be used to infer user movement by examining signal strengths from a sensor network. Gemmell, Williams, Wood, Lueder, and Bell (2004) describe the SenseCam system, used to capture life experiences without having to operate complex recording equipment. SenseCam combines a camera with a group of sensors, including an accelerometer, infrared, light and temperature sensors, and a clock, to automatically detect, photograph, and map out changes in context or events during a person’s day. Kern and Schiele (2003) describe a hardware platform combining multiple wearable accelerometers in order to infer the user’s context and actions. They demonstrate how these acceleration signals can be used to classify user activity into actions such as sitting, standing, walking, shaking hands, and typing.
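To make the idea of inferring activity from acceleration signals concrete, the sketch below shows a minimal, hypothetical windowed classifier in the spirit of the systems cited above; the window length, feature, and threshold are illustrative assumptions rather than values from any of those papers.

```python
# Minimal, hypothetical sketch of accelerometer-based activity labelling.
# Window length, sampling rate, and threshold are illustrative assumptions.
import numpy as np

def classify_windows(accel_mag, fs=50.0, window_s=2.0, std_threshold=0.05):
    """Label fixed-length windows of acceleration magnitude as 'still' or 'walking'.

    accel_mag : 1-D array of acceleration magnitude in g.
    fs        : sampling rate in Hz (assumed).
    """
    n = int(fs * window_s)
    labels = []
    for start in range(0, len(accel_mag) - n + 1, n):
        window = accel_mag[start:start + n]
        # Walking produces strong periodic variation in acceleration magnitude,
        # whereas sitting or standing yields a near-constant signal.
        if np.std(window) > std_threshold:   # threshold in g, an assumption
            labels.append("walking")
        else:
            labels.append("still")
    return labels
```

In a usability study, labels of this kind could be time-aligned with interaction logs so that errors and selection times are attributed to the activity in progress when they occurred.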
