Going DEEP: Public, Iterative Release as a Mobile Research Strategy

Andrew Dekker (University of Queensland, Australia), Justin Marrington (University of Queensland, Australia) and Stephen Viller (University of Queensland, Australia)
Copyright: © 2013 | Pages: 17
DOI: 10.4018/978-1-4666-4054-2.ch001

Abstract

Unlike traditional forms of Human-Computer Interaction research (such as desktop or Web-based design), mobile design by its nature offers little control over the contextual variables of its research. Short-term evaluations of novel mobile interaction techniques are abundant, but these controlled studies address only limited contexts through artificial deployments, which cannot hope to reveal the patterns of use that arise as people appropriate a tool and take it with them into the varying social and physical contexts of their lives. The authors propose a rapid and reflective model of in-situ deployment of high-fidelity prototypes, borrowing the tested habits of industry, in which researchers relinquish tight control over their prototypes in exchange for an opportunity to observe patterns of use that would be intractable to plan for in controlled studies. The approach shifts the emphasis in prototyping away from evaluation and towards exploration and reflection, promoting an iterative prototyping methodology that captures the complexities of the real world.
Chapter Preview

Introduction

The field of Human-Computer Interaction (HCI) is often seen as being in a perpetual state of identity crisis. To a point, this is to be expected. Contributors to its shared body of research come from a startlingly diverse set of backgrounds: software engineering, operating system development, ethnography, sociology, cognitive science, the arts, design, journalism, and media theory, each authoring important works in our pantheon and each bringing its own perspective to the field.

In 2009, long-time SIGCHI contributor James Landay published a frustrated missive on what he saw as the fundamentally incorrect approaches of reviewers for the CHI (ACM SIGCHI Conference on Human Factors in Computing Systems) and UIST (ACM Symposium on User Interface Software and Technology) conferences (Landay, 2009). Landay asserted that “systems work” (that is, design research that involves designing, building, and releasing actual software) is significantly harder to publish than short, artificial deployments aimed directly at evaluation. The unproductive focus on evaluation as the yardstick of research success, he said, was a strong disincentive to do real work, since more significant work requires more exhaustive evaluation. One commenter summed up the problem perfectly: “I encounter more innovation scanning Techmeme these days than I do at the average conference.”

Landay’s post echoed a familiar cry: in recent years, many well-established academics in the field have expressed concern that a fatal combination of the pressure to publish and an obsession with evaluation leads too many graduate students towards trivial deployments and away from actual innovation. When examining user evaluation methods, Buxton and Greenberg invoked the classical “considered harmful” maxim against applying usability evaluation thoughtlessly, and blasted the conferences for explicitly including “evaluation validity” as a guideline for publication (Buxton & Greenberg, 2009). Lieberman railed against the “tyranny” of evaluation: “There is no ISO standard human,” he said, and trying to design studies around that assumption has given user interface research a “bad case of physics envy” (Lieberman, 2003). At UIST, Olsen spoke plainly: without the added context of a real system, a usability evaluation is a trap that reduces all interactions to standardized problems of minimal complexity and scale (Olsen, 2007).

This methodological miasma is nowhere more apparent than in the design of mobile applications. It is difficult enough to bridge the gap between simulated and real interaction within the well-defined boundaries of desktop or workplace contexts. As with other areas of ubiquitous computing, mobile applications must also contend with the additional complexity of a perpetually shifting context, coupled with the difficult-to-predict relationship between their interaction patterns and the shifting reliability of the actors that drive them. It would be difficult to describe a design problem less suited to artificial deployment, yet when we examine mobile-focused publications we find the traditional evaluation methods in use: an emphasis on getting to the evaluation, often a misdirected one, as quickly as possible.

As mobile designers and developers, we are uniquely situated to demonstrate the plausibility of real systems work as an effective research approach. Mobile applications are best when they are limited in scope (the modal nature of mobile operating systems demands that each application do one thing well), so their deployments are easier to manage than those of complex systems. Furthermore, as researchers we can make use of the same high-level toolsets and distribution platforms that have allowed self-taught mobile developers to push their ideas to millions of users.

The focus of this chapter is threefold: first, to demonstrate why artificial evaluations are more of an issue in mobile HCI than in other, more definable contexts; second, to identify what must go into a piece of mobile design research to give it a chance of being both innovative and rigorous; and finally, to propose a model for mobile HCI research that uses early deployment as a facilitator for more traditional design methods, enabling them to go further and explore the subtle problems that can only be revealed by actual, uncontrolled use.
