Usability Evaluation Meets Design: The Case of bisco Office™

Judith Symonds
DOI: 10.4018/978-1-60566-040-0.ch010

Abstract

Usability evaluation methods (UEMs) are plentiful in the literature. However, there appears to be new interest in usability testing from the viewpoint of the industry practitioner, and a renewed effort to reflect usability design principles throughout the software development process. In this chapter we examine one such example of usability testing from the practitioner's viewpoint, reflect upon how usability evaluation methods are perceived by the software developers of a content-driven system, and discuss some benefits that can be derived from bringing together usability theory and the usability evaluation protocols used by practitioners. In particular, we use the simulated prototyping method and the "Talk Aloud" protocol to assist a small software development company to undertake usability testing. We identify some issues that arise from usability testing from the perspectives of the researchers and the practitioners, and discuss our understanding of the knowledge transfer that occurs between the two.
Chapter Preview

Usability Evaluation Methods

Usability is a multidisciplinary field that falls under the larger umbrella of Human-Computer Interaction (HCI), where there are essentially two camps: design and evaluation (Wania, Atwood, & McCain, 2006). In their bibliographic citation analysis of the HCI literature, Wania et al. (2006) suggest that the two camps can learn from each other, implying that they are separate and non-inclusive. But in industry, are usability design and evaluation really so far apart? Even relatively early research (Sullivan, 1989) called for usability to be considered throughout the entire software design process and suggested that considering only usability evaluation or testing is a "narrow conception" of usability. More recently, a study of Dutch IT companies found considerable variation, across the software industry and the IT industry in general, in whether usability is considered throughout the development process; often it is addressed only in the latter stages (Gemser, Jacobs, & Ten Cate, 2006). The same study showed that developers of content-driven systems were more likely to consider usability throughout the whole design process, which the authors attribute to the amount of influence customers have over the design process.

Hartson, Andre, and Williges (2003) conducted a thorough review of usability evaluation methods (UEMs), going so far as to establish criteria for selecting the most suitable UEM and to provide guidance on the number of tests needed to ensure the reliability of the process. Carter (2007) advocates the simplicity of usability testing and suggests that the academic community has become too wrapped up in the protocols of UEMs and has lost sight of the usefulness of usability testing. The expectations of the practitioner and the usability expert differ. Certainly, away from the theory, there is interest in how usability testing can be undertaken in the field (Waterson, Landay, & Matthews, 2002).
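
As an illustration of the kind of guidance Hartson et al. (2003) discuss, the problem-discovery model widely cited in this literature (Nielsen & Landauer, 1993) estimates the cumulative proportion of usability problems found after $n$ test sessions as

$$\mathrm{Found}(n) = 1 - (1 - p)^{n},$$

where $p$ is the average probability that a single session uncovers any given problem. This example is drawn from the wider usability literature rather than from the chapter itself. With a typical value of $p \approx 0.3$, five participants would be expected to uncover about $1 - 0.7^{5} \approx 83\%$ of problems, which is the basis of the often-quoted "five users" rule of thumb.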

The three most widely used, robust, easy-to-use UEMs are cognitive walkthrough (CW), heuristic evaluation (HE), and thinking aloud (TA) (Hertzum & Jacobsen, 2003). The thinking (or talking) aloud verbal protocol analysis has been widely used, for example, in health care Web site design (Zimmerman, Akerelrea, Buller, Hau, & Leblanc, 2003) and in software development (Hohmann, 2003). Roberts and Fels (2006) extend the think-aloud protocol to include procedures to follow when the participant is deaf. Krahmer and Ummelen (2004), in their comparison of the think-aloud protocols developed by Ericsson and Simon (1998) and Boren and Ramey (2000), argue that the correct use of a UEM is important for two reasons. First, if UEMs used in practice are not reliable and valid, it becomes difficult to compare and replicate studies, and therefore to redesign or improve previous versions. This reason, however, pales into insignificance beside the second: if UEMs used in practice are not reliable and valid, it becomes difficult to distinguish problems signalled by the test user from those that might be evoked by the test setting or other intervening factors. Given these implications of incorrect use of the protocols, and a reported lack of understanding between the two groups, we wanted to understand the knowledge transfer process better.
