Comparing Two Playability Heuristic Sets with Expert Review Method: A Case Study of Mobile Game Evaluation

Janne Paavilainen, Hannu Korhonen, Hannamari Saarenpää
Copyright: © 2012 | Pages: 24
DOI: 10.4018/978-1-60960-774-6.ch002

Abstract

The expert review method is a widely adopted usability inspection method for evaluating productivity software. Recently, there has been increasing interest in applying this method to the evaluation of video games as well. To use the method effectively, playability heuristics that take the characteristics of video games into account are needed. Several playability heuristic sets are available, but they differ substantially, and they have not been compared to discover their strengths and weaknesses in game evaluations. In this chapter, we report on a study comparing two playability heuristic sets in evaluating the playability of a video game. The results indicate that the heuristics can assist inspectors in evaluating both the user interface and the gameplay aspects of the game. However, playability heuristics need to be developed further before they can be utilized by practitioners. In particular, the clarity and comprehensibility of the heuristics need to be improved, and the optimal number of heuristics remains an open question.
Chapter Preview

Introduction

Competition in the game industry is fierce, and the gaming experience has become a crucial factor in differentiating similar kinds of game titles. If a game is not enjoyable to play, players can easily switch to another game. Typically, the gaming experience can be evaluated only after a working prototype has been implemented and is ready for beta testing. At that point, correcting playability problems (e.g., complex UI navigation, unclear goals, or an incorrectly set challenge level or pace) is often too expensive, or the project schedule does not allow any delays for marketing reasons. As a result, there is a need for an evaluation method that can identify these playability problems before beta testing starts and thus provide time for corrections.

Productivity software has been evaluated for years with the expert review method to find usability problems in the design and implementation (Nielsen and Molich, 1990). In the expert review method, a small group of experts evaluates a product against a set of heuristics. Heuristics are guidelines, rule-of-thumb statements that reflect the desirable aspects of a given product. The method is cost-efficient and effective, and the design can be evaluated even in early project stages. A skillful and knowledgeable usability expert can identify usability problems as accurately as user testing (Molich and Dumas, 2008). Evaluating games with this method is a tempting idea, but traditional usability heuristics cannot be applied directly (Federoff, 2002; Desurvire et al., 2004; Korhonen and Koivisto, 2006).

The design objectives of productivity software and games differ, and evaluation methods need to recognize this divergence before they can be applied effectively to the domain of games. Pagulayan et al. (2008) describe these differences: productivity software is a tool, and its design intention is to make tasks easier, more efficient, and less error-prone, and to increase the quality of the results. Games, instead, are intended to be pleasurable to play and sufficiently challenging (Pagulayan et al., 2008). Because of these differences, a set of specifically designed heuristics is needed when video games are evaluated with the expert review method.

Playability has received little attention from game researchers and HCI researchers. The research community lacks a commonly agreed-upon definition of playability that would describe the important issues influencing the game experience and guide research work. Egenfeldt-Nielsen et al. (2008) state that a game has good playability when it is easy to use, fun, and challenging. Järvinen et al. (2002) have defined playability as an evaluation tool consisting of four components: 1) functional, 2) structural, 3) audiovisual, and 4) social playability. These components can be used to evaluate both the formal and the informal aspects of a game. Fabricatore et al. (2002) have defined playability in action games as the possibility of understanding and controlling gameplay. In addition, they state that poor playability cannot be balanced out or replaced by the non-functional aspects of the design. According to a usability glossary1, playability is affected by the quality of several aspects, including storyline, controls, pace, and usability. Alongside academia, the game industry has also approached playability from a practical perspective. For example, Games User Research at Microsoft Game Studios has published several empirical papers on usability, playability, and user experience in video games2.
