Introduction
Expertise studies have long been an important tool for investigating how individuals develop skills in a given domain. A common approach to studying skill acquisition is to examine individuals identified as being at the “top” of their field (Bloom, 1985). However, gaining a clear sense of how an expert accomplishes a task is a challenge. Experts are not always consciously aware of all the knowledge and skills they use within their domain, making self-reports of expert technique somewhat unreliable (Feldon, 2007; Sullivan, Yates, Inaba, Lam & Clark, 2014). Eye-tracking is a methodology that has been used to examine expertise with regard to visual processing in a variety of domains, such as medicine (Kundel, Nodine, Conant & Weinstein, 2007), games (Almeida, Veloso, Roque & Mealha, 2011), sports (North, Williams, Hodges, Ward & Ericsson, 2009) and aviation (Kasarkis, Stehwien, Hickox & Aretz, 2001). Because eye-tracking reveals exactly where participants are looking, it removes some of the ambiguity about what experts are actually doing at a given moment in time. In games research, proprietary in-game metrics of performance provide another quantitative measure of an individual’s performance relative to other players. Often, these metrics involve complex algorithms that encompass far more than a measure of time played, since total experience is a component of, but not necessarily a sufficient indicator for, expert performance (Ericsson & Lehmann, 1996; Gegenfurtner, Lehtinen & Saljo, 2011; Feltovich, Prietula & Ericsson, 2006). Utilizing methods that are not solely based on self-reports but also include objective measures of skill may aid in better understanding how expertise develops (Tan, Leong & Shen, 2014). Such a mixed-method approach, as applied in this study, involving game metrics, eye-tracking and in-person interviews, can provide a range of measures through which to study expertise.
Games have long been used to study skill acquisition and performance among novices and experts. Early work in chess by Chase and Simon (1973) and de Groot (1978) laid the groundwork for games and expertise studies by demonstrating novice-expert differences in a highly trackable environment. Ericsson and Lehmann (1996) proposed a commonly used definition of expert performance as “consistently superior performance on a specified set of representative tasks for a domain,” and that is how we will operationally use the term here. Ericsson and Smith (1991) highlighted the need to design studies that observe experts in a controlled setting, analyze their cognitive processes, and propose explicit learning mechanisms for skill acquisition. The introduction of computerized games, chess and others, enables us to more specifically capture some of the elements Ericsson suggests and thereby better understand expertise within complex systems. In addition, since 97% of teens and 49% of adults play video games (Duggan, 2015; Lenhart, Kahne, Middaugh, Macgill, et al., 2008), research in this area is socially and developmentally relevant (Greenfield, DeWinstanley, Kilpatrick, & Kaye, 1994). Video games provide a fertile environment in which to study the development of expertise in a domain that has significant cultural capital. Greenfield et al. (1994) have argued that video games are an excellent platform for studying skill acquisition because they involve goal-oriented behavior as well as instantaneous feedback.