Theoretical Framework
The potential of video games to support science learning is generally agreed upon (Gee, 2007; Mayo, 2009; Squire et al., 2003), but the analysis and structuring of evidence for game-based learning remains a challenge. This, in turn, has supported a mixed view of the effectiveness of games as tools for learning (Foster & Mishra, 2008; O'Neil, Wainess, & Baker, 2005). We believe, however, that this conclusion may be premature. The past fifteen years have seen great advances both in the sophistication of game designs and in the supporting technology; there simply has not been enough time for a commensurate evolution in appropriate research methods. One central methodological difficulty involves capturing and measuring game-induced learning, which tends to be strongly situated within the game context, using out-of-game instruments such as post-tests. More advanced game designs compound this problem by supporting complex player actions that are challenging for learners to summarize and express, difficult for instruments to reliably capture, and resistant to conventional analytical methods. In addition, the use of formal assessments alongside games can compromise a game's capacity for engagement and immersion, thus potentially reducing the efficacy of both the learning experience and the assessment.
Using assessments that reside outside a game to measure learning that happens inside it presents issues and vulnerabilities that merit careful consideration. Assessment is, after all, not a neutral activity. All assessments carry assumptions about the nature of learning, the nature of knowledge, and the purpose of assessment itself (Willis, 1993). The act of assessment places a premium on certain forms of knowing and understanding while de-emphasizing others. In the case of games for learning science, for example, an assessment may privilege declarative forms of knowledge, e.g., definitions and abstract principles, while the game itself might be more productive in reinforcing tacit knowledge or qualitative understanding of relationships. This insight becomes even more salient given the contrast between different types of games for learning: those in which the curriculum concepts are embedded in a game environment that serves mainly as context ("conceptually-embedded" games) and those in which the material to be learned is integrated into the core game-play mechanics with which the player is in constant interaction ("conceptually-integrated" games) (Martinez-Garza, Clark, & Nelson, 2012). It follows that these two kinds of games would favor different assessment strategies, given the differences in how they engage the learner, how they gauge success in the game, and how they represent knowledge. These nuances are not necessarily well captured by traditional assessments of learning, which favor summative declarations of concepts articulated in discipline-specific forms and language (Sutton, 1996; Fang, Lamme, & Pringle, 2010).