Study Design and Data Gathering Guide for Serious Games’ Evaluation

Jannicke Baalsrud Hauge (Bremer Institut für Produktion und Logistik (BIBA), Germany), Elizabeth Boyle (University of the West of Scotland, UK), Igor Mayer (Delft University of Technology, The Netherlands), Rob Nadolski (Open University of The Netherlands, The Netherlands), Johann C. K. H. Riedel (Nottingham University, UK), Pablo Moreno-Ger (Universidad Complutense de Madrid, Spain), Francesco Bellotti (Università degli Studi di Genova, Italy), Theodore Lim (Heriot-Watt University, UK) and James Ritchie (Heriot-Watt University, UK)
Copyright: © 2014 |Pages: 26
DOI: 10.4018/978-1-4666-4773-2.ch018

Abstract

The objective of this chapter is to provide an overview of the different methods that can be used to evaluate the learning outcomes of serious games. These include Randomised Control Trials (RCT), quasi-experimental designs, and surveys. Case studies of a selection of serious games developed for use in higher education are then presented along with evaluations of these games. The evaluations illustrate the different evaluation methods, along with an assessment of how well the evaluation method performed. Finally, the chapter discusses the lessons learned and compares the experiences with the evaluation methods and their transferability to other games.
Chapter Preview

Evaluation Methods for Serious Game Learning Outcomes

The evaluation of games is complex and multidimensional since it involves evaluation not just of whether there is an improvement in performance on the targeted learning outcomes, but also evaluation of the user acceptance of, engagement with, and satisfaction with the game. The introduction of a serious game into the curriculum raises similar issues to any other educational intervention, since the aim of a game is to improve performance on a specific learning outcome. Woolfson (2011) proposes a hierarchy of evidence for evaluating educational interventions:

  1. Meta-analyses
  2. Randomised controlled trials (RCT)
  3. Quasi-experimental designs
  4. Single-case experimental designs (pre- and post-test)
  5. Non-experimental designs (surveys, correlational, qualitative)

Meta-Analyses: At the top of the hierarchy of evidence for the effectiveness of interventions are meta-analyses. A meta-analysis combines the results of previous studies to identify patterns in research findings, in particular whether games are effective methods for learning. Meta-analysis requires a reasonable number of comparable empirical studies as input; serious games research has yet to produce enough such studies, so meta-analysis is not covered further in this chapter.
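To make the idea of combining studies concrete, the sketch below (using hypothetical effect sizes, not data from the chapter) shows the standard fixed-effect approach: each study's effect size is weighted by the inverse of its variance, so more precise studies contribute more to the pooled estimate.

```python
# Illustrative sketch (hypothetical numbers): fixed-effect meta-analysis
# that pools standardised effect sizes (e.g. Cohen's d) by inverse variance.

def pool_fixed_effect(effects, variances):
    """Return the inverse-variance-weighted pooled effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # precision of the pooled estimate
    return pooled, pooled_var

# Hypothetical effect sizes from three game-based learning studies
effects = [0.40, 0.25, 0.55]
variances = [0.02, 0.05, 0.04]

pooled, pooled_var = pool_fixed_effect(effects, variances)
```

The pooled estimate always lies between the smallest and largest study effects, and its variance is smaller than that of any single study, which is exactly why a meta-analysis is placed above individual trials in the hierarchy.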

Randomised Control Trials (RCT): The Randomised Control Trial (RCT) is considered the gold standard for evaluating educational interventions. In an RCT, participants are randomly allocated to an experimental (game) group or a control (non-game) group, and their performance on the target skill or behaviour is tested before and after the game intervention. Ideally, pre-testing should confirm that there is no existing difference between the groups, while post-testing should show whether the experimental group performs better than the control group. Sustained improvement in the target skill or behaviour for the experimental group relative to the control group in a follow-up study would provide further confirmation that the intervention was successful.
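The RCT procedure described above can be sketched in a few lines: randomly allocate participants to the two groups, record pre- and post-test scores, and compare the mean gain between groups. The participant labels and scores below are hypothetical placeholders, not data from any study in the chapter.

```python
# Illustrative RCT sketch (hypothetical data): random allocation to
# game/control groups, then comparison of mean pre-to-post gains.
import random
from statistics import mean

def allocate(participants, seed=1):
    """Randomly split participants into experimental (game) and control groups."""
    rng = random.Random(seed)       # fixed seed so the split is reproducible
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

def mean_gain(pre, post):
    """Average pre-test to post-test improvement for one group."""
    return mean(b - a for a, b in zip(pre, post))

participants = [f"P{i:02d}" for i in range(20)]
game_group, control_group = allocate(participants)

# Hypothetical test scores (0-100) before and after the intervention
game_pre, game_post = [55] * 10, [70] * 10
ctrl_pre, ctrl_post = [56] * 10, [60] * 10

# Difference in mean gains: the simplest estimate of the intervention effect
effect = mean_gain(game_pre, game_post) - mean_gain(ctrl_pre, ctrl_post)
```

In a real evaluation the gain comparison would be accompanied by an inferential test (e.g. a t-test or ANCOVA on post-test scores with pre-test as covariate), but the random allocation step is what distinguishes an RCT from the quasi-experimental designs lower in the hierarchy.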
