Gameful interventions (including serious games and gamification) are a popular tool for motivating and engaging users towards improved behavioural outcomes. However, such interventions often fail due to poor design, specifically a fundamental lack of understanding of the audience and the required behavioural outcomes, and the consequent uninformed selection of potentially inappropriate game elements by designers. This chapter explores the behaviour change wheel (BCW) method as a tool to augment gameful intervention design and to select appropriate game elements for actioning gameful intervention strategies. This exploration is undertaken in the context of developing a gameful intervention targeted toward energy conservation. Within this context, the BCW is shown to assist the designers in understanding the audience and the intervention’s behavioural outcomes, leading to a theoretically informed and rigorous selection of game elements that better support the achievement of the targeted behavioural outcomes.
Introduction
Gameful interventions (including gamification and serious games) employ game elements such as challenges, narrative, goals, and badges to reward and incentivise players, engaging them in a playful way to sustain a desired target behaviour. As such, gameful interventions are a form of persuasive technology that can serve as a powerful tool for encouraging positive behaviour in fields as diverse as defence, health, education, and corporate training (Larson, 2020; Sipiyaruk et al., 2019; Rapp, 2019). The market for gameful interventions is predicted to grow by an impressive 32% to reach $40 billion by 2024 (TechSci Research, 2019; Xi & Hamari, 2019). Yet, despite the proliferation of games in many fields, the landscape of gameful technologies designed to persuade users to change is “riddled with the carcasses of failed projects” (Fogg, 2009). Research on gamification and serious games still faces a variety of empirical and theoretical challenges in understanding this contradiction (Rapp, 2019; de Salas et al., 2022).
Gameful interventions have been reported to improve student learning and attitudes (Bodnar et al., 2016; Chapman & Rich, 2018) and the productivity of disengaged employees (Oprescu et al., 2014), to increase user compliance with health interventions (Sardi et al., 2017), and to support the uptake of pro-environmental behaviours (Medema et al., 2019). However, Rapp (2019), in leading a special issue on gamification research, found that most studies are not evaluated empirically. De Salas et al.’s (2022) systematic review of gameful interventions targeted toward environmental outcomes confirms this finding: 17% of the studies reviewed included no evaluation, while the remaining interventions were evaluated against such a wide range of outcomes that comparability across studies was limited. Rapp (2019) further noted that the scarce empirical studies that do exist typically focus narrowly on evaluating and understanding individuals’ short-term interactions with the system, ignoring more difficult-to-measure outcomes.
It stands to reason, then, that these reportedly ‘successful’ implementations must be more rigorously explored with regard to their design, their evaluation, and their impact, and that gameful studies would benefit from wider use of theories to account for the complexity of human behaviour (Rapp, 2019; Derksen et al., 2020; de Salas et al., 2022). Indeed, Gartner Research asserts that 80% of gameful interventions fail to achieve their outcomes primarily due to poor design (Burke, 2014; Rapp, 2019). Recent reviews of gameful interventions highlight that central to this poor design is the lack of behavioural insight built into these interventions, as many did not seek to understand existing behaviour prior to the game’s design or to test the likelihood of a game changing an identified behaviour (Rapp, 2019; Derksen et al., 2020; de Salas et al., 2022). Indeed, Purwandari et al. (2019) and Ferreira-Brito et al. (2019) note that the justification for many of the gameful interventions included in their systematic reviews was merely “because others had used games in the past”, and that games were perceived as cost-effective and readily available, although no exploration or substantiation of these claims was made.
Furthermore, existing systematic reviews of gameful interventions show no evidence that the selection of game elements was mapped to evidence-based behaviour change techniques to ensure these would serve as ‘active ingredients’ in the intervention and achieve the targeted behaviour (Manzano-León et al., 2021; Ávila-Pesántez et al., 2017; Ferreira-Brito et al., 2019; Lopes et al., 2019; de Salas et al., 2022). As such, designers persist in selecting the elements that are most obvious and easiest to implement, such as points, badges, and leaderboards (Rapp, 2019; Valencia, 2019; Ferreira-Brito et al., 2019), rather than those mapped as likely to bring about a specific and targeted behavioural outcome (de Salas et al., 2022).