Can Video Gameplay Improve Undergraduates’ Problem-Solving Skills?

In this study, the authors investigated whether two distinct types of video gameplay improved undergraduates' problem-solving skills. Two groups of student participants were recruited to play either a roleplaying video game (World of Warcraft; experimental group) or a brain-training video game (CogniFit; control group). Participants were measured on their problem-solving skills before and after 20 hours of video gameplay. Two measures were used to assess problem-solving skills: the Tower of Hanoi and the PISA Problem Solving Test. The Tower of Hanoi measured the rule-application component of problem-solving skills, and the PISA Problem Solving Test measured transfer of problem-solving skills from video gameplay to novel scenarios on the test. No significant differences were found between the two groups on either problem-solving measure. Implications for future studies on game-based learning are discussed.


INTRODUCTION
Video games are played by more than half of the U.S. population, and the video game industry generated $36 billion in 2018 (ESA, 2018). Given the popularity and success of the video game industry, game-based scholars are exploring how well-designed video games can be used to improve a wide range of knowledge, skills, and abilities, an approach referred to as game-based learning (GBL). Proponents of GBL argue that well-designed video games are grounded by active participation and interaction as the focal point of the learner experience and can lead to changes in behavior and cognition (Ifenthaler, Eseryel, & Ge, 2012; Shute et al., 2019). Moreover, well-designed video games immerse players in environments that can provide a framework for learning experiences by promoting engagement and transfer from simulated worlds to the natural world (Dede, 2009).
Current American students are not receiving adequate exposure to authentic ill-structured problem-solving scenarios in their classrooms, and schools need to address the acquisition of problem-solving skills for students in the 21st century (Shute & Wang, 2016). American students trail their international counterparts in problem-solving skills on the Program for International Student Assessment (PISA) Problem Solving Test. Furthermore, American business leaders complain about recent college graduates' lack of problem-solving skills. Two surveys of business leaders and students conducted by the Association of American Colleges and Universities indicated that problem-solving skills are increasingly desirable for American employers, but only 38% of employers reported that recently hired American college graduates could analyze and solve complex problems while working (Hart Associates, 2018).
Researchers of video game studies find that gameplay can be positively associated with the improvement of problem-solving skills (Shute, Ventura, & Ke, 2015; Spires et al., 2011). However, current discourse in the field of gameplay and problem-solving skills centers primarily on descriptive research (Eseryel et al., 2014), which can be summarized by the following premise: video games require players to solve problems, and over time, playing video games will lead to improved problem-solving skills (Hung & Van Eck, 2010). Descriptive research is important for arguing that video games support problem-solving skills, but further empirical research is needed to demonstrate whether problem-solving skills are acquired through video gameplay. This research study addressed whether two distinct types of video gameplay empirically affect undergraduates' problem-solving skills.

Video Games and Problem-Solving Skills
According to Mayer and Wittrock's (2006) definition, problem solving has four central characteristics: (1) it occurs internally in the problem solver's cognitive system; (2) it is a process that involves conceptualizing and manipulating knowledge; (3) it is goal directed; and (4) it depends on the knowledge and skills of the problem solver, which determine how difficult the obstacles are to overcome to reach a solution. Unlike the well-structured problems that students face in formal learning settings, well-designed games provide students with challenging scenarios that promote problem-solving skills by requiring players to generate new knowledge within interactive environments, while also providing immersive gameplay that includes ongoing feedback for the players to hone their problem-solving skills over time (Van Eck, Shute, & Rieber, 2017). Rules govern video gameplay mechanics, and one component of problem solving is the ability to apply existing rules in the problem space, known as rule application (Shute et al., 2015). One example of rule application is found in the well-researched problem-solving puzzle the Tower of Hanoi (Huyck & Kreivenas, 2018; Schiff & Vakil, 2015; TOH, 2019). The rule-application component of problem-solving skill is one of the dependent variables in this study. Rule application refers to the problem solver's representation of the problem space through direct action, which is critical to problem solving (Van Eck et al., 2017).

Video Gameplay and Transfer
Researchers contend that the hidden power of well-designed video games is their potential to address higher-level learning, like retention, transfer, and problem-solving skills (Gee, 2008; Shute & Wang, 2015). Retention is the ability to remember presented information and correctly recall it when needed, while transfer is the ability to apply previously learned information in a novel situation (Stiller & Schworm, 2019). Outcomes of playing video games may include the improvement of collaborative problem-solving skills, confidence, and leadership skills that are transferable to the workforce environment. Recent research on video game training and transfer of cognitive and noncognitive skills indicates that gameplay is positively associated with the improvement of attention, problem-solving skills, and persistence (Green & Bavelier, 2012; Rowe et al., 2011; Shute et al., 2015; Ventura et al., 2013), as well as executive functions (Oei & Patterson, 2014) and hypothesis-testing strategies (Spires et al., 2011). However, other researchers have found null effects of video gameplay on transfer of cognitive skills (Ackerman et al., 2010; Baniqued, Kranz, et al., 2013; Boot et al., 2008).
A recent meta-analysis of brain-training interventions found that such interventions can improve performance on trained tasks, but there were fewer examples of interventions improving performance on closely related tasks, and minimal evidence that training enhances daily cognitive abilities (Simons et al., 2016). Among those finding null effects, questions were raised about methodological shortcomings of video game training and transfer studies that are common pitfalls in experimental trials. These pitfalls included failing to report the full methods used in a study and lacking an effective active control condition that can be expected to show improvement in competencies similar to the experimental group (Baniqued et al., 2013; Boot, 2015; Boot, Blakely, & Simons, 2011). Unless researchers define recruitment methods for participants and their gaming expertise (novice vs. expert), and compare active control groups with experimental groups receiving equivalent training games, differential improvement is indeterminable (Boot et al., 2013; Shute et al., 2015). The recruitment approach is outlined in the Method section.

Motivation for Selection of Games
The video games selected for this research study were chosen because the problem-solving skills players exercise and acquire through gameplay align with the problem-solving skills assessed on the external measures, the PISA Problem Solving Test and the Tower of Hanoi (TOH). Well-designed video games embed sound learning principles within gameplay, such as requiring players to solve complex problems that can then be applied to other learning contexts (Lieberman et al., 2014). In this study, the authors examined the effects of playing World of Warcraft (Activision Blizzard, 2019) and CogniFit (CogniFit, 2019) for twenty hours on undergraduates' problem-solving skills (rule application and problem-solving transfer). The inclusion of CogniFit addresses a main concern of game-based research, the lack of an active control condition to determine differential improvement (Boot et al., 2013).

Problem-Solving and Video Gameplay Model
The authors identified observable in-game behaviors (i.e., indicators) during gameplay that provide evidence for each of the problem-solving processes on the PISA Problem Solving Test. The process included playing each video game extensively, checking community forums for solutions to the most challenging problems in each game, and viewing experts' gameplay streams on YouTube. After generating a list of credible indicators, indicators were selected based on two criteria: (a) relevance to the PISA problem-solving levels of proficiency and (b) verifiability through gameplay mechanics. Examples of indicators for the PISA problem-solving processes for each game are listed in Tables 1 and 2. The purpose of developing the problem-solving behavior model is to operationalize the indicators of gameplay that align with the cognitive processes assessed on the PISA test (i.e., Exploring and Understanding, Representing and Formulating). The PISA Problem Solving Test contains questions representing six levels of proficiency: Level 1 is the most limited form of problem-solving ability, such as rule application (solving problems with simple rules or constraints), and Level 6 is the most complex form of problem-solving ability (executing strategies and developing mental models to solve problems). The PISA test will determine whether there is transfer of problem-solving skills from video gameplay to novel scenarios.

World of Warcraft
Massive multiplayer online role-playing games (MMORPGs) require players to manage resources, adapt their playstyle to the environment, test new skills and abilities, identify and apply rules to solve problems, and explore the story of the game through questing. MMORPGs like Warcraft provide gameplay experiences that are analogous to meaningful instruction by offering complex, multifaceted problems that require model-based reasoning: understanding the interrelated components of a system, and the feedback mechanisms among those components, to find the best solutions to problems that arise using the available tools and resources in a given environment (Chinn & Malhotra, 2002; Steinkuehler & Chmiel, 2006). Therefore, if MMORPGs provide an authentic sense of inquiry into solving problems through gameplay, then it is worth testing whether these gameplay experiences transfer to novel problem-solving scenarios.
One specific example of transfer from gameplay in the MMORPG Warcraft to a natural context concerns the problem of reducing travel time. When players enter the game environment, they must account for extended travel time between different activities such as exploration, questing, and combat. To solve this problem, players are given a tool that can be accessed on their user interface by pressing (M) on their keyboard, which opens the map. Listed on the map are designated flight paths (FPs) that act as a taxi service for players. The image in Figure 1 indicates the various FPs a player has unlocked on their world map as well as those that have yet to be discovered (Activision Blizzard, 2019). The flight path is a handy tool because it connects the goal of completing quests as quickly as possible to earn rewards with the knowledge that using flight paths greatly reduces travel time between quests. Reducing travel time is a more efficient way to complete many of the subgoals in the game, and as Shute and Wang (2016) note, using tools and resources efficiently is an important part of problem solving during gameplay.

Table 1. PISA problem-solving processes and examples of gameplay indicators in World of Warcraft

Exploring and Understanding: Prioritize skills and spells that are purchased from vendors in the spell book and action bars; complete the initial combat introductory quest; interact with the flight path tool.

Representing and Formulating: Use models and charts to assess class and role performance; analyze pros and cons of equipping awarded weapons and armor in relation to performance.

Planning and Executing: Rearrange spells and abilities on the action bar after combat testing (which spells or abilities should be used together and in combination with each other); after combat, prioritize quests and abilities with enemies that can be defeated alone or in groups.

Monitoring and Reflecting: Adjust combat distance (short, medium, long) to enemies after testing skills and abilities; explore the environment for progression; reorder the action bar as new skills are acquired; use the flight path tool to reduce travel time.

Now, consider one of the questions assessed on an external measure in the study, the PISA Problem Solving Test. Individuals are given a map that shows the roads between each city, a partially filled-in key that shows distances between cities in kilometers, and the overall layout of the area. The purpose of this question is to assess how individuals calculate the shortest distance from one city to another. To solve the problem, individuals are required to calculate the distance between the two cities of Nuben and Kado using the resources available. This is the same kind of problem that Warcraft players experience during gameplay when traveling between locations to complete quests. Both problem scenarios share the same overlapping components: the ability of the problem solver to use given tools and resources efficiently to find the most direct route that reduces travel time between two separate locations. Figure 2 illustrates this problem scenario on the PISA test (OECD, 2003).

CogniFit
The brain-training game CogniFit claims to have developed a patented system that measures, trains, and monitors cognitive skills like rule application, attention, memory, and visual perception, and their relation to neurological pathologies. According to the CogniFit (2019) website, there are transfer effects from the company's mini-games to problem solving in the natural world. The brain-training game was selected as the active control condition based on this claim, as well as the repeated practice of rule application embedded in its gameplay.
One example of rule application in the brain-training game CogniFit occurs in the mini-game Gem Breaker 3D. This mini-game requires players to direct a paddle back and forth across the screen, bouncing a ball off the paddle to break gem blocks without letting the ball touch the bottom of the screen. The initial tutorial informs players that gameplay emphasizes improvement of their hand-eye coordination and processing-speed skills, with over 100 levels available to master. Feedback is provided to players as a score for each level showing where they can improve. Once all gem blocks are broken, the level is completed and a new level begins. However, each player has access to only four balls per level, and if all are lost, the game reverts to the beginning. The tutorial shows players how to use the mouse to move the paddle back and forth across the screen, while the spacebar launches the ball. Once a gem is broken, there is a chance to gain a power-up, such as multiple balls, explosives, missiles, or side quests. Figure 3 illustrates the rules of the mini-game Gem Breaker 3D (CogniFit, 2019).
Rule application also occurs when playing the TOH, which requires one to move an entire stack of disks (between three and eight disks of varied sizes) from one of three rods to another. While playing, players are constrained by the following rules: (1) only one disk can be moved at a time; (2) no disk can be placed on a smaller one; and (3) only the uppermost disk on a stack can be moved. Rule application is demonstrated by the problem solver in the TOH by configuring the disks and the rods to reach a solution in the problem space. Each move of a disk indicates the problem solver attempting to creatively apply the rules, which is vital to problem solving (Shute et al., 2019). Figure 4 illustrates the problem space in an online version of the TOH (2019).
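The three rules above can be checked mechanically, and the classic recursive strategy solves the puzzle in the minimum possible number of moves (2^n - 1 for n disks). As an illustrative sketch (not part of the study's materials), the following Python code generates the optimal solution for five disks and verifies every move against the rules:

```python
def solve_hanoi(n, source=0, target=2, spare=1, moves=None):
    """Recursively solve an n-disk Tower of Hanoi, recording each move."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        solve_hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((source, target))                    # move the largest disk
        solve_hanoi(n - 1, spare, target, source, moves)  # rebuild on target
    return moves

def is_legal(pegs, move):
    """Check a move against the TOH rules: one disk at a time, only the
    uppermost disk of a stack, and never onto a smaller disk."""
    src, dst = move
    if not pegs[src]:
        return False                                  # nothing to move
    disk = pegs[src][-1]                              # uppermost disk only
    return not pegs[dst] or pegs[dst][-1] > disk      # no disk on a smaller one

# Replay the optimal five-disk solution and verify every move is legal
pegs = [[5, 4, 3, 2, 1], [], []]
moves = solve_hanoi(5)
for m in moves:
    assert is_legal(pegs, m)
    pegs[m[1]].append(pegs[m[0]].pop())
print(len(moves))  # 31, i.e., 2**5 - 1
```

The 31-move optimum for five disks matches the minimum score reported for the TOH measure later in this article.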

Rationale
Both video games require players to apply rules to solve problems, and rule application is a component of problem solving (Van Eck et al., 2017). For example, Warcraft players learn that certain spells can be cast in combat only while standing still, or that eating and drinking while sitting down hastens the regeneration of health. Similarly, when playing the mini-game Gem Breaker 3D in CogniFit, players use a paddle and a ball to break bricks. Among the first rules players encounter are that the paddle can move only left or right across the screen and that bonus bricks have special effects like increasing ball speed. The rules are more explicit in CogniFit than in Warcraft, so brain-training gameplay may promote better performance on the TOH. Each move with the paddle and ball is an instance of applying the rules, and this occurs frequently during CogniFit gameplay.
However, CogniFit mini-games lack some of the salient gameplay features of Warcraft, such as roleplaying, meaningful interactions with other players, and richly designed problem spaces, which GBL scholars suggest are important to the transfer of problem-solving skills from video gameplay to the novel contexts measured on the PISA Problem Solving Test. Warcraft gameplay provides players with repeated practice solving authentic ill-structured problems in rich, detailed problem-solving scenarios that may be better suited for transfer to novel scenarios on the test.

Research Questions
After describing the video gameplay conditions of Warcraft and CogniFit as well as reviewing the literature on problem-solving skills, the authors seek to answer the following research questions:

1. Is there a change, from pretest to posttest, on the rule-application component of problem solving after 20 hours of video gameplay on either a role-playing or brain-training video game?

2. Does an immersive, collaborative role-playing video game promote transfer of problem-solving skills to novel scenarios better than a brain-training video game for undergraduates after 20 hours of video gameplay?

Measures
The independent variable in this research study is the video game, with two levels: a roleplaying video game (Warcraft) and a brain-training video game (CogniFit). The video games provide players with repeated problem-solving scenarios requiring them to engage in problem-solving processes. The dependent variable measured for this study is problem-solving skill. One measure, the TOH, assessed the rule-application component of problem solving. The second measure, the PISA Problem Solving Test, assessed problem solving in novel scenarios. Both groups were assessed on the TOH and the PISA Problem Solving Test. The TOH was used to address research question 1 and the PISA Problem Solving Test to address research question 2.

The Tower of Hanoi
Recall that the TOH is a valid and reliable experimental paradigm that can be used to assess rule application, problem solving, and transfer (Huyck & Kreivenas, 2018; Schiff & Vakil, 2015). Rule application is demonstrated by the problem solver in the TOH by configuring the disks and the rods to reach a solution in the problem space; each move of a disk indicates the problem solver attempting to creatively apply the rules. Participants played the TOH on a computer from a free website online. The test score for completing the TOH is the total number of moves used (lower scores are better), ranging from a minimum of 31 moves upward until the puzzle is solved.

PISA Problem Solving Test
The second external problem-solving measure in this study is the 2003 version of the PISA Problem Solving Test. The PISA Problem Solving Test (OECD, 2003) contains 10 novel problem-solving scenarios, and within each scenario there are one to three different questions to solve, for 19 total questions across all scenarios. For this study, participants completed five novel problem-solving scenarios for the pretest and the remaining five for the posttest. The levels of proficiency for each question are randomized across all problem-solving scenarios. Each problem-solving scenario is independent of the others, and each of the 19 questions assessed in this study is isomorphic to the questions administered in 2003. The scoring for most questions was either correct or incorrect, with some questions allowing for partially correct answers.
Participants who answered a question correctly were awarded one point, while partially correct answers earned a half-point.

Procedure
Participants for this study were recruited via flyers posted publicly on campus and dormitory bulletin boards. Over the course of eight weeks, participants engaged in 10 gameplay sessions lasting two hours each; the two-hour time blocks were available Monday through Friday for eight consecutive weeks. Participants completed the experiment in a classroom lab on campus at the university. In this experiment, student participants were randomly assigned to play one of two video games. Participants in the experimental condition played the popular roleplaying video game Warcraft, which promotes learning new terminology, mastering interrelated skills and abilities, applying rules to solve problems, goal setting, and reflecting on progress. Participants in the active control condition played the brain-training video game CogniFit (2019), which allows players to select various mini-games, including Gem Breaker 3D, that may enhance cognitive abilities such as rule application, memory, and focus. Student participants in both conditions were guided by discovery learning and provided with in-game tutorials while learning to solve problems through active exploration, interaction with the game environment, and self-direction (Westera, 2019).
At pretest and posttest, participants had 20 minutes to complete isomorphic versions of the TOH as many times as possible. All participants successfully completed the TOH once during the pretest and once during the posttest. At pretest and posttest, participants also had 20 minutes to complete as many questions as possible on the PISA Problem Solving Test. The pretest required participants to answer nine questions and the posttest 10 questions from multiple problem-based scenarios. Each problem-based scenario was unique; examples included: (1) calculating the distance between two points given a map; (2) developing a decision-tree diagram of a library loan system; and (3) calculating daily energy needs for an individual given a set menu.

Data Structure and Analyses
The full dataset used for all analyses contained data from 34 participants. All participants attempted three parallel, computerized forms of the TOH at baseline and at the end of the intervention. Due to the nature of the task's programming, if participants did not complete a TOH task, the total number of moves attempted was not output to the data file. This is expanded upon in the Results section through three analyses: an independent t-test comparing the mean number of incomplete TOH games between the groups, an independent t-test comparing the mean TOH gain score between the groups, and a multiple linear regression predicting max TOH gain score from group membership, gain score count, and PISA gain. All analyses below were completed in R, version 3.4.3. Packages used for data analysis include dplyr for data wrangling (Wickham et al., 2019), ggplot2 for visualizations (Wickham, 2016), and MASS for stepwise regression analyses (Venables & Ripley, 2002).

Assessing Group Differences in Completion
Although the groups differed on the overall number of incomplete TOH sessions at pretesting (N COGNITIVE = 13; N GAMING = 8), an independent t-test of the average number of incomplete games by group was not significant (p > .05). Furthermore, an independent t-test revealed no group differences in the overall number of incomplete TOH sessions at posttesting (N COGNITIVE = 3; N GAMING = 2; p > .05). A repeated-measures ANOVA revealed a significant time effect, F(1, 32) = 13.386, p < .001. However, neither the group effect, F(1, 32) = 1.609, p = .214, nor the group-by-time interaction, F(1, 32) = .837, p = .367, was significant. On average, participants completed an additional half TOH session (M = .47, SD = .53) after receiving either training package (M Pre = .62, SD = .70; M Post = .15, SD = .36). Table 3 shows the means and standard deviations of the pretest and posttest scores for participants in the experimental (Warcraft) and control (CogniFit) groups. The mean scores in the table indicate how many moves, on average, participants in each group needed to solve the puzzle. For this study, participants had 20 minutes to complete as many questions as possible on the pretest and 20 minutes to do the same on an isomorphic version of the posttest. Table 4 shows the means and standard deviations of the PISA pretest and posttest scores for participants in the experimental (Warcraft) and control (CogniFit) groups.
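The group comparisons above are independent-samples t-tests on per-participant counts. The authors ran these analyses in R; purely for illustration, a self-contained Python sketch of the pooled-variance t statistic is shown below, using hypothetical counts of incomplete sessions rather than the study's data:

```python
import math

def independent_t(a, b):
    """Pooled-variance (Student's) independent-samples t statistic,
    the kind of test used here to compare two group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance, group b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

# Hypothetical per-participant counts of incomplete TOH sessions
cognitive = [1, 0, 2, 1, 0, 1]
gaming = [0, 1, 0, 1, 0, 0]
t, df = independent_t(cognitive, gaming)
print(round(t, 3), df)  # 1.342 10
```

The resulting t would then be compared against the t distribution with the given degrees of freedom to obtain the p value.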

Quantifying Improvement in Performance
To quantify improvement after the intervention, gain scores were calculated for each instance of the TOH task encountered (i.e., three sessions) by the following formula:

Gain Score = Number of moves (post) - Number of moves (pre)

Gain scores can be interpreted as follows: negative gain scores indicate improvement (fewer total moves at posttesting), and positive gain scores indicate a decrement in performance (more total moves at posttesting). Because incomplete games did not produce a number of moves, no gain score could be calculated for some participants. At pretesting, the cognitive training group had three missing gain scores for the second TOH and 10 for the third TOH, whereas the game training group had one missing gain score for the second TOH and seven for the third TOH. To account for this, when calculating average gain scores for each participant, averages were weighted by the number of completed games (averaging by the number of incomplete sessions would result in an undefined calculation, as some participants completed all sessions). Table 5 shows the results: an unpaired t-test on the average weighted gain scores found no group differences in TOH gain scores (p > .05). Additionally, an unpaired t-test on the average PISA gain scores found no group differences (p > .05).

Sensitivity Analysis
Due to the missing data issues discussed above, the final analysis involved a stepwise multiple linear regression (forward and backward; AIC used for final model variable selection, conducted using the R package MASS, function stepAIC; Venables & Ripley, 2002), predicting max gain score (the maximum of all three potential gain scores) from group membership (WoW or cognitive training), total gain score count, and a gain score derived from pre and post measurements on the PISA task (2003). Based on the stepwise regression results in Table 6, the best fitting, significant multiple regression model predicted max gain score from gain score count alone. Gain score count was a significant predictor of max gain score (t = 3.796; p < .001), indicating potential practice effects from repeated exposure to the task. Practice effects are discussed in subsequent sections.

CONCLUSION

Evidence for Research Question 1
The initial hypothesis for the first question was that a brain-training game would improve participants' rule-application component of problem-solving skill better than a roleplaying game after 20 hours of gameplay, for several reasons. One reason is that the rules are more explicit during brain-training gameplay; another is CogniFit's claim that brain-training gameplay will improve its users' brain fitness, or ability to rely on more than one problem-solving strategy. While both games require players to apply rules to solve problems, only CogniFit markets its product as a tool that can help users solve problems in their daily lives (CogniFit, 2019). This claim also suggests that brain-training gameplay can help users transfer skills learned in-game to novel problem-solving scenarios in the natural world. However, the results indicated no significant difference in gain scores (i.e., post minus pre) on TOH performance between the two gaming conditions (Warcraft and CogniFit; t-test comparing gain scores: p = .746), though both groups improved from baseline to posttest, likely attributable to practice effects (see Figure 5). Overall, the results contradicted our initial hypothesis for Research Question 1; implications are discussed next.

Implications of Results for Research Question 1
Solving problems in an immersive game like Warcraft provided players with repeated practice applying rules and using tools to find creative solutions to similar but varied problems. As players reflected on their choices, they learned how to use the tools by analyzing givens and constraints in unison to achieve maximum character performance and develop optimal solutions to general problems. CogniFit players did not experience immersive gameplay, but instead repeated problem-solving scenarios that were varied but required fewer tools and resources to solve. Once CogniFit players knew how to use the paddle and the ball in unison, the only additional resources during gameplay were power-ups, bonus bricks, and traps. Roleplaying gameplay required players to solve problems using additional tools and resources efficiently, a more complex task than using the ball and paddle during brain-training gameplay. Strategizing when and how to apply rules across varied problem scenarios with multiple tools and resources through immersive gameplay may have benefited Warcraft participants. Moreover, Warcraft players could receive feedback from other players, learning when and how to apply tools and resources to solve problems. CogniFit players received feedback only at the end of each level as an overall score and corrected mistakes through trial and error without additional support.

Evidence for Research Question 2
The initial hypothesis regarding the second question was that training on an immersive, collaborative roleplaying video game for 20 hours would engender transfer of problem-solving skills to novel problem-solving scenarios on the PISA Problem Solving Test better than a brain-training video game.
One reason is that research on MMORPGs, including Warcraft, indicates that players co-construct knowledge by challenging and supporting novel ideas for in-game problem-solving scenarios through online discussion forums, as well as by discovering optimal solutions to in-game problems by combining the multiple abilities and resources available to players (Chinn & Malhotra, 2002; Steinkuehler & Chmiel, 2006). Efficiently using tools and resources is a component of problem solving and is central to the roleplaying gameplay experience (Shute & Wang, 2016). However, the results indicated that after 20 hours of gameplay of Warcraft or CogniFit there was no differential improvement on the PISA (comparing PISA gain scores; p = .748). The mean scores for Warcraft participants were slightly better than those of CogniFit participants on the isomorphic versions of the PISA Problem Solving pretest and posttest, indicating baseline differences between the two groups. Overall, there were no significant differences between roleplaying and brain-training gameplay on transfer of problem-solving skills (see Figure 6). The implications of the results for research question 2 are discussed next.

Implications of Results for Research Question 2
The finding that neither video game training nor "brain-training" significantly improved problem-solving skills has several implications. The gameplay behaviors exhibited by players in each condition were aligned with the problem-solving processes on the PISA Problem Solving Test. However, possible reasons for the lack of transfer in this study, in addition to the small sample size, include (a) collaborative, immersive roleplaying gameplay may promote problem-solving skills for in-game problem-solving scenarios without necessarily improving performance on external problem-solving assessments, and (b) problem solving during Warcraft gameplay may be too domain specific to transfer to novel problem-solving scenarios on the PISA Problem Solving Test.
The misalignment between the problem-solving domains of Warcraft and the PISA Problem Solving Test could have hindered the possibility of finding a transfer effect. For example, Warcraft players must learn to navigate an immersive environment, use complex tools efficiently and effectively, and interact with both the environment and other characters to solve problems. Solving problems on the PISA Problem Solving Test, by contrast, is not an immersive experience. It was also a solitary activity; participants did not collaborate or interact with each other while taking the test. The OECD designed the PISA Problem Solving Test to cover general problem-solving skills that complement domain-specific skills (Greiff et al., 2014). A problem-solving assessment embedded within an immersive environment that requires players to engage in collaborative problem-solving processes (i.e., those experienced in video gameplay) using tools and resources efficiently could have been a more viable measure of transfer in this study. Further research is warranted to determine whether video gameplay can promote transfer of problem-solving skills to novel scenarios. The limitations of this research study are addressed in the next section.

Limitations
Given time and resource constraints, the sample size of this study was small, and the study lacked the statistical power to make claims about the general population. With more resources, recruitment would likely have continued for an additional semester to raise the sample size. Students who did not complete the study cited time constraints as the main reason they were unable to fulfill the 20-hour video gameplay requirement. The optimal time to run the study would have been during the Fall and Spring semesters instead of Spring and Summer; in Fall and Spring, more students would have been available for recruitment, with greater scheduling flexibility and more time to complete the intervention during the academic year. Because the authors monitored participants during video gameplay in case any problems arose, there may have been expectancy effects: participants' gameplay experiences may have been negatively or positively affected by being monitored. The potential for participants to alter their behavior simply because they are being studied is known as the Hawthorne effect (Benedetti, Carlino, & Piedimonte, 2016). In addition, a more immersive assessment of problem-solving skill transfer could have led to improved outcomes compared with a more traditional assessment like the PISA Problem Solving Test (2003).

Future Implications
The main goal of this study was to examine the impact of two distinct types of video gameplay, roleplaying (Warcraft) and brain-training (CogniFit), on undergraduates' problem-solving skills: specifically, whether video gameplay can improve the rule application component of problem solving and whether problem solving during gameplay transfers to novel problem-solving scenarios. This study addressed some of the methodological shortcomings found in previous video game training and transfer studies that failed to report recruitment methods, define study variables, and provide an active control group in which participants could expect to receive equal improvement in competencies (Baniqued et al., 2013; Boot et al., 2013). As a result, possible placebo effects were likely mitigated in this experiment, improving upon methodological pitfalls affecting other video game training studies (Anderson et al., 2010; Ferguson & Kilburn, 2009).
The results from this study suggest that neither a commercially available video game (Warcraft) nor a commercially available "brain-training" package (CogniFit) produced improvements in the rule-based component of problem solving (as assessed by the TOH puzzle). Moreover, 20 hours of training did not promote transfer of problem-solving skills to novel scenarios (as assessed by the PISA Problem Solving Test), which is consistent with similar research findings on cognitive training and transfer (Souders et al., 2017). Sensitivity analyses found evidence for practice effects in gain scores, suggesting that improvement was related to multiple, closely spaced assessments rather than to the training packages. Future research can complement this study by increasing the sample size and testing similar immersive, well-designed video games on participants' knowledge, skills, and abilities, in addition to directly cuing participants to be aware of the strategies (i.e., perceptual and cognitive strategies) they might carry from the digital world to the real world.
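For context on the rule-based measure: a player's efficiency on the Tower of Hanoi is typically benchmarked against the optimal solution, since an n-disk puzzle requires at minimum 2^n − 1 moves. The following is an illustrative sketch of the standard recursive solution, not the assessment software used in the study:

```python
def solve_hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the optimal move list for an n-disk Tower of Hanoi puzzle."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    solve_hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))                    # move the largest disk
    solve_hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks
    return moves

# The 5-disk version shown in Figure 4 has an optimal solution of
# 2**5 - 1 = 31 moves.
optimal = len(solve_hanoi(5))  # 31
```

The gap between a participant's average move count (Figure 5) and this 31-move optimum is one way to quantify how well rules were applied.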

ACKNOWLEDGMENT
Nelson Roque was supported by National Institute on Aging Grant T32 AG049676 to The Pennsylvania State University.

Figure 1. Player map listing flight path locations in World of Warcraft (2019)

Figure 2. Problem scenario for planning the best route for a trip from PISA (2003)

Figure 3. Rules for the mini-game Gem Breaker 3D listed in the initial tutorial (2019)

Figure 4. Problem space in an online version of the Tower of Hanoi puzzle with 5 disks (2019)

Figure 5. Average number of moves in the Tower of Hanoi task across (up to 3) sessions per person, per timepoint. The left panel represents scores for the CogniFit (COG) group, and the right panel represents scores for the Warcraft (WOW) group.

Figure 6. PISA scores before and after the intervention. The left panel represents scores for the COG group, and the right panel represents scores for the WOW group.

Table 2. Examples of indicators for each PISA problem-solving process in CogniFit

Exploring and Understanding: Break bricks with the ball and paddle by pressing the space bar and mouse; avoid letting the ball fall to the bottom of the screen; use powerups to fire missiles, increase ball speed, or add extra balls.
Representing and Formulating: Identify special blocks for bonuses; test and use missiles to find optimal conditions; select appropriate powerups based on gem locations on screen in relation to paddle and ball.
Planning and Executing: Unlock new paddle and ball abilities after completing each level; once the ball is released, plan a solution pathway to eliminate all bricks for each level, beginning by angling the paddle to direct the ball in the desired direction.
Monitoring and Reflecting: Avoid traps and negative powerups; use missiles under optimal conditions after testing; save the long-paddle powerup as ball speed increases.

Table 6. Stepwise regression model path and analysis of deviance table; the row with the best fitting model, using AIC as criterion, is highlighted in gray
Participants' predicted maximum gain score is reported in units of number of moves. The maximum gain score increased by 48.87 for every one-unit increase in gain score count (more gain scores closer to 0 indicate less improvement after the intervention).