Quantification of Game AI Performance for Junior Leadership Training in the Defence Domain

Michael Barlow, Edward Rowlands
DOI: 10.4018/978-1-4666-0149-9.ch057

Abstract

This chapter describes a rigorous academic evaluation of the utility and current shortcomings of state-of-the-art game AI for supporting junior leadership training outcomes in the defence domain. The chapter describes the design and implementation of a number of section-level scenarios (9 soldiers, one of whom is the junior leader – typically a corporal) in the serious game/military simulation VBS2 (Virtual Battlespace 2). A number of objective experiments were conducted to quantify the utility of AI for junior leadership training. A suite of performance metrics was implemented using VBS2’s scripting capabilities, including measures such as loss-exchange ratios, number of rounds expended, time to complete the mission, and the distribution (by role) of casualties within the section.
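As a rough illustration only (the chapter’s metrics were scripted inside VBS2 itself, using the engine’s own scripting facilities), the following Python sketch shows how metrics of this kind – a loss-exchange ratio and the distribution of casualties by role – might be computed from a logged scenario run. All names and values below are hypothetical.

```python
# Illustrative (hypothetical) metric calculations; not the chapter's actual VBS2 scripts.
from collections import Counter

def loss_exchange_ratio(enemy_casualties, friendly_casualties):
    """Enemy losses per friendly loss; higher favours the section."""
    if friendly_casualties == 0:
        return float("inf")  # no friendly losses at all
    return enemy_casualties / friendly_casualties

def casualty_distribution(casualties_by_role):
    """Fraction of section casualties borne by each role (e.g. scout, gunner)."""
    total = sum(casualties_by_role.values())
    return {role: n / total for role, n in casualties_by_role.items()} if total else {}

# Example run record (values invented for illustration only).
run = {
    "enemy_casualties": 6,
    "friendly_casualties": 2,
    "rounds_expended": 540,
    "mission_time_s": 1310,
    "casualties_by_role": Counter({"scout": 1, "gunner": 1}),
}

print("LER:", loss_exchange_ratio(run["enemy_casualties"], run["friendly_casualties"]))
print("Casualty distribution:", casualty_distribution(run["casualties_by_role"]))
print("Rounds expended:", run["rounds_expended"], "| Mission time (s):", run["mission_time_s"])
```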

Introduction

A series of experimental runs (employing over 40 participants from the Australian Defence Force) was conducted in which control of the section’s members was varied: all members of the section controlled by human players; only the section leader controlled by a human, with all others under AI control receiving orders from the section leader; and all members of the section AI controlled (including the section leader). These three conditions were then contrasted using the suite of metrics. Significant differences were found between all three control configurations, though in general the human-leading-AI outcomes (the second configuration, with a single human player) were more similar, as measured by the metric scores, to the all-human-controlled outcomes than to the all-AI-controlled ones. The chapter then describes a set of subjective experiments, and their results, that took the form of a “group Turing test”. The scenario runs conducted for the objective analysis phase of the chapter were replayed for human observers. Observers watched the action unfold within the virtual environment and, for each replay, were asked to determine whether they were observing an all-human-controlled section, a human-leading-AI section, or an all-AI section, and which features they used to make that decision. It was found that observers could not reliably determine which of the three control modalities they were observing: several observers performed only at chance level, while even the best observer was considerably less than 100% accurate.
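The chapter does not specify the statistical procedure behind the chance-level comparison, but as a minimal sketch, the snippet below compares a single observer’s accuracy against the one-in-three guessing baseline, assuming each replay is an independent three-way classification. The replay and correctness counts are invented for illustration.

```python
# Hypothetical comparison of one observer's accuracy against chance (1/3) in the
# "group Turing test": P(at least this many correct) under pure guessing.
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_replays = 12   # replays shown to one observer (hypothetical)
n_correct = 7    # replays classified correctly (hypothetical)
chance = 1 / 3   # three control modalities: all-human, human-leading-AI, all-AI

p_value = binomial_tail(n_correct, n_replays, chance)
print(f"Accuracy {n_correct}/{n_replays}; "
      f"probability of doing at least this well by guessing = {p_value:.3f}")
```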

Throughout history, military forces have employed simulation as a tool for training and planning. Perhaps the earliest instances of such simulation include the use of wooden weapons and set attack-and-response drills (sometimes referred to as kata or forms in the martial arts), while famous board games such as Chess and Go (and many ancient games from all continents) have their roots in simulations of warfare. Defence organisations were very early adopters of digital computers – initially employing them for calculating ballistics tables and other numerically intensive tasks before moving on to computational models of warfare that could be implemented on a computer and thus employed as decision-support tools. Today, defence simulation is a sophisticated, multi-billion dollar industry whose applications range from training individual crew members in the use of their equipment through to grand strategic decision-making and the planning of acquisitions and force structures 20-30 years into the future.

It is therefore no surprise that defence forces ranging from the Australian Army, US Marines, UK Ministry of Defence, and Canadian Armed Forces to the Singapore Army, Israeli Armed Forces, and US Army make heavy use of serious-games technologies and products for their simulation needs. More detailed historic and taxonomic coverage can be found in Barlow (2005), though the first use of COTS-derived games can be traced back at least to 1980, with the US Army initiative to employ a modified version of the vector-graphics game Battlezone as a Bradley Fighting Vehicle crew trainer. Well-known instances of the application of serious games by the defence community include mid-1990s work with the famous First Person Shooter (FPS) Doom by the US Marines, the use of Microsoft Flight Simulator as part of US Navy ROTC pilot training, the release of the FPS recruitment tool America’s Army, and the widespread adoption of VBS (Virtual Battlespace) – an FPS military training tool derived from a commercial game – by many defence forces world-wide (Barlow, 2005).

This chapter addresses a very specific question regarding the defence usage of serious games: whether the ‘AI’ (Artificial Intelligence), or agents, that inhabit games display the appropriate behaviours and possess sufficient sophistication to be employed in support of junior leadership training. In its most basic and distilled form, that question becomes: Can a corporal learn key aspects of leadership, command, and control through the use of a military game in which they have command over AI?
