Learned Behavior: Enabling Believable Virtual Characters through Reinforcement

Jacquelyne Forgette, Michael Katchabaw
Copyright: © 2016 | Pages: 30
DOI: 10.4018/978-1-5225-0454-2.ch004

Abstract

A key challenge in programming virtual environments is to produce virtual characters that are autonomous and capable of action selections that appear believable. In this chapter, motivations are used as the basis for learning through reinforcement. With motives driving their decisions, characters' actions appear less structured and repetitious, and more human in nature. This also allows developers to easily create virtual characters with specific motivations, based mostly on their narrative purposes or roles in the virtual world. Given minimum and maximum desirable values for each motive, the characters use reinforcement learning to drive action selection so as to maximize their rewards across all motives. Experimental results show that a character can learn to satisfy as many as four motives, even with significantly delayed rewards and motive changes caused by other characters in the world. While the actions tested are simple in nature, they show the potential of a more complicated motivation-driven reinforcement learning system. The developer need only define a character's motivations, and the character will learn to act realistically over time in the virtual environment.

Introduction

In creating virtual environments, whether for entertainment or serious purposes, believable virtual characters have long posed a challenge to developers and users alike. For large immersive virtual worlds, users have come to expect the presence of computer-generated characters as an essential element of the experience. These characters must act in a way that is reasonable and consistent with their personae, the world, and the context in which they are operating; in other words, they must act believably.

The requirements for achieving a believable virtual character are extensive and non-trivial, but the rewards in terms of immersion and satisfaction make this an important problem to solve (Bailey & Katchabaw, 2008). As Rizzo et al. (1997) write, characters “are considered believable when they are viewed by an audience as endowed with thoughts, desires, and emotions, typical of different personalities”. This definition of believability describes not so much a character as an illusion of life, permitting the audience’s suspension of disbelief and acceptance of the virtual as real, at least for a time. While believability has long been studied in other disciplines, its difficulty in relation to virtual environments invariably comes back to the required interactivity of a virtual character (Loyall, 1997). This level of interactivity requires autonomous and flexible behavior that is not defined a priori. Traditionally, character behavior has been driven by static scripting, pre-coded expert knowledge, or search algorithms. These approaches, however, are simply not sufficient to enable believability; they suffice only in narrow problem domains (Rabin, 2010). Consequently, a more robust cognitive solution is clearly necessary.

In this chapter, we present an approach to character believability enabled by reinforcement learning (Mitchell, 1997), in which behavior is guided by a model of a character’s own unique motivations. In this way, developers can create virtual characters by directly referencing character traits as defined by the underlying narrative or the needs of the virtual environment. The characters will, in turn, learn what actions they must take to benefit their respective motives, in real time. Not only does this result in interesting and realistic unscripted behavior tuned to the particulars of each individual character, but it does so without the complexity and expense of an extensive and exhaustive programming exercise, as appropriate behaviors can be learned without explicit coding.
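To make the idea concrete, the sketch below shows how such a character could be trained with plain tabular Q-learning, with the reward computed from the character's motives at each step. This is only an illustrative sketch, not the chapter's implementation: the class name MotiveDrivenAgent, the epsilon-greedy policy, and the parameter defaults are assumptions on our part.

    import random
    from collections import defaultdict

    class MotiveDrivenAgent:
        """Illustrative tabular Q-learning agent; reward comes from motives."""

        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.actions = list(actions)  # actions available in the world
            self.alpha = alpha            # learning rate
            self.gamma = gamma            # discount factor (credits delayed rewards)
            self.epsilon = epsilon        # exploration rate
            self.q = defaultdict(float)   # Q[(state, action)] -> estimated return

        def choose_action(self, state):
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore so new behaviors can still be discovered.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            # One-step Q-learning backup toward reward + discounted future value.
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

The discount factor gamma is what would let such an agent credit actions whose motive payoff arrives only later, which matters given the significantly delayed rewards reported in the abstract.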

The theory behind this approach is based on work done by Reiss (Reiss, 2004; Reiss & Havercamp, 1998), in which motives are the reasons that cause a person to initiate and perform voluntary behavior, and unusually strong or weak desires are used to characterize an individual. In Reiss’s model, motives capture the basic desires of an individual, including power, curiosity, independence, status, social contact, vengeance, honor, idealism, physical exercise, romance, family, order, eating, acceptance, tranquility, and saving. Studies have shown that this theory can describe diverse higher-order aspects of behavior such as religious beliefs (Reiss, 2000), athleticism (Reiss et al., 2001), and lack of scholastic achievement (Reiss, 2004). In general, motives are said to affect perception, cognition, emotion, and ultimately the resultant behaviors. A person must have a motive to perform any particular action, even if that person is not consciously aware of the motive, and so this model is a reasonable basis for believable virtual characters (Bailey et al., 2012). In this approach, when a particular motive is far from satisfied, a character will take appropriate actions to address the shortfall. Similarly, when a motive is fulfilled more than desired, the character will take compensating actions and avoid those that advance the motive further.
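This range-based view of motive satisfaction maps naturally onto a reward signal. The chapter's own formulation is not reproduced on this page, so the following is only a sketch under assumptions: one numeric value per motive, a desirable band given by its minimum and maximum, a unit in-band bonus, and linear out-of-band penalties; the name motive_reward is likewise hypothetical.

    def motive_reward(motives, desired_ranges):
        # motives: motive name -> current value, e.g. {"social contact": 0.2}
        # desired_ranges: motive name -> (min_desirable, max_desirable)
        reward = 0.0
        for name, value in motives.items():
            low, high = desired_ranges[name]
            if value < low:
                reward -= low - value     # under-satisfied: penalize the shortfall
            elif value > high:
                reward -= value - high    # over-fulfilled: penalize the excess
            else:
                reward += 1.0             # within the desirable band
        return reward

    # Example: "social contact" is under-satisfied, "eating" sits in its band.
    # motive_reward({"social contact": 0.1, "eating": 0.5},
    #               {"social contact": (0.3, 0.7), "eating": (0.4, 0.8)})
    # -> -0.2 + 1.0 = 0.8

Summing over motives in this way is one simple design choice; it lets a single scalar reward trade off as many as four motives at once, as in the experiments the abstract describes.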
