Artificial Mind for Virtual Characters

Iara Moema Oberg Vilela
DOI: 10.4018/978-1-59904-996-0.ch011

Abstract

This chapter discusses guidelines and models of mind from the cognitive sciences in order to generate an integrated architecture for an artificial mind that allows various aspects of behavior to be simulated in a coherent and harmonious way, showing believability and computational viability. Motivations are considered the quantitative driving forces of the action-selection mechanism that guides behavior. The proposed architecture is based on a multi-agent structure, where reactive agents represent motivations (Motivation Agents) or actions (Execution Agents), and cognitive agents (Cognition Agents) embody knowledge-based attention, goal-oriented perception, and decision-making processes. Motivation Agents compete for priority, and only winners can activate their corresponding Cognition Agents, thus filtering knowledge processing. Active Cognition Agents negotiate with each other to trigger a specific Execution Agent, which may then change internal and external states, displaying the corresponding animation. If no motivation is satisfied, frustration is expressed by a discharge procedure, and motivation intensities are then decreased accordingly.
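To make the flow of the architecture concrete, the following is a minimal sketch of the motivation-driven action-selection loop summarized above. All class and method names (MotivationAgent, CognitionAgent, ExecutionAgent, step, and so on), the winner-takes-all competition, and the weighted "negotiation" are illustrative assumptions, not the chapter's actual implementation.

```python
# Minimal sketch of the motivation-driven action-selection loop described in
# the abstract. Names, numeric constants, and the simplified negotiation are
# assumptions for illustration only.


class MotivationAgent:
    """Reactive agent holding a motivation and its current intensity."""

    def __init__(self, name, intensity, cognition):
        self.name = name
        self.intensity = intensity      # quantitative driving force
        self.cognition = cognition      # associated CognitionAgent

    def grow(self, amount=0.05):
        """Motivations build up over time until they are satisfied."""
        self.intensity = min(1.0, self.intensity + amount)


class CognitionAgent:
    """Cognitive agent: goal-oriented perception and decision making."""

    def __init__(self, name, proposals):
        self.name = name
        self.proposals = proposals      # {ExecutionAgent: preference weight}

    def propose(self, world_state):
        # A full model would apply knowledge-based attention here;
        # this sketch simply returns weighted proposals for negotiation.
        return self.proposals


class ExecutionAgent:
    """Reactive agent that performs an action and plays its animation."""

    def __init__(self, name, satisfies):
        self.name = name
        self.satisfies = satisfies      # names of motivations it satisfies

    def execute(self, world_state):
        print(f"[animation] {self.name}")
        return self.satisfies


def step(motivations, world_state):
    # 1. Motivation Agents compete for priority; only the winner proceeds.
    winner = max(motivations, key=lambda m: m.intensity)

    # 2. The winning motivation activates its Cognition Agent, which
    #    "negotiates" (here: a simple weighted choice) over Execution Agents.
    proposals = winner.cognition.propose(world_state)
    executor = max(proposals, key=proposals.get) if proposals else None
    satisfied = executor.execute(world_state) if executor else set()

    # 3. Satisfied motivations are reduced; otherwise frustration is
    #    expressed by a discharge that also lowers the intensity.
    for m in motivations:
        if m.name in satisfied:
            m.intensity *= 0.2
        elif m is winner:
            print(f"[discharge] {m.name} frustrated")
            m.intensity *= 0.7
        m.grow()


if __name__ == "__main__":
    eat = ExecutionAgent("eat", {"hunger"})
    rest = ExecutionAgent("rest", {"fatigue"})
    motivations = [
        MotivationAgent("hunger", 0.8, CognitionAgent("find-food", {eat: 1.0})),
        MotivationAgent("fatigue", 0.4, CognitionAgent("find-shelter", {rest: 1.0})),
    ]
    for _ in range(3):
        step(motivations, world_state={})
```

In the full architecture, the negotiation among active Cognition Agents and the knowledge-based attention inside the propose step would be considerably richer than the single weighted choice shown here.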

Introduction

The convergence of artificial life, artificial intelligence, and virtual environment techniques has given rise to intelligent virtual environments (Aylett and Luck, 2000; Aylett and Cavazza, 2001; Thalmann, 2003; Osório, Musse, Santos, Heinen, Braun and Silva, 2005). The simulation of inhabited virtual worlds where lifelike forms behave and interact may then provide more realistic visualizations of complex emergent scenarios. Applications are not restricted to entertainment systems such as games and virtual storytelling, as it may seem (Rist, André, and Baldes, 2003). Education (Antonio, Ramírez, Imbert, Méndez and Aguilar, 2005; Tortell and Morie, 2006; Chittaro, Ieronutti and Rigutti, 2005), training for dangerous situations involving people (Miao, Hoppe and Pinkwart, 2006; Braga, 2006; Querrec, Buche, Maffre and Chevaillier, 2004), simulation of inhabited environments for security, adequacy analysis and evaluation, or historical studies (Papagiannakis, Schertenleib, O'Kennedy, Arevalo-Poizat, Magnenat-Thalmann, Stoddart and Thalmann, 2005), product or service demonstration (Kopp, Jung, Lessmann and Wachsmuth, 2003), and ergonomics (Colombo and Cugini, 2006; Xu, Sun and Pan, 2006) are some of the many possible uses for intelligent virtual environments (IVEs).

The key issue in virtual environments is immersion, or, as it is usually put, the suspension of disbelief. The system developer's first concern tends to be graphical and sound quality, since the senses are very important to involvement in a virtual world. However, in dynamic and complex environments, the believability of the behavior of the virtual world's elements becomes paramount, especially if life is being simulated. There are many levels of activity to be simulated, and modeling depends heavily on the main concern of the application. For instance, if we are simulating a garden just for aesthetic appreciation, there is no point in simulating the complex interactions that occur between plants, or between plants and other organisms of the world. It is different, though, if the same world is being simulated for ecological analysis.

This problem becomes more complex when virtual humans or humanoids are to be simulated. To show believability, their behavior must express an internal life, some kind of goal-driven attitude, even when it is erratic, as in drunk or insane people. But on what basis can this be accomplished?

Virtual environment inhabitants are usually developed as agents with varying levels of autonomy, which encapsulate specific context-sensitive knowledge to accomplish their role in the system application. What may be considered "behavior" ranges from simple body movements to complex interactions with the environment, depending on the character's role. It is expected that the more the underlying mechanism of behavior production resembles its actual functioning in the real world, the more believable the resulting virtual behavior will be. That tends to be particularly true when complex and rich virtual worlds are being created. If a virtual environment is simple, it may suffice merely to imitate real behavior; but if the virtual world is complex and relies on emergence to produce behavior, merely emulating it may be too risky.

The present chapter focuses on virtual character behavior as believable sequences of actions of humanlike virtual entities, not on character computer animation or virtual biomechanical body movements. Some approaches are discussed, and a motivation-driven architecture is proposed based on assumptions derived from cognitive science.

The first part of the chapter focuses on various forms of modeling the relationship between characters and their environment in order to produce believable behavior. The second part discusses different approaches for modeling character behavior, and suggests the convenience of integrating the underlying mechanisms in a single high-level structure, the artificial mind. The third part presents a proposed motivation-driven architecture for artificial minds. Finally, conclusions are presented and future trends are discussed.
