Towards Truly Autonomous Synthetic Characters with the Sigma Cognitive Architecture

Volkan Ustun (University of Southern California, USA) and Paul S. Rosenbloom (University of Southern California, USA)
Copyright: © 2016 | Pages: 25
DOI: 10.4018/978-1-5225-0454-2.ch008
Realism is required not only for how synthetic characters look but also for how they behave. Many applications, such as simulations, virtual worlds, and video games, require computational models of intelligence that generate realistic and credible behavior for the participating synthetic characters. Sigma (Σ) is being built as a computational model of general intelligence with a long-term goal of understanding and replicating the architecture of the mind; i.e., the fixed structure underlying intelligent behavior. Sigma leverages probabilistic graphical models toward a uniform grand unification of not only traditional cognitive capabilities but also key non-cognitive aspects, creating unique opportunities for the construction of new kinds of non-modular behavioral models. The ultimate ambition is complete control of synthetic characters whose behavior is as human-like as possible. In this paper, Sigma is introduced along with two disparate proof-of-concept virtual humans – one conversational and the other a pair of ambulatory agents – that demonstrate its diverse capabilities.
Chapter Preview

1. Introduction

Twenty years ago, Tambe et al. (1995) discussed the generation of human-like synthetic characters that can interact with each other, as well as with humans, within the emerging domain of highly interactive simulations. Many of these simulations strove to create environments that looked realistic, and synthetic characters that looked and behaved like real people to the extent possible. The behavioral models in these simulations extensively utilized cognitive architectures (Langley, Laird, & Rogers, 2009) – models of the fixed structure underlying intelligent behavior in natural and/or artificial systems – as the underlying driver for human-like intelligent behavior. Twenty years later, developments in computer graphics and animation have allowed for extremely realistic-looking interactive simulation environments; it is now possible to create almost photo-real synthetic characters with realistic gaits and gestures. However, progress in behavior generation has been more mixed. Mainstream cognitive architectures, including Soar and ACT-R, originated as production systems and are fairly capable of modeling the reactive, knowledge-intensive, and goal-driven aspects of human behavior. For example, Tambe et al.'s (1995) work in the air-combat simulation domain utilized Soar (Laird, 2012) to model the behavior of pilots. These cognitive architectures are also capable of working in real time and, in ACT-R's case, with explicit models of human reaction times and limitations. However, they have not yet been able to successfully incorporate all the capabilities that are required for human-like intelligence.

As Swartout (2010) has pointed out, behaving like real people requires synthetic characters to, among other things: (1) use their perceptual capabilities to observe their environment and other virtual/real humans in it; (2) act autonomously in their environment based on what they know and perceive, e.g. reacting and appropriately responding to the events around them; (3) interact in a natural way with both real and other virtual humans using verbal and nonverbal communication; (4) possess a Theory of Mind (ToM) to model their own mind and the minds of others; (5) understand and exhibit appropriate emotions and associated behaviors; and (6) adapt their behavior through experience. The Soar and ACT-R communities worked toward addressing these six capabilities for synthetic characters (referred to as the capability list hereafter) but some items were simply not feasible within the core architecture. For example, external modules were required for acceptable perceptual and communication capabilities. Likewise, most of the emotion models were also outside the core. More importantly, they haven’t been able to fully capture the advances that have been made in recent years in behavioral adaptation, or in other words, learning. A number of aspects of learning were successfully incorporated but generality in statistical machine learning, for example, has eluded them.

Probabilistic graphical models (Koller & Friedman, 2009) provide a general tool that combines graph theory and probability theory to enable efficient probabilistic reasoning and learning in ways that haven't been possible with traditional cognitive architectures based on production systems. The machine learning community employs such models as one of its primary tools, yielding state-of-the-art results for at least four of the listed capabilities that have challenged traditional cognitive architectures: perception, autonomy, interaction, and adaptation. However, most of these improvements have been achieved independently, as examples of narrow Artificial Intelligence (AI) systems, with little effort toward cross-integration.
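To make the idea of probabilistic reasoning over a graphical model concrete, the following is a minimal, hypothetical sketch – not Sigma's actual implementation, and all names are illustrative. It encodes a two-node Bayesian network (a hidden `weather` state influencing an observed `sensor` reading, as a synthetic character's perception might) and computes the posterior over the hidden state by exact enumeration and Bayes' rule:

```python
# Hypothetical two-node Bayesian network: Weather -> Sensor.
# Illustrates the kind of probabilistic inference graphical models support;
# it is NOT the Sigma architecture's implementation.

# Prior distribution over the hidden state P(weather).
p_weather = {"rain": 0.3, "clear": 0.7}

# Conditional distribution P(sensor | weather).
p_sensor_given_weather = {
    ("wet", "rain"): 0.9, ("dry", "rain"): 0.1,
    ("wet", "clear"): 0.2, ("dry", "clear"): 0.8,
}

def posterior(observation):
    """Compute P(weather | sensor = observation) by enumeration."""
    # Unnormalized joint: P(weather) * P(observation | weather).
    joint = {w: p_weather[w] * p_sensor_given_weather[(observation, w)]
             for w in p_weather}
    z = sum(joint.values())  # normalizing constant P(observation)
    return {w: p / z for w, p in joint.items()}

post = posterior("wet")
print(post)  # after a "wet" reading, "rain" becomes the more probable state
```

In a full graphical-model architecture the same computation is carried out by general message-passing algorithms over much larger graphs, which is what lets perception, reasoning, and learning share one uniform inference mechanism.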

One of the main reasons for the relative lack of integration efforts is the inherent difficulty of general intelligence research: not only is it challenging to integrate all of the requisite capabilities, but it is also hard to measure the resulting incremental progress toward human-like intelligence (Goertzel, 2014). In contrast, many forms of narrow AI systems can easily track incremental progress as they try to improve "intelligent" behaviors in very specific contexts. Therefore, it is easier to assess the merit of these systems through simple comparisons. This strategy has helped narrow AI approaches dominate research over the last two decades.
