Simplifying the Design of Human-Like Behaviour: Emotions as Durative Dynamic State for Action Selection

Joanna J. Bryson (University of Bath, UK) and Emmanuel Tanguy (University of Bath, UK)
Copyright: © 2010 |Pages: 21
DOI: 10.4018/jse.2010101603


Human intelligence requires decades of full-time training before it can be reliably utilized in modern economies. In contrast, AI agents must be made reliable but interesting in relatively short order. Realistic emotion representations are one way to ensure that even relatively simple specifications of agent behavior will be expressed with engaging variation, and that social and temporal contexts can be tracked and responded to appropriately. We describe a representation system for maintaining an interacting set of durative states to replicate emotional control. Our model, the Dynamic Emotion Representation (DER), integrates emotional responses and keeps track of emotion intensities changing over time. The developer can specify an interacting network of emotional states with appropriate onsets, sustains, and decays. The levels of these states can be used as input for action selection, including emotional expression. We present both a general representational framework and a specific instance of a DER network constructed for a virtual character. The character's DER uses three types of emotional state classified by duration timescale, in keeping with current emotion theory. We demonstrate the system with a virtual actor. We also demonstrate how even a simplified version of this representation can improve goal arbitration in autonomous agents.
Article Preview


Emotion is a popular topic in AI research, but most existing work focuses on the appraisal of emotions or mimicking their expression for HCI (see review below). Our research is concerned with their role in evolved action-selection mechanisms. In nature, emotions provide decision state which serves as a context for limiting the scope of search for action selection (LeDoux, 1996). This state is sustained more briefly than traditional (life-long) learning, but longer than simple reactive responses.

Improving the realism of emotion representations can allow us not only to improve the realism of intelligent virtual actors, but also to make programming them easier. Rather than needing to describe the exact details of a facial expression, a behaviour script can simply specify abstract concepts like emphasis or intentional, communicative expression gestures such as smile in greeting. In real time, the agent can then interpolate these instructions with its current emotional state. The latter in turn reflects the agent's recent experiences. For example, a FAQ agent that has just been accessed might respond with more apparent enthusiasm than one that has been interacting with a client and receiving verbal abuse (Brahnam & De Angeli, 2008). For commercial applications this is often desirable, since companies do not want to be represented by "stupid" agents. Another place where real-time emotion tracking is useful is for home assistance agents. Instructions (e.g., to take medication or a reminder that the stove is on) need to be reliable and clear, yet they cannot always be presented identically, or else even patients with severely compromised short-term memory can become habituated. Using recent interaction history as a seed to vary delivery style is one mechanism for maintaining variation in presentation, as well as potentially increasing user engagement.

To this end, we have developed mechanisms for modelling both the temporal course of emotional state and the interactions between such states. We have an elaborate model for complex, human-like emotions for generating realistic facial expressions, the Dynamic Emotion Representation (DER) (Tanguy, Willis, & Bryson, 2003; Tanguy, 2006). For applications with less demand for emotional complexity, we also present a simplified system called Flexible Latching. This provides basic goal arbitration as part of an action selection mechanism without requiring as much programming. Both systems track systems of emotion and/or drive intensities which change and interact over time. The actions and emotional responses of agents containing such durative-state models depend on their recent history as well as their individual priorities or personality and their environment.
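The temporal profile described above (an onset in response to a stimulus, a sustained period, then gradual decay) can be sketched in a few lines. The following is a minimal illustrative model, not the DER's actual API; the class and parameter names are our own assumptions for exposition:

```python
class DurativeState:
    """One emotional state whose intensity rises on stimulus (onset),
    holds for a while (sustain), then fades toward zero (decay).
    Illustrative sketch only; parameter names are hypothetical."""

    def __init__(self, onset_rate, decay_rate, sustain_time):
        self.intensity = 0.0                    # current level in [0, 1]
        self.onset_rate = onset_rate            # gain applied to stimuli
        self.decay_rate = decay_rate            # units of intensity per second
        self.sustain_time = sustain_time        # seconds held before decay begins
        self.time_since_stimulus = float("inf")

    def stimulate(self, strength):
        # Onset: raise intensity (clamped to 1.0) and restart the sustain window.
        self.intensity = min(1.0, self.intensity + self.onset_rate * strength)
        self.time_since_stimulus = 0.0

    def update(self, dt):
        # Sustain, then decay: intensity only falls once the window elapses.
        self.time_since_stimulus += dt
        if self.time_since_stimulus > self.sustain_time:
            self.intensity = max(0.0, self.intensity - self.decay_rate * dt)
        return self.intensity
```

An action-selection mechanism polling `update` on each tick sees a state that outlasts the triggering event but is not permanent, which is exactly the intermediate timescale the durative-state approach targets.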

Our durative-state systems assume other independent mechanisms for appraising the agent’s situation and expressing the emotional responses. Developers using our representations can specify and describe both the number and the attributes of fundamental emotions and express how they interact. In this respect, these emotion representation systems are similar to spreading activation action-selection systems (e.g., Maes, 1991). They are designed to be the root of an agent’s action selection, determining the current goal structure. Note that in this fully modular system, additional “higher order” emotions may either be interpreted as emerging from the interactions of fundamental emotions, or they can be introduced with explicit representations—the choice is left to the developer.
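To make the analogy with spreading activation concrete, the following hedged sketch shows one way a developer-specified network of interacting states could drive goal arbitration: a stimulus to one state excites or inhibits its neighbours via weighted links, and the most intense state determines the current goal. The function names and the winner-take-all rule are our illustrative assumptions, not a specification of the DER or of Maes's system:

```python
def step_network(states, links, stimuli, dt=1.0, decay=0.1):
    """states: {name: intensity in [0, 1]}
    links: {(src, dst): weight}  -- positive excites, negative inhibits
    stimuli: {name: strength}    -- external appraisal input this tick."""
    new = dict(states)
    for name, strength in stimuli.items():
        new[name] = min(1.0, new[name] + strength)
        # Spread the stimulus along weighted links from the stimulated state.
        for (src, dst), w in links.items():
            if src == name:
                new[dst] = max(0.0, min(1.0, new[dst] + w * strength))
    # All states decay toward zero over time.
    for name in new:
        new[name] = max(0.0, new[name] - decay * dt)
    return new

def current_goal(states):
    """Simple goal arbitration: the most intense state wins."""
    return max(states, key=states.get)
```

For example, with `links = {("fear", "calm"): -0.5}`, a strong fear stimulus both raises fear and suppresses calm, so fear becomes the arbitrating goal on the next tick. Higher-order emotions could then be read off as patterns across several such states, or given explicit nodes of their own, as noted above.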

We begin this article with a review of the concepts and literature. We next give a detailed description of the relatively complex mechanism, the DER, which is capable of producing biomimetic, human-like emotions. We then describe Flexible Latching, which addresses the more basic action-selection problem of goal arbitration. Finally, we describe full implementations of each mechanism, demonstrating their roles as parts of complete systems.
