Introduction
Emotion is a popular topic in AI research, but most existing work focuses on the appraisal of emotions or mimicking their expression for HCI (see review below). Our research is concerned with their role in evolved action-selection mechanisms. In nature, emotions provide decision state which serves as a context for limiting the scope of search for action selection (LeDoux, 1996). This state is sustained more briefly than traditional (life-long) learning, but longer than simple reactive responses.
Improving the realism of emotion representations not only makes intelligent virtual actors more believable, but also makes them easier to program. Rather than needing to describe the exact details of a facial expression, a behaviour script can simply specify abstract concepts like emphasis or intentional, communicative expression gestures such as smile in greeting. In real-time, the agent can then interpolate these instructions with its current emotional state. The latter in turn reflects the agent’s recent experiences. For example, a FAQ agent that has just been accessed might respond with more apparent enthusiasm than one that has been interacting with a client and receiving verbal abuse (Brahnam & De Angeli, 2008). For commercial applications this is often desirable, since companies do not want to be represented by “stupid” agents. Another place where real-time emotion tracking is useful is for home assistance agents. Instructions (e.g., to take medication or remind the user that the stove is on) need to be reliable and clear, yet they cannot always be presented identically or else even patients with severely compromised short-term memory can become habituated. Using recent interaction history as a seed is one mechanism for varying delivery style, as well as potentially increasing user engagement.
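The interpolation described above can be sketched roughly as follows. This is a minimal illustration, not the published implementation: the names (`EmotionalState`, `expressed_intensity`), the single valence dimension, and the linear blending rule are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    # Valence summarizing recent interaction history:
    # -1.0 (very negative) .. +1.0 (very positive)
    valence: float

def expressed_intensity(baseline: float, state: EmotionalState,
                        weight: float = 0.5) -> float:
    """Blend a scripted gesture's baseline intensity (e.g., for
    'smile in greeting') with the agent's current emotional state."""
    modulated = baseline * (1.0 + state.valence)
    blended = (1.0 - weight) * baseline + weight * modulated
    return max(0.0, min(1.0, blended))  # clamp to a valid intensity

# A freshly accessed agent smiles at the scripted level; one that has
# just received verbal abuse smiles noticeably less.
fresh = expressed_intensity(0.6, EmotionalState(valence=0.0))
abused = expressed_intensity(0.6, EmotionalState(valence=-0.8))
```

Here `fresh` stays at the scripted 0.6 while `abused` drops to 0.36: the same behaviour script yields different deliveries depending on recent experience.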
To this end, we have developed mechanisms for modelling both the temporal course of emotional state and the interactions between such states. For generating realistic facial expressions we present an elaborate model of complex, human-like emotions, the Dynamic Emotion Representation (DER) (Tanguy, Willis, & Bryson, 2003; Tanguy, 2006). For applications with less demand for emotional complexity, we also present a simplified system called Flexible Latching. This provides basic goal arbitration as a part of an action selection mechanism without requiring as much programming. Both systems track sets of emotion and/or drive intensities which change and interact over time. The actions and emotional responses of agents containing such durative-state models depend on their recent history as well as their individual priorities or personality and their environment.
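A durative intensity of this kind can be illustrated with a toy model: stimulation raises the intensity, and the intensity then decays over time, so it outlasts a reactive response but fades far sooner than life-long learning. The exponential-decay rule and the names below are assumptions for the sketch, not the DER's published dynamics.

```python
import math

class DurativeState:
    """A single emotion/drive intensity that decays over time."""

    def __init__(self, decay_rate: float = 0.2):
        self.intensity = 0.0
        self.decay_rate = decay_rate  # per-second exponential decay

    def stimulate(self, amount: float) -> None:
        # An appraisal event raises intensity, capped at 1.0.
        self.intensity = min(1.0, self.intensity + amount)

    def tick(self, dt: float) -> None:
        # Intensity decays exponentially between events.
        self.intensity *= math.exp(-self.decay_rate * dt)

anger = DurativeState(decay_rate=0.2)
anger.stimulate(0.8)   # a provoking event
anger.tick(5.0)        # five seconds later, intensity has decayed
```

After five seconds the intensity has fallen from 0.8 to roughly 0.29, providing a sustained but transient context for action selection.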
Our durative-state systems assume other independent mechanisms for appraising the agent’s situation and expressing the emotional responses. Developers using our representations can specify both the number and the attributes of the fundamental emotions and describe how they interact. In this respect, these emotion representation systems are similar to spreading activation action-selection systems (e.g., Maes, 1991). They are designed to be the root of an agent’s action selection, determining the current goal structure. Note that in this fully modular system, additional “higher order” emotions may either be interpreted as emerging from the interactions of fundamental emotions, or they can be introduced with explicit representations—the choice is left to the developer.
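One way to picture emotion-rooted goal arbitration is as a latched winner-take-all over drive intensities: the strongest drive sets the current goal, but the agent only switches when a competitor clearly exceeds the latched drive, preventing dithering. The margin-based rule and function names here are illustrative assumptions, not the Flexible Latching algorithm as specified.

```python
from typing import Dict, Optional

def arbitrate(drives: Dict[str, float],
              current: Optional[str],
              margin: float = 0.2) -> str:
    """Pick a goal from drive intensities, latching the current one
    unless a competitor exceeds it by `margin`."""
    best = max(drives, key=drives.get)
    if current is None or current not in drives:
        return best
    if drives[best] > drives[current] + margin:
        return best
    return current  # latch: stick with the current goal

goal = arbitrate({"eat": 0.5, "sleep": 0.4}, current=None)  # "eat" wins
goal = arbitrate({"eat": 0.5, "sleep": 0.6}, current=goal)  # latched on "eat"
goal = arbitrate({"eat": 0.5, "sleep": 0.9}, current=goal)  # switches to "sleep"
```

The latch gives goals a durative quality analogous to the emotion intensities above: recent commitment, not just the instantaneous maximum, shapes behaviour.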
We begin this article with a review of the concepts and literature. We next give a detailed description of the relatively complex mechanism, the DER, capable of producing biomimetic human-like emotions. We then describe Flexible Latching, which addresses the more basic action-selection aspects of goal arbitration. Finally, we describe full implementations of each mechanism demonstrating their roles as parts of complete systems.