Guidelines for Designing Computational Models of Emotions


Eva Hudlicka
Copyright: © 2011 | Pages: 54
DOI: 10.4018/jse.2011010103

Abstract

The past 15 years have seen rapid growth in the computational modeling of emotion and in cognitive-affective architectures. Emotion models and architectures are built both to elucidate the mechanisms of emotions and to enhance the believability and effectiveness of synthetic agents and robots. Yet despite the many emotion models developed to date, there is a persistent lack of consistency and clarity regarding what exactly it means to ‘model emotions’, and no systematic guidelines exist for developing computational models of emotions. This paper deconstructs the often vague term ‘emotion modeling’ by suggesting that emotion models be viewed in terms of two fundamental categories of processes: emotion generation and emotion effects. It also identifies the computational tasks necessary to implement these processes. The paper discusses how these computational building blocks can provide a basis for more systematic guidelines for affective model development, and concludes with a description of an affective requirements analysis and design process for developing affective computational models in agent architectures.

1. Introduction and Objectives

The past 15 years have witnessed a rapid growth in computational models of emotion and affective agent architectures. Researchers in cognitive science, AI, HCI, robotics, and gaming are developing ‘models of emotion’ both for theoretical research regarding the nature of emotion and for a range of applied purposes: to create more believable and effective synthetic characters and robots, and to enhance human-computer interaction.

Yet in spite of the many stand-alone emotion models, and the numerous affective agent and robot architectures developed to date, there is a lack of consistency, and lack of clarity, regarding what exactly it means to ‘model emotions’ (Hudlicka, 2008b). ‘Emotion modeling’ can mean the dynamic generation of emotion via black-box models that map specific stimuli onto associated emotions. It can mean generating facial expressions, gestures, or movements depicting specific emotions in synthetic agents or robots. It can mean modeling the effects of emotions on decision-making and behavior selection. It can also mean including information about the user’s emotions in a user model in tutoring and decision-aiding systems, and in games.
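To make the first of these senses concrete, the following is a minimal sketch of the black-box style of emotion generation: domain-specific stimuli are mapped directly onto emotions by a hand-authored lookup table, with no intermediate appraisal step. This is illustrative only; the names and the game-agent domain are assumptions, not drawn from any particular system discussed in the paper.

```python
# A minimal sketch of a black-box emotion-generation model: domain-specific
# stimuli are mapped directly onto emotions by a hand-authored lookup table,
# with no intermediate appraisal step. All names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EmotionState:
    emotion: str      # e.g., 'fear', 'joy'
    intensity: float  # 0.0 (absent) .. 1.0 (maximal)

# Hypothetical stimulus-to-emotion table for a game agent.
STIMULUS_MAP = {
    "enemy_sighted": EmotionState("fear", 0.8),
    "goal_achieved": EmotionState("joy", 0.9),
    "ally_lost":     EmotionState("sadness", 0.7),
}

def generate_emotion(stimulus: str) -> EmotionState:
    """Black-box mapping: stimulus in, emotion out; no explicit appraisal."""
    return STIMULUS_MAP.get(stimulus, EmotionState("neutral", 0.0))

print(generate_emotion("enemy_sighted"))  # EmotionState(emotion='fear', intensity=0.8)
```

Such models are easy to build and fast at run time, but they hard-wire the stimulus-emotion mapping to one domain and expose none of the intermediate appraisal machinery.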

There is also a lack of clarity regarding which affective states are modeled. The term ‘emotion’ in affective models can refer to emotions proper (short, transient states), moods (longer-lasting, more diffuse states), mixed states such as attitudes, and, frequently, states that psychologists do not consider emotions at all (e.g., confusion, flow).
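These distinctions can be made explicit in a data structure. The following sketch is purely illustrative; the type names and time scales are assumptions, not the paper's definitions.

```python
# Illustrative taxonomy (type names and time scales are assumptions, not the
# paper's definitions) separating the state types that 'emotion' often covers.

from dataclasses import dataclass
from enum import Enum, auto

class AffectiveKind(Enum):
    EMOTION = auto()   # short, transient, directed at a specific object/event
    MOOD = auto()      # longer-lasting, diffuse, no specific object
    ATTITUDE = auto()  # stable mixed state toward an object (e.g., liking)

@dataclass
class AffectiveState:
    kind: AffectiveKind
    label: str               # e.g., 'fear', 'irritability', 'liking'
    duration_seconds: float  # rough time scale; emotions last seconds to minutes

fear = AffectiveState(AffectiveKind.EMOTION, "fear", 60.0)
```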

Emotion models also vary greatly regarding exactly which of the many roles ascribed to emotions are modeled. These include goal management and goal selection, resource allocation and subsystem coordination, and communication and coordination among agents, and between virtual agents and humans.

One of the consequences of this terminological vagueness is that when we begin to read a paper addressing ‘emotion modeling’, we don’t really know what to expect. The paper could just as easily describe details of facial expression generation, affective speech synthesis, black-box models mapping domain-specific stimuli onto emotions, or decision-utility formalisms evaluating behavioral alternatives. A more serious consequence is a lack of design guidelines regarding how to model a particular affective phenomenon of interest: What are the computational tasks that must be implemented? Which theories are most appropriate for a given model? What are the associated representational and reasoning requirements, and alternatives? What data are required from the empirical literature?

The lack of consistent, clear terminology also makes it difficult to compare approaches in terms of their theoretical grounding, their modeling requirements, their explanatory capabilities, and their effectiveness in particular applications.

The purpose of this paper is to deconstruct the vague term ‘emotion modeling’ by: (1) suggesting that we view emotion models in terms of two fundamental categories of processes: emotion generation and emotion effects; and (2) identifying some of the fundamental computational tasks necessary to implement these processes. These ‘model building blocks’ can then provide a basis for the development of more systematic guidelines for emotion modeling, for identifying the associated theoretical and data requirements, and for characterizing the representational and reasoning requirements and alternatives. Identifying a set of generic computational tasks also represents a good starting point for a more systematic comparison of alternative approaches and their effectiveness. A systematic identification of the required building blocks also helps answer more fundamental questions about emotions: What are emotions? What is the nature of their mechanisms? What roles should they play in synthetic agents and robots? These computational building blocks can thus begin to serve as a basis for what Sloman calls an “architecture-based definition of emotion” (Sloman, Chrisley, & Scheutz, 2005).
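To illustrate the two-category view, the following is a minimal sketch, not the paper's implementation; all class and method names, and the toy appraisal and behavior rules, are assumptions. It wires an emotion-generation stage (stimuli appraised into emotions) to an emotion-effects stage (emotions modulating behavior selection) in a simple perceive-appraise-act loop.

```python
# A minimal sketch (not the paper's implementation; all names are assumptions)
# of an agent architecture decomposed into the two process categories proposed
# here: emotion generation and emotion effects.

from dataclasses import dataclass

@dataclass
class Emotion:
    label: str
    intensity: float

class EmotionGeneration:
    """Maps incoming stimuli onto emotions (here, a toy appraisal rule)."""
    def appraise(self, stimulus: str, goals: list[str]) -> Emotion:
        # Toy rule: any goal-threatening stimulus produces fear.
        if stimulus.startswith("threat"):
            return Emotion("fear", 0.8)
        return Emotion("neutral", 0.0)

class EmotionEffects:
    """Modulates decision-making and behavior selection via the current emotion."""
    def select_behavior(self, emotion: Emotion, options: list[str]) -> str:
        # Toy effect: strong fear biases selection toward avoidance.
        if emotion.label == "fear" and emotion.intensity > 0.5 and "avoid" in options:
            return "avoid"
        return options[0]

# Wiring the two stages into a simple perceive-appraise-act loop.
generation, effects = EmotionGeneration(), EmotionEffects()
emotion = generation.appraise("threat:enemy", goals=["survive"])
print(effects.select_behavior(emotion, ["approach", "avoid"]))  # avoid
```

The point of the decomposition is that each stage raises its own design questions: the generation stage must commit to an appraisal theory and a stimulus representation, while the effects stage must specify which cognitive and behavioral processes emotions modulate, and how.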
