Emotional Modeling in an Interactive Robotic Head

Oscar Deniz, Javier Lorenzo, Mario Hernández, Modesto Castrillón
DOI: 10.4018/978-1-60566-354-8.ch001

Abstract

Social intelligence seems obviously to require emotions. People have emotions, recognize them in others and express them. A wealth of information is conveyed through facial expressions, voice tone, etc. If robots can recognize and express emotions, interaction with the user will improve because the robot will be able to analyze his or her affective state and choose a different course of action depending on it. Thus, it seems clear that any attempt to imitate human social abilities should consider modeling emotions or affective states. This chapter describes the emotional model and implementation of CASIMIRO, a prototype social robot built by the authors. CASIMIRO is a complex robot with multimodal capabilities defined by a number of software modules. Examples of user interactions are also shown which suggest that the model is appropriate for regulating the behavior of the robot.

Introduction

Although the use of emotions in robots is still under debate, in recent years many authors have argued that the traditional “Mr. Spock” paradigm for solving problems (eminently rational) may not be appropriate for modeling social behavior. Rational decisions allow us to cope with the complex world we live in, and rational selection among different options is crucial for survival and goal accomplishment. However, any agent (human or artificial) whose actions were guided only by purely rational decisions would be in serious trouble: weighing all the possible options would prevent the agent from taking any decision at all. There is evidence that people who have suffered damage to the prefrontal lobes, and as a result can no longer show emotions, remain intelligent and sensible but cannot make decisions (Picard, 1997; Damasio, 1994). The alternative, a so-called “Commander Kirk” paradigm, assumes that some aspects of human intelligence, particularly the ability to take decisions in dynamic and unpredictable environments, depend on emotions.

There is another argument, however, that makes clear the importance that emotion modeling may have in a robot: social intelligence seems to require emotions.

People have emotions, recognize them in others and express them. A wealth of information is conveyed through facial expressions, voice tone, etc. If robots can recognize, express and perhaps even have emotions, the interaction with the user will improve because the robot will be able to analyze the affective state of the user and choose a different course of action depending on it (Hernández et al., 2004). Thus, it seems clear that any attempt to imitate human social abilities should consider modeling emotions or affective states. In fact, a field called Affective Computing (Tao & Tan, 2005) has emerged, which aims to provide engineering tools for measuring, modeling, reasoning about, and responding to affect.
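To make the idea concrete, the following is a minimal sketch, not code from CASIMIRO, of how a perceived user affective state could drive the choice of a course of action. The state representation, thresholds and response labels are hypothetical placeholders.

```python
from dataclasses import dataclass


@dataclass
class UserAffect:
    """Perceived affective state of the user (hypothetical representation)."""
    valence: float  # -1 (unpleasant) .. +1 (pleasant)
    arousal: float  # -1 (calm) .. +1 (excited)


def choose_response(affect: UserAffect) -> str:
    """Pick a conversational strategy depending on the user's affective state."""
    if affect.valence < -0.3 and affect.arousal > 0.3:
        return "calm_down"         # user seems irritated: use a soothing tone
    if affect.valence < -0.3:
        return "cheer_up"          # user seems sad: try a friendly remark
    if affect.arousal > 0.5:
        return "match_enthusiasm"  # user seems excited: respond energetically
    return "neutral_chat"          # default small talk


if __name__ == "__main__":
    print(choose_response(UserAffect(valence=-0.6, arousal=0.7)))  # -> 'calm_down'
```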

This chapter describes the emotional model implemented in a prototype sociable robot called CASIMIRO (see Figure 1). CASIMIRO (Deniz et al., 2006; Deniz et al., 2007) is an animal-like face with basic interaction abilities achieved through computer vision, audio signal processing, speech generation, motor control, etc. These abilities include omnidirectional and stereo vision, face detection, head nod/shake gesture detection (for answering questions), person detection and tracking (using the neck), sound localization, speech, owner recognition, etc. The focus is on providing useful techniques for researchers working on emotional modeling for interactive robots.

Figure 1. CASIMIRO

Background

Many emotional models have been proposed, both within the robotics community and in psychology (see Fong et al., 2003, and the Emotion Home Page; Hudlicka & Fellous, 2008). Perhaps the best-known model of human emotion representation is Russell's (Russell, 1980), which places emotions in a bidimensional space with orthogonal valence and arousal components (see Figure 2).

Figure 2. Arousal and valence emotional space

This bidimensional space (also called the circumplex model) has received wide support in the literature (Carney & Colvin, 2005). Many forms of human emotional experience (judgments of similarity between pairs of affect terms, self-reports of current emotion, and perceived similarity between static photographs of expressed emotion) point to an ordering of basic emotions around the perimeter of a circle whose axes are arousal and valence.
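As a rough illustration of this representation, the sketch below places a few emotion labels on the unit circle of the valence/arousal plane and maps an arbitrary affective state to its nearest labelled emotion. The labels and angles are illustrative assumptions, not values taken from the chapter or from Russell (1980).

```python
import math

# Illustrative angular placements (in degrees) of a few basic emotions on the
# circumplex; these placements are assumptions made for this sketch.
EMOTION_ANGLES_DEG = {
    "excited": 60,   # positive valence, high arousal
    "happy": 20,     # positive valence, moderate arousal
    "relaxed": -40,  # positive valence, low arousal
    "angry": 140,    # negative valence, high arousal
    "sad": 200,      # negative valence, low arousal
}


def emotion_point(label: str) -> tuple:
    """(valence, arousal) coordinates of a labelled emotion on the unit circle."""
    theta = math.radians(EMOTION_ANGLES_DEG[label])
    return (math.cos(theta), math.sin(theta))


def nearest_emotion(valence: float, arousal: float) -> str:
    """Map an arbitrary affective state to the closest labelled emotion."""
    return min(EMOTION_ANGLES_DEG,
               key=lambda e: math.dist(emotion_point(e), (valence, arousal)))


if __name__ == "__main__":
    print(nearest_emotion(0.8, 0.3))    # -> 'happy'
    print(nearest_emotion(-0.5, -0.6))  # -> 'sad'
```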
