Mapping Artificial Emotions into a Robotic Face

Gabriele Trovato, Atsuo Takanishi
DOI: 10.4018/978-1-4666-7278-9.ch011

Abstract

Facial expressions are important for conveying emotions and communication intentions among humans. For this reason, humanoid robots should be able to perform facial expressions which represent their inner state in a way that is easy for humans to understand. Several humanoid robots can already perform a certain set of expressions, but their capabilities are usually limited to only the most basic emotions. It is necessary to consider a wider range of expressions and to take advantage of asymmetry. This chapter describes these aspects as well as insights about artificial emotion models, the mapping of the human face onto the robotic face, and finally the generation of facial expressions.

Introduction

In the near future, humanoid robots are expected to play a bigger role in society, and interacting and communicating with people are abilities necessary for that integration.

As communication between two humans is achieved through the simultaneous use of both verbal and non-verbal communication, humanoid robots should be able to use both channels. As humans, we use different types of non-verbal cues, such as kinesics, proxemics, haptics, and paralanguage (Knapp, 1980). Mehrabian and Wiener (1967) were the first to underline the importance of non-verbal communication, stating that the non-verbal channel is even more important than words when the content of the communication involves emotions.

Non-verbal communication can serve different functions. It can express a mental state through affect displays (Mehrabian & Friar, 1969; Patterson et al., 1986), give cues about an individual's personality (Mehrabian & Friar, 1969; Mehrabian, 1972), hint at the current cognitive state (Poggi, 2001; Pelachaud & Poggi, 2002), reveal attitude and anxiety levels (Vinayagamoorthy et al., 2006), and signal relations between people.

In a conversation, the complementary information conveyed by facial expressions helps the interlocutor understand the mental state of the speaker and even detect lies (Ekman, 2009). As the face is considered the most important body area and channel of non-verbal communication (Harper et al., 1978), facial expressiveness is an important ability for a humanoid robot.

A few robots already exist that can perform a certain number of facial expressions, but their repertoire is usually limited to the most basic expressions (fear, anger, disgust, happiness, sadness, and surprise), and the patterns are pre-defined. There is a need to go beyond this traditional approach and instead map artificial emotions into the robotic face. Such a parametric approach, sketched below, would enable the robot to display composite emotions. Moreover, the same concept could be extended to the generation of facial expressions that represent not strictly emotions but rather communication acts (such as incomprehension or rebuke) that are usually present during a conversation.
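To make the parametric idea concrete, the following minimal sketch in Python blends a set of basic-expression poses, weighted by emotion intensity, into a single set of normalised actuator targets; the actuator names and pose values are hypothetical illustrations, not taken from any particular robot described in this chapter.

    # Hypothetical sketch: blending basic-expression poses into composite actuator targets.
    # Pose values are normalised actuator positions in [0, 1]; names and numbers are
    # illustrative assumptions, not the parameters of an actual robotic face.

    BASIC_POSES = {
        "happiness": {"brow_raise": 0.2, "lid_open": 0.6, "lip_corner_pull": 0.9, "jaw_drop": 0.3},
        "surprise":  {"brow_raise": 1.0, "lid_open": 1.0, "lip_corner_pull": 0.1, "jaw_drop": 0.8},
        "sadness":   {"brow_raise": 0.4, "lid_open": 0.3, "lip_corner_pull": 0.0, "jaw_drop": 0.1},
    }

    def blend_expression(emotion_weights):
        """Map an emotion intensity vector (e.g. {'happiness': 0.7, 'surprise': 0.3})
        to one set of actuator targets by weighted averaging of the basic poses."""
        targets = {}
        total = sum(emotion_weights.values()) or 1.0
        for emotion, weight in emotion_weights.items():
            for actuator, value in BASIC_POSES[emotion].items():
                targets[actuator] = targets.get(actuator, 0.0) + value * weight / total
        return targets

    # Example: a composite "pleasantly surprised" expression
    print(blend_expression({"happiness": 0.7, "surprise": 0.3}))

Because the output is a continuous set of actuator targets rather than a fixed pattern, intermediate and composite expressions fall out of the same mapping used for the basic ones.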

The quality of expressions can be improved by taking asymmetry into account. The human face is often not symmetrical about the central vertical line: both emotional expressions and the face at rest can show signs of asymmetry. In character animation, asymmetry is an important way of keeping a drawn character from appearing stiff and still (Thomas & Johnston, 1995). We want to use asymmetry on the robot to produce expressions that look more natural and are thus more easily recognised. In the case of 3D avatars, the implementation of asymmetry in a facial generator has already been attempted (Ahn et al., 2010; Ahn et al., 2011). However, to the best of our knowledge, no study has been done so far on asymmetry in a robotic face.
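As an illustration of how asymmetry could be layered on top of such actuator targets, the sketch below splits selected lateralised actuators into left and right hemiface values that differ by a small bias; the actuator names, the bias value, and the choice of which actuators are lateralised are assumptions made for the example, not a description of the actual system.

    # Hypothetical sketch: introducing lateral asymmetry into an otherwise symmetric expression.

    def apply_asymmetry(targets, bias=0.15, lateralised=("brow_raise", "lip_corner_pull")):
        """Split each lateralised actuator target into left/right hemiface values,
        raising one side and lowering the other by a small bias (clamped to [0, 1])."""
        clamp = lambda v: max(0.0, min(1.0, v))
        asymmetric = {}
        for actuator, value in targets.items():
            if actuator in lateralised:
                asymmetric[actuator + "_left"] = clamp(value + bias)
                asymmetric[actuator + "_right"] = clamp(value - bias)
            else:
                asymmetric[actuator] = value
        return asymmetric

    # Example: add a slight left-dominant asymmetry to a happiness pose
    print(apply_asymmetry({"brow_raise": 0.2, "lip_corner_pull": 0.9, "jaw_drop": 0.3}))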

Key Terms in this Chapter

Communication Act: Minimal unit of communication, performed via verbal or non-verbal channels.

Action Unit: Basic unit of observable movement in the human face, used in Ekman's Facial Action Coding System.

Hemiface: One side (left or right) of the face.

Classification: Assignment of an input feature value to a certain class.

Non-Verbal Communication: The process of communicating through the sending of non-verbal cues.

Facial Cue: Basic movement of a facial part, which contributes to an expression.
