Facial Expression Synthesis and Animation

Ioan Buciu, Ioan Nafornita, Cornelia Gordan
DOI: 10.4018/978-1-61692-892-6.ch009

Abstract

Living in a computer era, synergy between man and machine is a must, as computers are integrated into our everyday life. Computers surround us, but their interfaces are far from friendly. One possible approach to creating a friendlier human-computer interface is to build an emotion-sensitive machine that can recognize a human facial expression with a satisfactory classification rate and, eventually, synthesize an artificial facial expression onto embodied conversational agents (ECAs), defined as friendly and intelligent user interfaces built to mimic human gestures, speech, or facial expressions. Computer scientists working in human-computer interaction (HCI) have put impressive effort into creating fully automatic systems capable of identifying and generating photo-realistic human facial expressions through animation. This chapter presents current state-of-the-art techniques and approaches developed over time to deal with facial expression synthesis and animation. The topic's importance is further highlighted through modern applications, including multimedia applications. The chapter ends with discussions and open problems.

Introduction

In human-to-human interaction (HHI), people mostly use their face and gestures to express emotional states. When communicating with each other, people involve both verbal and non-verbal channels: speech, facial expressions, or gestures (nods, winks, etc.). As pointed out by Mehrabian (Mehrabian, 1968), people express only 7% of their messages through linguistic language, 38% through voice, and 55% through facial expressions. Closely related to HHI, human-computer interaction (HCI) deals with the ways humans communicate with machines. The term is broad and has an interdisciplinary character, concerning various scientific fields such as computer science, computer graphics, image processing, neurophysiology, and psychology. We should note that human-computer interfaces differ from brain-computer interfaces, as the latter describes communication between brain cells and a machine and mainly involves a direct physical link.

Our psychological need to be surrounded by "human-like" machines, in terms of both physical appearance and behavior, is the main driving force behind the development of realistic human-computer interfaces. We would like machines to act like us, interpreting the facial expressions or gestures that convey our emotions and responding accordingly. Over the last decade, scientists' efforts to create emotion-driven systems have been impressive. Although facial expressions represent a prominent way of revealing an emotional state, emotional states may also be expressed through, coupled with, or associated with other modalities, such as gestures, changes in the intonation, stress, or rhythm of speech, blood pressure, etc. However, within the context of this chapter, we consider only facial expressions whenever we refer to emotion.

A fully automatic facial expression analyzer should be able to handle the following tasks (Krinidis, 2003):

  1. Detect (and track) the face in a complex scene with a random background;
  2. Extract relevant facial features;
  3. Recognize and classify facial expressions according to some classification rules.
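The three analyzer tasks above form a pipeline. The following is a minimal sketch of that pipeline's structure; the stage implementations are hypothetical placeholders (a real system would plug in an actual face detector, feature extractor, and trained classifier), not methods from the chapter.

```python
# Sketch of the three-stage facial expression analyzer pipeline.
# All stage bodies below are illustrative stand-ins, not real algorithms.
from dataclasses import dataclass
from typing import List

@dataclass
class Face:
    bbox: tuple  # (x, y, width, height) of the detected face region

def detect_face(frame) -> Face:
    """Stage 1: locate the face in the scene (placeholder: whole frame)."""
    return Face(bbox=(0, 0, len(frame[0]), len(frame)))

def extract_features(frame, face: Face) -> List[float]:
    """Stage 2: extract facial features (placeholder: mean intensity)."""
    x, y, w, h = face.bbox
    region = [frame[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return [sum(region) / len(region)]

def classify_expression(features: List[float]) -> str:
    """Stage 3: map features to an expression label (placeholder rule)."""
    return "happiness" if features[0] > 0.5 else "neutral"

# Usage on a toy 2x2 grayscale "frame" with intensities in [0, 1].
frame = [[0.9, 0.8], [0.7, 0.6]]
label = classify_expression(extract_features(frame, detect_face(frame)))
```

The point of the sketch is the decomposition: each stage can be replaced independently, which is how such analyzers are typically built.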

Likewise, a facial expression synthesizer should:

  1. Create realistic and natural expressions;
  2. Operate in real time;
  3. Require minimal user interaction in creating the desired expression;
  4. Be easily and accurately adaptable to any individual face.
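One common way to meet these requirements is blendshape (morph-target) interpolation, where a neutral face mesh is deformed toward expression targets by weighted offsets. The sketch below illustrates the idea on toy one-dimensional "meshes"; the technique is a standard one in facial animation, but the geometry, target names, and weights here are assumptions for illustration, not the chapter's own method.

```python
# Blendshape interpolation sketch: v = neutral + sum_i w_i * (target_i - neutral),
# applied per vertex coordinate. Meshes here are toy flat coordinate lists.

def synthesize(neutral, targets, weights):
    """Blend a neutral face toward expression targets with per-target weights."""
    result = list(neutral)
    for target, w in zip(targets, weights):
        for i in range(len(result)):
            result[i] += w * (target[i] - neutral[i])
    return result

# Hypothetical targets: three vertex coordinates per face.
neutral = [0.0, 0.0, 0.0]
smile   = [1.0, 0.5, 0.0]    # illustrative "smile" target
frown   = [-1.0, 0.0, 0.5]   # illustrative "frown" target

# A half-strength smile (weight 0.5 on smile, 0.0 on frown).
face = synthesize(neutral, [smile, frown], [0.5, 0.0])
```

Because synthesis reduces to weighted vector arithmetic, the approach runs in real time and requires only weight adjustments from the user, which addresses requirements 2 and 3 above; adapting the targets to an individual face is the harder part.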

This chapter focuses on facial expression synthesis and the animation of synthesized expressions. Once a facial expression is synthesized, facial animation comes next, a task intensively employed in computer graphics applications. For instance, in the film industry, moviemakers try to build virtual human characters that are indistinguishable from real ones. In the games industry, the designed human characters should be interactive and as realistic as possible. Commercial products are available that let users create realistic-looking avatars for chat rooms, e-mails, greeting cards, or teleconferencing. Face synthesis techniques have also been used for compression of talking heads in video-conferencing scenarios, as in the MPEG-4 standard (Raouzaiou, 2002).

The purpose of this chapter is to present current state-of-the-art techniques and approaches developed over time for facial expression synthesis and animation. These techniques are elaborated throughout the chapter along with their limitations. The topic's importance is highlighted through modern applications. The chapter ends with further discussion and open problems.
