Statistical Analysis of Facial Expression on 3D Face Shapes


Jacey-Lynn Minoi, Duncan Gillies
DOI: 10.4018/978-1-60960-541-4.ch008

Abstract

The aim of this chapter is to identify the face areas that carry high facial expression information, which may be useful for facial expression analysis, face and facial expression recognition, and synthesis. In studies of facial expression analysis, landmarks are usually placed on well-defined craniofacial features. In this experiment, the authors selected a set of landmarks based on craniofacial anthropometry and associated each landmark with facial muscles and the Facial Action Coding System (FACS) framework, thereby locating landmarks on less palpable areas that exhibit high facial expression mobility. The selected landmarks are statistically analysed in terms of facial muscle motion based on FACS. Human faces channel both verbal and non-verbal communication: speech, facial expressions of emotion, gestures, and other communicative actions. These cues may therefore be significant in identifying expressions such as pain, agony, anger, and happiness. The authors describe the potential of computer-based models of three-dimensional (3D) facial expression analysis and non-verbal communication recognition to assist in biometric recognition and clinical diagnosis.

Introduction

Facial expressions provide important information that channels non-verbal communication. The human face conveys not only the identity, gender, and age of a person but also their cognitive activity, emotional states, personality, and intentions. The ability to discriminate accurately between expressed emotions is an important part of interaction and communication with others. It also helps listeners elicit the intended meaning of spoken words. Research conducted by Mehrabian (1968) revealed that although humans have verbal language, messages shown on the face provide extra information supplementing verbal communication. The author stated that 55% of effective face-to-face human communication depends on facial expressions, while only 45% relies on language and non-verbal body gestures (such as waving goodbye, pointing, and drooping the head).

Interestingly, humans can recognize the different facial expressions of an unfamiliar person and can recognize a familiar person regardless of that person's facial expression. A similar duality in human-machine interaction makes it possible to automatically recognize both faces and facial expressions in natural human-machine interfaces. These interfaces may be useful in behavioural science, robotics, and medical applications. For example, robots in clinical practice could benefit from the ability to recognize facial expressions. Even though humans have acquired powerful language capabilities, the role of facial expressions in communication remains substantial.

Here, we discuss the quantitative analysis of facial expression data using a collection of 3D face surface datasets. Each surface is recorded as an array of surface points in three-dimensional space. Fiducial landmark points were selected based on craniofacial anthropometry (Kolar & Salter, 1997) and the Facial Action Coding System (FACS) framework (Ekman & Friesen, 1978). Three-dimensional face data is preferred because it contains additional geometric information that eliminates some of the intrinsic problems associated with 2D face systems. Furthermore, the 3D geometry of a face is invariant to changes in lighting and head pose.
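As a concrete illustration of this representation, the sketch below stores a 3D face scan as an (N, 3) array of surface points and records fiducial landmarks as indices into that array. The landmark names and index values are hypothetical examples, not the actual set used in the chapter:

```python
import numpy as np

# Hypothetical 3D face scan: N surface points, each an (x, y, z) coordinate.
face_scan = np.random.rand(5000, 3)

# Assumed landmark names drawn from craniofacial anthropometry; the
# indices are illustrative placeholders, not values from the chapter.
landmarks = {
    "nasion": 120,
    "pronasale": 847,
    "left_exocanthion": 1963,
    "right_exocanthion": 2410,
}

# Extract the 3D coordinates of each annotated landmark.
landmark_coords = np.array([face_scan[i] for i in landmarks.values()])
print(landmark_coords.shape)  # (4, 3): one 3D point per landmark
```

Keeping landmarks as indices rather than copied coordinates means the annotation stays valid if the surface points are rigidly transformed in place.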

Each 3D surface in the dataset was annotated with the same set of chosen landmarks. The movements of landmark points on palpable as well as non-palpable facial features are analysed according to their point motions. The spread and variance of these landmarks across different subjects and facial expressions were studied and analysed. From the gathered information, we conduct facial expression analysis and recognition, and face recognition, on both the selected landmarks and dense surfaces.
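The kind of landmark-motion statistics described above can be sketched as follows. This is a minimal illustration with synthetic data, not the chapter's actual method: it computes per-landmark displacement vectors between a neutral scan and an expression scan for each subject, then measures the spread of the motion magnitudes across subjects:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_landmarks = 10, 4

# Synthetic stand-in data: landmark coordinates for a neutral scan and
# an expression scan per subject, shape (subjects, landmarks, 3).
neutral = rng.normal(size=(n_subjects, n_landmarks, 3))
expression = neutral + rng.normal(scale=0.1, size=(n_subjects, n_landmarks, 3))

# Displacement vector of each landmark between the two scans.
displacement = expression - neutral               # (subjects, landmarks, 3)
magnitude = np.linalg.norm(displacement, axis=2)  # Euclidean motion per landmark

# Spread of each landmark's motion across subjects: higher variance
# suggests the landmark carries more expression-dependent mobility.
mean_motion = magnitude.mean(axis=0)  # average motion per landmark
motion_var = magnitude.var(axis=0)    # variance across subjects
print(mean_motion.shape, motion_var.shape)  # (4,) (4,)
```

In practice the scans would first need to be rigidly aligned (e.g. by Procrustes registration) so that measured displacements reflect expression motion rather than head pose.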
