Detecting Facial Expressions for Monitoring Patterns of Emotional Behavior


Nikolaos Bourbakis (Assistive Technologies Research Center (ATRC), College of Engineering and Computer Science, Wright State University, Dayton, OH, USA)
DOI: 10.4018/ijmstr.2013040101

Abstract

Detecting faces and facial expressions has become a common task in human-computer interaction systems. A face-facial detection system must be able to detect faces under various conditions and extract their facial expressions. Many approaches for face detection have been proposed in the literature, mainly dealing with the detection or recognition of faces in still images rather than with the person's facial expressions and the emotional behavior they reflect. In this paper, the author describes a synergistic methodology for detecting frontal high-resolution color faces and for recognizing their facial expressions accurately in realistic settings, both indoors and outdoors, and under a variety of conditions (shadows, highlights, non-white lights). The methodology associates these facial expressions with emotional behavior. It extracts important facial features, such as eyes, eyebrows, nose, and mouth (lips), and defines them as the primitive elements of the alphabet of a simple formal language, in order to synthesize these facial features and generate emotional expressions. The main goal of this effort is to monitor emotional behavior and learn from it. Illustrative examples are also provided as proof of concept for the methodology.
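The abstract's formal-language idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature-state symbols, the alphabet, and the expression table below are hypothetical examples of how extracted facial features could serve as primitives that are composed into "words" denoting emotional expressions.

```python
# Hypothetical sketch: each facial feature contributes one symbol from a
# small alphabet of states, and an expression is a "word" formed from one
# symbol per feature. Symbol names and the table are illustrative only.

# Alphabet: admissible state symbols per facial feature.
FEATURE_STATES = {
    "eyebrows": {"raised", "neutral", "furrowed"},
    "eyes":     {"wide", "neutral", "narrowed"},
    "mouth":    {"open", "neutral", "corners_up", "corners_down"},
}

# A tiny lookup table associating feature words with emotional labels.
EXPRESSION_TABLE = {
    ("raised", "wide", "open"):               "surprise",
    ("neutral", "neutral", "corners_up"):     "happiness",
    ("furrowed", "narrowed", "corners_down"): "anger",
}

def classify(eyebrows: str, eyes: str, mouth: str) -> str:
    """Validate each symbol against the alphabet, then look up the word."""
    for feature, symbol in (("eyebrows", eyebrows),
                            ("eyes", eyes), ("mouth", mouth)):
        if symbol not in FEATURE_STATES[feature]:
            raise ValueError(f"unknown {feature} symbol: {symbol!r}")
    return EXPRESSION_TABLE.get((eyebrows, eyes, mouth), "unknown")

print(classify("raised", "wide", "open"))        # surprise
print(classify("neutral", "neutral", "neutral")) # unknown
```

A real system would of course derive the state symbols from image analysis rather than take them as strings; the point here is only the alphabet-and-synthesis structure the abstract describes.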

1. Introduction

A great number of different approaches for face detection and facial expression recognition have been proposed (Ekman 1992a, Morishima 2001, Bourbakis 2008, Fasel-Luettin 2003, Hjelmas-Low 2001). To achieve good performance, many of these methods assume (even though this is not always true in realistic data sets) that the face is either segmented or surrounded by a simple background, and that the images are well illuminated with a frontal facial pose. The robustness of these approaches is challenged by many factors, such as changes in illumination across the scene, shadows, cluttered backgrounds, image scale, facial pose, orientation, and facial expressions. Specifically, in (Zeng et al., 2009, Jain et al., 1999, Wiskott et al. 1997, Kong 2005, Wang et al., 2009, Whitehill et al., 2009, Tsalakanidou-Malassiotis 2009) the most recent and most representative face recognition methods have been presented. These methods have been compared to some degree on a number of parameters, such as complexity, real-time performance, robustness in real cases, and others. These surveys have covered nearly all the methods for face recognition and, to a good extent, have addressed facial expression recognition. In addition, recently in (Wang et al., 2009) face recognition under varying illumination is presented; in (Whitehill et al., 2009) the smiling facial expression is studied; and in (Tsalakanidou-Malassiotis 2009) a completely automated facial action and facial expression recognition system using 2D+3D images recorded in real time by a structured light sensor is presented. Also, in (Wiskott et al. 1997) another face recognition method, using elastic bunch graph matching based on the Voronoi grid, is described.
Although the latter method is somewhat close to our approach, it differs in several ways: (i) our LG graph model (Bourbakis 1988) is different from the Voronoi grid used by the authors in (Wiskott et al. 1997); (ii) we represent detailed local regional information as nodes of the LG graph, which also holds the geometry of the main features of a face (eyes, eyebrows, nose, lips), whereas they use Gabor wavelet coefficients to achieve a more accurate location of the nodes and to disambiguate patterns that would be similar in their coefficient magnitudes, and they employ object-adapted graphs, so that nodes refer to specific facial landmarks, called fiducial points; (iii) we use the LG graph, which connects the largest region with all the other regions, while they use the bunch graph, which serves as a generalized representation of faces by combining jets of a small set of individual faces.
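The structural distinction in point (iii) can be sketched in a few lines. This is a hypothetical illustration of the star-shaped connectivity described above (largest segmented region linked to every other region), not the paper's actual LG graph implementation; the region names and attributes are invented for the example.

```python
# Illustrative sketch of the connectivity pattern described for the LG
# graph: each segmented region becomes a node carrying local attributes,
# and the largest region is connected to all other regions. The Region
# fields (area, centroid) and the sample regions are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Region:
    name: str
    area: int              # pixel count of the segmented region
    centroid: tuple        # (x, y) location within the face image


@dataclass
class LGGraph:
    nodes: list
    edges: list = field(default_factory=list)

    @classmethod
    def from_regions(cls, regions):
        # The largest region acts as the hub of a star-shaped graph.
        hub = max(regions, key=lambda r: r.area)
        edges = [(hub.name, r.name) for r in regions if r is not hub]
        return cls(nodes=regions, edges=edges)


regions = [
    Region("face_skin", area=5000, centroid=(50, 60)),
    Region("left_eye",  area=120,  centroid=(35, 40)),
    Region("right_eye", area=118,  centroid=(65, 40)),
    Region("mouth",     area=300,  centroid=(50, 85)),
]
g = LGGraph.from_regions(regions)
print(g.edges)  # every edge runs from face_skin to another region
```

The contrast with a bunch graph is that here the topology is fixed by region size within a single face, whereas a bunch graph generalizes over a set of example faces at shared fiducial points.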
