Estimating Emotions Using Geometric Features from Facial Expressions

A. Vadivel, P. Shanthi, S.G. Shaila
DOI: 10.4018/978-1-4666-5888-2.ch369

Chapter Preview



Emotions, like consciousness, are an emergent property of the human mind, arising from the interaction among core cognitive processes. Cognitive models already exist for other core cognitive processes such as problem solving, decision making, memory, and reasoning. Although the origin of emotion is not fully understood, cognitive models of emotion can be built from observable emotional responses. One such physiological response to an event is the facial expression. The human face plays an important role in interpersonal communication and is considered the most important channel when mixed modes of information are conveyed. Facial expressions provide rich visual cues about a person's internal mental state and are produced by the contraction of muscles attached to the facial skin. These contractions change the appearance of facial components such as the eyebrows, nose, and mouth. Because facial expressions are a natural form of feedback to others, they can be exploited in a wide variety of computer applications in which a system recognizes the user's internal emotional state from visual cues and reacts accordingly. With valid and reliable methodologies for measuring facial behavior, expressions can serve as a natural interface in applications such as human-computer interaction, computer surveillance, gaming, entertainment, teleconferencing, medicine, and education. In this article, human emotion is estimated using suitable geometric features extracted from facial expressions. The resulting geometric feature patterns can be used as a feedback mechanism in the classroom to gauge the mental state of students, and a suitable feedback scheme can then be devised to improve the teaching-learning process.

The Facial Action Coding System (FACS) proposed by Ekman and Friesen (1978) has long been used for the manual interpretation of facial expression, and it remains the standard code followed by most researchers studying facial behavior. FACS describes facial muscle movements in terms of 46 Action Units (AUs), and combinations of these action units identify various expressions. Based on the descriptions of the AUs, six facial expressions common across cultures were identified: anger, disgust, fear, happiness, sadness, and surprise (Ekman & Friesen, 1971). This recognition is based purely on changes to facial features such as the eyes, nose, and mouth. After FACS, the problem attracted much interest in related fields such as image processing for face detection and facial feature point extraction and tracking. The solutions proposed for it can be broadly classified into two categories: geometric and appearance-based methods. Because facial configuration is not the same for everyone, early work used the geometric deformation of features to recognize an expression, measuring facial feature changes as relative distances and angles with reference to a neutral face. Similar geometric deformation vectors are used in face modeling technology to create personalized avatars in virtual worlds. A talking-head system has been proposed (Liu, Zhang, Jacobs, & Cohen, 2001) in which linear combinations of AUs are stored as mesh deformation vectors, and a facial expression is generated by adding a deformation vector to the neutral face mesh. Similar geometric approaches that use a priori information for expression recognition have also been proposed (Jeng, Liao, Liu, & Chern, 1998; Lin & Wu, 1999). The limitation of these methods is that they are not suitable for handling multiple faces. Moreover, temporal information is not considered, and variations in illumination and head pose render the a priori information useless.
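The geometric approach described above — measuring feature changes as relative distances with reference to a neutral face — can be sketched in a few lines of plain Python. The landmark names and coordinates below are invented for illustration; in practice they would come from a facial landmark detector.

```python
import math

# Hypothetical 2D landmark positions (x, y) for a neutral face and an
# expressive face; the keys and coordinates are illustrative only.
NEUTRAL = {
    "left_brow":   (30.0, 40.0),
    "left_eye":    (32.0, 50.0),
    "mouth_left":  (35.0, 80.0),
    "mouth_right": (65.0, 80.0),
}
SURPRISE = {
    "left_brow":   (30.0, 34.0),   # brow raised
    "left_eye":    (32.0, 50.0),
    "mouth_left":  (38.0, 82.0),
    "mouth_right": (62.0, 82.0),   # mouth narrowed
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def deformation_features(face, neutral):
    """Signed relative change of key distances w.r.t. the neutral face."""
    pairs = [("left_brow", "left_eye"), ("mouth_left", "mouth_right")]
    feats = []
    for p, q in pairs:
        d_face = dist(face[p], face[q])
        d_neutral = dist(neutral[p], neutral[q])
        feats.append((d_face - d_neutral) / d_neutral)
    return feats

features = deformation_features(SURPRISE, NEUTRAL)
# features[0] > 0: brow-to-eye distance grew (a brow raise)
# features[1] < 0: mouth width shrank
```

Each face is thus reduced to a vector of normalized deformations, which can be fed to a classifier; dividing by the neutral-face distance makes the features scale-invariant, addressing the point that facial configuration differs from person to person.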

Key Terms in this Chapter

Feature Vector: A numerical representation of an object's features as an n-dimensional vector, used in pattern recognition and machine learning.

Facial Expression: A collection of facial muscle movements that communicates internal emotional information to others in a particular context.

Emotions: Cognitive phenomena that emerge from the interaction among core cognitive processes in response to a given external stimulus.

Confusion Matrix: A table for visualizing the performance of a supervised machine learning algorithm; its diagonal shows the number of correct predictions for each class.

Geometric Transformation: A process that changes the position of each point in a shape, possibly altering the shape's size, orientation, and position.

Region of Interest: A selected subset of the pixels of a given image, chosen for a particular purpose; it may be an arbitrary region or a regular sub-image of the input image.

Cognitive Science: The interdisciplinary study of cognitive processes of the human mind, such as memory, problem solving, decision making, attention, and emotions, with the aim of creating computational models.
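The feature-vector and confusion-matrix terms above can be illustrated with a minimal sketch in plain Python; the labels and predictions are invented for illustration only.

```python
# Toy expression labels: rows of the matrix are true classes,
# columns are predicted classes.
LABELS = ["happy", "sad", "surprise"]

def confusion_matrix(true, pred, labels):
    """Build a square confusion matrix from paired label lists."""
    idx = {lab: i for i, lab in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(true, pred):
        m[idx[t]][idx[p]] += 1
    return m

true = ["happy", "happy", "sad", "surprise", "sad", "surprise"]
pred = ["happy", "sad",   "sad", "surprise", "sad", "happy"]

cm = confusion_matrix(true, pred, LABELS)
# The diagonal counts correct predictions; summing it gives accuracy.
accuracy = sum(cm[i][i] for i in range(len(LABELS))) / len(true)
```

Here four of the six samples fall on the diagonal, so the overall prediction accuracy is 4/6.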
