Facial Emotion Recognition System Using Entire Feature Vectors and Supervised Classifier

Manoj Prabhakaran Kumar, Manoj Kumar Rajagopal
DOI: 10.4018/978-1-7998-2108-3.ch003

Abstract

This chapter proposes a facial expression recognition system that combines the entire feature vectors of a geometric deformable model with a supervised classifier in order to analyze the set of prototype expressions in frontal macro facial expressions. In the training phase, face detection and tracking are carried out by a constrained local model (CLM) on a standardized database. From the CLM grid nodes, facial feature extraction yields the displacement of the entire feature vector, which comprises 66 feature points. The feature vector displacement is fed to a bi-linear support vector machine (SVM) classifier to evaluate the facial expressions and build the trained model. The testing phase is carried out in the same way, and its outcome is compared against the trained model to identify human emotions. Two normalization techniques and hold-out validation are applied in both phases. With this model, the overall validation performance is higher than that of existing models.
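
As a rough illustration of the pipeline summarized above, the Python sketch below computes entire-feature-vector displacements from 66 landmark points and trains a linear SVM with hold-out validation. It is a minimal sketch, not the authors' implementation: random arrays stand in for the CLM landmark tracks, scikit-learn's linear SVC stands in for the bi-linear SVM described in the chapter, and MinMaxScaler illustrates only one of the two normalization techniques.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

N_SAMPLES, N_POINTS = 300, 66          # hypothetical dataset size; 66 CLM grid nodes
rng = np.random.default_rng(0)

# Stand-ins for CLM landmark tracks: (x, y) for each of the 66 points
# on the neutral frame and on the expression-apex frame.
neutral_pts = rng.normal(size=(N_SAMPLES, N_POINTS, 2))
peak_pts = neutral_pts + rng.normal(scale=0.1, size=(N_SAMPLES, N_POINTS, 2))
labels = rng.integers(0, 6, size=N_SAMPLES)   # six prototype emotions

# Entire-feature-vector displacement: per-point motion between neutral
# and apex frames, flattened to 132 dimensions (66 points x 2 coords).
features = (peak_pts - neutral_pts).reshape(N_SAMPLES, -1)

# Hold-out validation, with normalization fitted on the training split only.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)
scaler = MinMaxScaler().fit(X_train)
clf = SVC(kernel="linear").fit(scaler.transform(X_train), y_train)
print("hold-out accuracy:", clf.score(scaler.transform(X_test), y_test))

In practice, neutral_pts and peak_pts would come from running an actual CLM tracker over the expression sequences, and the label vector from the annotations of the standardized database.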

Introduction

Since the 4th century BC (the Aristotelian era), researchers have been interested in studying physiognomy and facial expression (Highfield, 2009). Physiognomy is the assessment of a person’s personality, character, or behavior based on his/her outer appearance, especially the face. Over the years, physiognomy has lost its standing as a study of human behavior, but the facial expressions of humans have remained a continuously active topic. The foundational studies behind today’s research on human facial expressions can be traced to the 17th century. In his book Pathomyotomia, Bulwer details the muscle movements of the human head and the various expressions in humans (Bulwer, 1649). In 1667, Le Brun lectured on the physiognomy of the face; his theories were later revised and republished (Montagu, 1994). In the 18th century, artists and actors referred to Le Brun’s book to achieve “the perfect imitation of ‘genuine’ facial expression.” It was in the 19th century, however, that Darwin’s statements gave facial expression its direct relationship to automatic facial expression analysis. Charles Darwin’s book, first published in 1872 and reissued in 1965, states the principles of basic emotions that evolved in both humans and animals, groups the various kinds of expressions, and catalogs the facial deformations (Darwin, 1904). In 1884, William James proposed the James-Lange theory, which holds that the various kinds of expressions derive from stimuli in the body that evoke physiological responses (James, 1884).

Another important milestone for the study of facial expressions was the outline of the six basic emotions in humans (i.e., surprise, happiness, disgust, fear, anger, and sadness) (Friesen, 1969). Ekman (1977) developed the automatic human facial expression recognizer, and analyses of the muscular movements of facial expressions showed the different emotions in humans through photographic stimuli. Then, in 1978, Wallbott and Scherer determined emotion from the body’s muscular movements and speech signals (Wallbott, 1986). Suwa, Mase, and Pentland established the automatic recognition of facial expressions by analyzing image sequences of tracking points, although these tracking points could not be observed clearly until the 1990s (Pentland, 1991; Sugie, 1978). Later, in 1992, Samal and Iyengar established facial feature and expression analysis from tracking points in movie frames and also established a robust model of an automatic facial expression recognition system, which requires facial feature detection and a facial tracking system (Samal, 1992). Since the 1990s, researchers have become increasingly interested in human emotion recognition for Human Computer Interaction (HCI), affective computing, and so forth. Emotion recognition in humans was established by Karpouzis (2005), and the various modes of extraction are physiological signals (EEG, ECG, etc.) and non-physiological signals (face, body, speech, text, etc.). Karpouzis (2005) states that facial expression recognition is the best of these modes of extracting emotion. Accordingly, since 1990, researchers have concentrated mostly on robust models of automatic human emotion recognition through the face, as compared to the other modes of extracting emotion.
