Emotion Recognition from Facial Expression and Electroencephalogram Signals


Amit Konar, Aruna Chakraborty, Pavel Bhowmik, Sauvik Das, Anisha Halder
DOI: 10.4018/978-1-61350-429-1.ch017

Abstract

This chapter proposes new approaches to emotion recognition from facial expression and electroencephalogram (EEG) signals. Subjects are excited with selective audio-visual stimuli responsible for the arousal of specific emotions. The manifestations of emotion that appear in the facial expression and the EEG are recorded. The recorded information is then analyzed to extract features, and a support vector machine classifier is used to classify the extracted features into emotion classes. An alternative scheme for emotion recognition directly from the electroencephalogram signals using a Duffing oscillator is also presented. Experimental results are given to compare the relative merits of the proposed schemes with existing works.
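
The abstract does not spell out how the Duffing oscillator is applied to the EEG signal, so the following is only a minimal numerical sketch of forced Duffing dynamics, assuming the EEG acts as a small perturbation added to the periodic drive. The parameter values, the eeg_drive stand-in, and the use of the phase trajectory as an indicator are illustrative assumptions, not the chapter's actual scheme.

import numpy as np
from scipy.integrate import solve_ivp

# Forced Duffing oscillator: x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t) + s(t)
delta, alpha, beta = 0.3, -1.0, 1.0   # damping, linear and cubic stiffness (illustrative values)
gamma, omega = 0.37, 1.2              # amplitude and frequency of the periodic drive (illustrative values)

def eeg_drive(t):
    # Stand-in s(t) for an EEG-derived perturbation; a small sinusoid replaces a real recording here.
    return 0.05 * np.sin(2.0 * np.pi * 10.0 * t)

def duffing(t, state):
    x, v = state
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t) + eeg_drive(t)]

sol = solve_ivp(duffing, (0.0, 200.0), [0.1, 0.0], max_step=0.01)
x, v = sol.y
# The (x, v) phase trajectory can then be inspected (e.g., periodic vs. chaotic behaviour)
# as an indicator of properties of the driving signal.
print("Samples:", x.size, "final state:", (x[-1], v[-1]))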
Chapter Preview

Introduction

Emotions represent internal psychological states of the human mind (Gordon, 1990). Recognition of human emotion from its external manifestations, such as facial expressions, voice, and physiological signals, is a complex decision-making problem. Several approaches to solving this problem have been attempted, but no fully satisfactory solution is known to date. Usually, emotion recognition is regarded as a pattern classification/clustering problem. As in classical pattern classification, we extract representative features from the external manifestation of the subject, pre-process them, and then feed them to a classifier that assigns the manifestation to one of several possible emotion classes, such as anger, fear, and happiness. The main hurdles in emotion recognition lie in identifying suitable features, designing appropriate pre-processing/filtering algorithms to segregate the emotive components from the natural ambience, and selecting the most appropriate classification/clustering algorithm to classify the emotions from their measured attributes.
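
As a rough illustration of this feature-extraction-plus-classification pipeline, the sketch below trains a support vector machine on synthetic feature vectors. The feature dimensionality, the four emotion labels, and the standardize-then-classify arrangement are assumptions chosen for demonstration only, not the chapter's actual feature set or parameters.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature matrix: 300 trials, 24 features per trial
# (e.g., EEG band powers per electrode or facial-geometry measures).
X = rng.normal(size=(300, 24))
# Hypothetical labels for four emotion classes (0=anger, 1=fear, 2=happiness, 3=relaxation).
y = rng.integers(0, 4, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize the features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))

In practice the random arrays would be replaced by features extracted from the recorded facial images and EEG, with the classifier evaluated by cross-validation over subjects or trials.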

Although there is a vast literature on each of the above sub-problems, emotion recognition remains a hard problem for the following reasons. First, the baseline ambience of individuals differs significantly. For example, the physiological conditions of individual subjects, including blood pressure, body temperature, electrocardiogram (ECG), electromyogram (EMG), and electroencephalogram (EEG), vary widely between the presence and absence of a specific emotive experience, so finding a generic consensus for the ambience is not always easy. Further, a subject experiencing similar emotions at different times often shows significant differences in his/her external manifestations, so accurately identifying one's emotional state from measurements of his/her physiological conditions is also difficult. Moreover, subjects excited with a stimulus responsible for the arousal of a specific emotion sometimes manifest mixed emotions, and recognition becomes considerably more complex when mixed emotions are aroused.

The early research on emotion recognition was mainly confined to facial expression analysis (Ekman and Friesen, 1975; Fernandez-Dols et al., 1991; Black and Yacoob, 1997; Essa and Pentland, 1997; Donato et al., 1999; Zeng et al., 2006). This period lasted for around two decades, and its primary aim was to study the performance of recognition algorithms. As a sequel, several classification algorithms involving neural nets (Kobayashi and Hara, 1993; 1993a; 1993b; Ueki et al., 1994; Kawakami et al., 1994; Rosenblum et al., 1996; Uwechue and Pandya, 1997; Chakraborty, 2009b), fuzzy sets (Izumitani, 1984), and optic flow (Mase, 1991; Yacoob and Davis, 1996; Sprengelmeyer, 1998) were attempted to solve the emotion recognition problem. Since the beginning of the millennium, researchers have taken an active interest in designing algorithms for emotion recognition from multiple sources, including facial expression, voice, and physiological signals such as pulse rate, body temperature, and ECG (Takahashi, 2004; Kollias and Karpouzis, 2005; Castellano et al., 2007). Around the same period, a small fraction of emotion researchers attempted to develop new firmware for emotion recognition/synthesis for possible integration into the next generation of human-computer interaction (HCI) systems (Lisetti and Schiano, 2000; Cowie et al., 2001; Brave and Nass, 2002). The next generation of computers is thus expected to be smarter than traditional ones, as they would have the potential to recognize the emotion of the user and synthesize an emotional reaction to the user's input. Bashyal and Venayagamoorthy (2008) employed Gabor wavelets and learning vector quantization for recognition of facial expression.
