Emotional State Recognition Using Facial Expression, Voice, and Physiological Signal

Tahirou Djara, Abdoul Matine Ousmane, Antoine Vianou
Copyright © 2018 | Pages: 20
DOI: 10.4018/IJRAT.2018010101

Abstract

Emotion recognition is an important aspect of affective computing, one of whose aims is the study and development of behavioral and emotional interaction between humans and machines. In this context, another important point concerns the acquisition devices and signal processing tools that lead to an estimation of the user's emotional state. This article presents a survey of concepts around emotion, multimodality in recognition, physiological activities and emotional induction, and methods and tools for acquisition and signal processing, with a focus on processing algorithms and their degree of reliability.
Article Preview

Emotion Recognition System Architecture

The analysis of existing emotion recognition systems reveals a decomposition into three levels, each fulfilling a specific function: the capture, analysis, and interpretation levels. Figure 1 shows the emotion recognition system architecture.

Figure 1. Emotion recognition system architecture

At the capture level, information is captured from the real world, and in particular from the user, through devices (camera, microphone, etc.). This information is then processed at the analysis level, where emotionally relevant characteristics are extracted from the captured data. Finally, the extracted characteristics are interpreted to obtain an emotion. This division into three levels (capture, analysis, and interpretation) is classic in emotion recognition and forms a functional pattern on which we rely to develop a model.
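As a rough, purely illustrative sketch of this three-level decomposition (not code from the article), the following Python fragment chains a capture, analysis, and interpretation step; the simulated signal, the extracted features, and the energy threshold are all placeholder assumptions.

```python
# Minimal sketch of the capture -> analysis -> interpretation pipeline.
# The signal source, features, and threshold below are illustrative placeholders.

import random


def capture_signal(n_samples: int = 128) -> list[float]:
    """Capture level: stand-in for a device driver (camera, microphone, sensor)."""
    return [random.gauss(0.0, 1.0) for _ in range(n_samples)]


def extract_features(samples: list[float]) -> dict[str, float]:
    """Analysis level: derive emotionally relevant characteristics from raw data."""
    mean = sum(samples) / len(samples)
    energy = sum(s * s for s in samples) / len(samples)
    return {"mean": mean, "energy": energy}


def interpret(features: dict[str, float]) -> str:
    """Interpretation level: map feature values to a discrete emotion label.
    A real system would use a trained model (neural network, HMM, ...)."""
    return "aroused" if features["energy"] > 1.0 else "calm"


if __name__ == "__main__":
    emotion = interpret(extract_features(capture_signal()))
    print("Estimated emotional state:", emotion)
```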

This architecture model offers five component types (Figure 2). Each component subscribes to and issues one or more data streams. The capture unit has the role of interfacing with a physical device for capturing data. The feature extractor analyzes input data in order to extract one or more emotionally relevant characteristics. An interpreter receives the values of several characteristics; its role is to interpret these values as an emotion. This interpretation depends on the emotion model considered (discrete, continuous, or componential) as well as the computational algorithm used (e.g., neural network, hidden Markov model).
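To make the component-and-stream idea concrete, here is a hypothetical Python sketch (not the authors' implementation): components publish to and subscribe to named data streams on a tiny in-process bus, and a rule-based interpreter stands in for the neural network or hidden Markov model an actual system would use. The stream names and the smile feature are assumptions.

```python
# Hypothetical publish/subscribe wiring of the component types described above.
# Stream names, the smile feature, and the interpretation rule are illustrative only.

from collections import defaultdict
from typing import Any, Callable


class StreamBus:
    """Tiny in-process event bus: components publish to and subscribe to named streams."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, stream: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[stream].append(handler)

    def publish(self, stream: str, data: Any) -> None:
        for handler in self._subscribers[stream]:
            handler(data)


def feature_extractor(bus: StreamBus, frame: list[float]) -> None:
    """Feature extractor: turns raw data into an emotionally relevant characteristic."""
    smile_intensity = sum(frame) / len(frame)  # placeholder characteristic
    bus.publish("features/face", {"smile": smile_intensity})


def interpreter(bus: StreamBus, features: dict[str, float]) -> None:
    """Interpreter: maps characteristics to an emotion under a discrete emotion model."""
    label = "joy" if features["smile"] > 0.5 else "neutral"
    bus.publish("emotion", label)


if __name__ == "__main__":
    bus = StreamBus()
    bus.subscribe("raw/video", lambda frame: feature_extractor(bus, frame))
    bus.subscribe("features/face", lambda feats: interpreter(bus, feats))
    bus.subscribe("emotion", lambda label: print("Interpreted emotion:", label))
    # Capture unit: stand-in for a device driver publishing a simulated video frame.
    bus.publish("raw/video", [0.2, 0.9, 0.8])
```

Keeping components decoupled behind streams in this way mirrors the subscribe/issue relationship described above: a new capture device or a different interpreter can be added without modifying the other components.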
