AI-Based Emotion Recognition

Mousami Prashant Turuk, Sreemathy R., Shardul Sandeep Khandekar, Soumya Sanjay Khurana
Copyright: © 2023 | Pages: 21
DOI: 10.4018/978-1-7998-9220-5.ch049

Abstract

Behaviors, actions, pose, facial expressions, and speech are considered channels that convey human emotions, and extensive research has explored the relationships between these channels and emotions. The proposed method consists of a neural network-based solution combined with image processing and speech processing to classify four universal emotional states: happiness, anger, sadness, and neutral. Speech processing includes the extraction of spectral and temporal features such as MFCCs and energy, which are given as input to the neural network. In image processing, Gabor filter texture features are used to extract a set of selected feature points; mutual information is computed over these features and given as input to the neural network for classification. The experimental results demonstrate the efficacy of audio-visual cues, especially when only a few prominent features are used, as the overall accuracy of the combined approach is above 85%.
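As a rough illustration of the visual branch of this pipeline, the sketch below computes a Gabor texture response and estimates mutual information between the input image and that response. The filter parameters, the 32-bin histogram discretization, and the stand-in image are illustrative assumptions, not the chapter's actual configuration, and the neural-network classification step is omitted.

    # Illustrative sketch, not the chapter's exact pipeline: one Gabor
    # texture response plus a mutual-information estimate against the input.
    import numpy as np
    from skimage import data
    from skimage.filters import gabor
    from sklearn.metrics import mutual_info_score

    image = data.camera().astype(float)  # stand-in grayscale image

    # Gabor response captures texture at one assumed frequency/orientation.
    real, imag = gabor(image, frequency=0.2, theta=np.pi / 4)
    magnitude = np.hypot(real, imag)

    # Estimate mutual information from a coarse joint histogram
    # (32 bins is an arbitrary choice).
    bins = 32
    x = np.digitize(image.ravel(), np.histogram_bin_edges(image, bins))
    y = np.digitize(magnitude.ravel(), np.histogram_bin_edges(magnitude, bins))
    print(f"mutual information: {mutual_info_score(x, y):.3f} nats")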

Introduction

Due to recent advancements in technology, humans can interact with computers in ways that were previously unimaginable. Human-computer interaction is a multidisciplinary field that focuses on designing computer technology to ease the interaction between computers and humans. New modalities such as voice and gestures extend the traditional interaction methods confined to the keyboard and mouse. Voice and vision play a significant role in human-to-human communication, so it is desirable for computers to comprehend the environment from visual as well as audio cues. This goal is supported by the growth of computer vision and natural language processing and by the era of machine learning and deep learning, which has helped to model the real world.

Machine learning has provided a means for machines to extract useful information from images as well as speech. Applications such as image classification, image segmentation, object detection, text understanding, and pattern recognition are used on a day-to-day basis. Even with such advancements, machines still fail to understand the emotion of a person, which can lead to a failure to understand the context entirely. In the current era of Industry 4.0, with huge amounts of data available, industries in every field are using artificial intelligence to tackle the problem of pattern recognition.

Emotion is a mental or psychological state mainly associated with the feelings, thought processes, and behavior of humans. The emotional state of a person conveys not only their mood but also their personality. Humans exchange information through multiple domains such as speech, text, and visual images. In verbal communication, the same words expressed with different emotions can convey different meanings. Identification of emotional states using only audio cues is hence inadequate and needs to be fused with visual cues. This chapter aims to analyze and present a unified approach for audio-visual emotion recognition based on the backpropagation algorithm.

Emotion is a concept involving three components:

  • Subjective experience.

  • Expressions (audio-visual: face, gesture, posture, voice intonation, breathing noise).

  • Biological arousal (heart rate, respiration frequency/intensity, perspiration, temperature, muscle tension, brain wave).

After recognizing universality in emotions despite cultural differences, Ekman et al. (1978) classified six emotional expressions as universal: happiness, sadness, anger, disgust, surprise, and fear.

Computer vision techniques have enabled computers to understand their environment. Interacting with computers through voice and gesture modalities is much more natural for people, and the progression is toward the kind of interaction that occurs between humans. Despite these advances, one ingredient necessary for natural interaction is still missing: emotion. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for computers in applications such as improving driver safety, diagnosing medical conditions, and detecting lies. This chapter recognizes human emotions based on audio-visual cues.

Key Terms in this Chapter

PSD: The power spectral density (PSD) of a signal describes how the signal's power is distributed over frequency, expressed as power per unit frequency.
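A minimal sketch of estimating a PSD with SciPy's Welch method; the sampling rate and the toy tone are illustrative placeholders:

    import numpy as np
    from scipy.signal import welch

    fs = 16000                    # assumed sampling rate (Hz)
    t = np.arange(fs) / fs        # one second of samples
    signal = np.sin(2 * np.pi * 440 * t)

    # Welch's method averages windowed periodograms; units are power per Hz.
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    print(freqs[np.argmax(psd)])  # peak near 440 Hz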

HCI: Human-computer interaction (HCI) is a multidisciplinary field of study that focuses on the design of computer technology and the interaction between humans and computers.

MFCC: Mel-frequency cepstral coefficients (MFCCs) collectively make up the mel-frequency cepstrum (MFC), a representation of the short-term power spectrum of a sound based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency.
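A minimal sketch of MFCC extraction with librosa; the audio path, sampling rate, and coefficient count (13 per frame is a common choice) are placeholder assumptions:

    import librosa

    # Load a hypothetical speech clip, resampled to 16 kHz.
    y, sr = librosa.load("speech.wav", sr=16000)
    mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    print(mfccs.shape)  # (13, number_of_frames)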

ZCR: The zero-crossing rate (ZCR) is the rate at which a signal changes from positive to zero to negative or from negative to zero to positive.
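A minimal sketch computing the ZCR directly from this definition in NumPy; the test frame is a toy sine wave:

    import numpy as np

    def zero_crossing_rate(frame):
        # Fraction of consecutive sample pairs whose sign differs.
        signs = np.sign(frame)
        return np.mean(np.abs(np.diff(signs)) > 0)

    frame = np.sin(np.linspace(0, 20 * np.pi, 400))  # 10 full cycles
    print(zero_crossing_rate(frame))  # about 0.05 (~20 crossings / 400 samples)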

FFT: The fast Fourier transform (FFT) is an efficient algorithm for computing the discrete Fourier transform, which decomposes a sequence of values into components of different frequencies.
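A minimal sketch using NumPy's FFT to recover the frequencies of a toy two-tone signal; the sampling rate and tone frequencies are placeholders (1000 samples at 1000 Hz puts the bins on exact integer frequencies):

    import numpy as np

    fs = 1000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    spectrum = np.fft.rfft(x)                  # complex frequency components
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]  # two strongest bins
    print(sorted(peaks))                       # [50.0, 120.0]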

Back-Propagation: Backpropagation is an algorithm that fine-tunes the weights and biases of an artificial neural network to reduce output error and improve accuracy.
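A minimal sketch of backpropagation for a one-hidden-layer network trained on XOR in plain NumPy; the layer sizes, learning rate, iteration count, and seed are arbitrary illustrative choices, not the chapter's network:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    lr = 0.5

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)             # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)  # output-layer error signal
        d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated backward
        W2 -= lr * h.T @ d_out               # gradient-descent updates
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]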
