Automatic Multiface Expression Recognition Using Convolutional Neural Network

Padmapriya K.C., Leelavathy V., Angelin Gladston
DOI: 10.4018/IJAIML.20210701.oa8

Abstract

Human facial expressions convey a great deal of information visually, and facial expression recognition plays a crucial role in human-machine interaction. Automatic facial expression recognition systems have many applications, including understanding human behavior, detecting mental disorders, and synthesizing human expressions. Recognizing facial expressions by computer with a high recognition rate remains a challenging task. Most methods in the literature for automatic facial expression recognition are based on geometry and appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this paper, we apply various deep learning methods to classify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The facial expression recognition system developed is experimentally evaluated on the FER dataset and achieves good accuracy.

1. Introduction

“2018 is the year when machines learn to grasp human emotions” is a famous quote by Andrew Moore, dean of computer science at Carnegie Mellon. With the advent of modern technology, our desires have grown and know no bounds. In recent decades, an enormous amount of research has taken place in the fields of digital images and image processing. Image processing is a vast area of research with very widespread applications, and one of its most important applications is facial expression recognition. Our emotions are revealed by the expressions on our faces, and facial expressions play an important role in interpersonal communication. A facial expression is a non-verbal gesture that appears on our face according to our emotions (Dai et al., 2019).

Automatic recognition of facial expressions plays an important role in artificial intelligence and robotics and is thus a need of the generation. Related applications include personal identification and access control, videophones and teleconferencing, forensics, human-computer interaction, automated surveillance, cosmetology, and so on.

The objective of this work is to enhance automatic facial expression recognition: the system takes human facial images containing some expression as input and recognizes and classifies them into seven expression classes (neutral, anger, disgust, fear, happiness, sadness, and surprise) with improved accuracy compared to other available systems.

Human facial expressions can be classified into seven basic emotions: happy, sad, surprise, fear, anger, disgust, and neutral (Dai et al., 2019). Our facial emotions are expressed through the activation of specific sets of facial muscles. These sometimes subtle, yet complex, signals in an expression often contain an abundance of information about our state of mind. Through facial emotion recognition, we can measure the effects that content and services have on an audience through an easy and low-cost procedure. For example, retailers may use these metrics to evaluate customer interest, healthcare providers can offer better service by using additional information about patients' emotional state during treatment, and entertainment producers can monitor audience engagement in events to consistently create desired content.

Humans are well trained in reading the emotions of others; in fact, at just 14 months old, babies can already tell the difference between happy and sad. In this work, a deep learning neural network is devised that gives machines the ability to make similar inferences about our emotional states. As shown in Figure 1, such a facial expression recognition process involves the following steps: pre-processing the input images, detecting the face, extracting the facial features, and finally classifying the emotions.

Figure 1.

Steps in Facial Expression Recognition

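The pre-processing stage of this pipeline typically converts a detected face crop to grayscale, resizes it to the fixed input size expected by the classifier (48x48 pixels for the FER dataset), and normalizes the pixel values. As a minimal sketch, assuming a nearest-neighbour resize for simplicity (a real system would use a library resampler):

```python
import numpy as np

def preprocess_face(rgb_crop, target=48):
    """Convert an RGB face crop to a normalized grayscale array.

    Mirrors a typical FER pre-processing stage: grayscale conversion,
    resizing to target x target (nearest-neighbour here, for
    illustration only), and scaling pixel values into [0, 1].
    """
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = rgb_crop @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour resize to target x target.
    h, w = gray.shape
    rows = np.arange(target) * h // target
    cols = np.arange(target) * w // target
    resized = gray[rows][:, cols]
    # Normalize 8-bit pixel values to [0, 1].
    return resized / 255.0

# Example: a dummy 96x96 RGB "face crop" standing in for a detector output.
face = np.random.randint(0, 256, size=(96, 96, 3)).astype(float)
out = preprocess_face(face)
print(out.shape)  # (48, 48)
```

The function name and the nearest-neighbour resize are illustrative assumptions, not the authors' implementation; only the 48x48 grayscale target follows standard FER-dataset practice.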

Thus the facial expression recognition process comprises:

  • 1.

    Locating faces in the scene, also called face detection (Dai et al., 2019);

  • 2.

    Extracting facial features from the detected face region, e.g., detecting the shape of facial components or describing the texture of the skin in a facial area; this step is referred to as facial feature extraction;

  • 3.

    Analyzing the motion of facial features and/or the changes in their appearance, and classifying this information into facial-expression-interpretative categories such as facial muscle activations (e.g., smile or frown), emotion (affect) categories (e.g., happiness or anger), and attitude categories (e.g., liking, disliking, and ambivalence) (Ismail et al., 2019); this step is also referred to as facial expression interpretation.
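This preview does not specify the network architecture, but the classification step above can be sketched in miniature: convolutional filters extract local texture features from the pre-processed face, pooling reduces spatial resolution, and a dense softmax layer maps the features to probabilities over the seven emotion classes. The following NumPy forward pass uses random, untrained weights purely to illustrate the data flow; it is not the authors' model:

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def conv2d(img, kernels):
    """Valid convolution of a 2-D image with a stack of k x k kernels."""
    k = kernels.shape[1]
    h, w = img.shape
    out = np.zeros((kernels.shape[0], h - k + 1, w - k + 1))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patch = img[i:i + k, j:j + k]
            out[:, i, j] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return out

def max_pool(fmaps, s=2):
    """s x s max pooling with stride s, applied per feature map."""
    c, h, w = fmaps.shape
    trimmed = fmaps[:, :h - h % s, :w - w % s]
    return trimmed.reshape(c, h // s, s, w // s, s).max(axis=(2, 4))

def softmax(z):
    """Numerically stable softmax over the class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
kernels = rng.standard_normal((8, 3, 3)) * 0.1    # 8 untrained 3x3 filters
W = rng.standard_normal((7, 8 * 23 * 23)) * 0.01  # dense layer to 7 classes

face = rng.random((48, 48))                       # a pre-processed 48x48 face
features = np.maximum(conv2d(face, kernels), 0)   # conv + ReLU -> (8, 46, 46)
pooled = max_pool(features)                       # 2x2 max pool -> (8, 23, 23)
probs = softmax(W @ pooled.ravel())               # 7 class probabilities
print(EMOTIONS[int(probs.argmax())])
```

A trained system would stack several such conv/pool blocks and learn the kernel and dense weights by backpropagation on the FER dataset; the single-layer shapes here exist only to show how a 48x48 input flows to a seven-way softmax.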
