1. Introduction
“2018 is the year when machines learn to grasp human emotions” is a well-known remark by Andrew Moore, dean of computer science at Carnegie Mellon. With the advent of modern technology, our expectations have grown and now know no bounds. In recent decades, a great deal of research has been conducted in the fields of digital imaging and image processing. Image processing is a vast research area with widespread applications, and one of its most important applications is facial expression recognition. Our emotions are revealed by the expressions on our faces. Facial expressions play an important role in interpersonal communication: a facial expression is a non-verbal gesture that appears on our face according to our emotions (Dai et al., 2019).
Automatic recognition of facial expressions plays an important role in artificial intelligence and robotics, and is thus a need of the current generation. Its applications include personal identification and access control, videophones and teleconferencing, forensics, human-computer interaction, automated surveillance, cosmetology, and so on.
The objective of this work is to enhance automatic facial expression recognition: the system takes human facial images containing some expression as input, then recognizes and classifies them into seven expression classes, namely neutral, angry, disgust, fear, happy, sadness, and surprise, with improved accuracy compared to other available systems.
Human facial expressions can be classified into seven basic emotions: happy, sad, surprise, fear, anger, disgust, and neutral (Dai et al., 2019). Our facial emotions are expressed through the activation of specific sets of facial muscles. These sometimes subtle yet complex signals often carry an abundance of information about our state of mind. Through facial emotion recognition, we can measure the effects that content and services have on audiences and users through an easy, low-cost procedure. For example, retailers may use these metrics to evaluate customer interest, healthcare providers can offer better service by using additional information about patients' emotional state during treatment, and entertainment producers can monitor audience engagement in events to consistently create desired content.
Humans are well trained in reading the emotions of others; in fact, at just 14 months old, babies can already tell the difference between happy and sad. In this work, a deep learning neural network is devised that gives machines the ability to make similar inferences about our emotional states. As shown in Figure 1, such a facial expression recognition process involves the following steps: pre-processing the input images, detecting the face, extracting the facial features, and finally classifying the emotions.
Figure 1.
Steps in Facial Expression Recognition
Thus, the facial expression recognition process comprises:
- 1.
Locating faces in the scene, also known as face detection (Dai et al., 2019);
- 2.
Extracting facial features from the detected face region, e.g., detecting the shape of facial components or describing the texture of the skin in a facial area; this step is referred to as facial feature extraction;
- 3.
Analyzing the motion of facial features and/or the changes in their appearance, and classifying this information into facial-expression-interpretative categories such as facial muscle activations (e.g., smile or frown), emotion (affect) categories (e.g., happiness or anger), and attitude categories (e.g., liking, disliking, or ambivalence) (Ismail et al., 2019). This step is also referred to as facial expression interpretation.
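The three-stage pipeline above can be sketched in code. The following is a minimal illustrative skeleton, not the system described in this work: the face detector is a placeholder central crop (a real system would use, e.g., a Haar cascade or a neural detector), feature extraction is a simple fixed-size resampling, and the classifier is a linear stub standing in for the deep network. All function names and the weight matrix are hypothetical.

```python
import numpy as np

# The seven expression classes targeted by the system described in the text.
EMOTIONS = ["neutral", "angry", "disgust", "fear", "happy", "sadness", "surprise"]

def preprocess(image):
    """Step 0: convert to grayscale and normalise intensities to [0, 1]."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    return gray / 255.0

def detect_face(image):
    """Step 1 (face detection): placeholder that returns a central crop.
    A real detector (Haar cascade, MTCNN, ...) would localise the face."""
    h, w = image.shape
    return image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def extract_features(face, size=(48, 48)):
    """Step 2 (feature extraction): resample the face region to a fixed
    size by nearest-neighbour indexing and flatten it into a vector."""
    h, w = face.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return face[np.ix_(rows, cols)].ravel()

def classify(features, weights):
    """Step 3 (interpretation): linear scoring stub; the deep network
    described in the text would replace this with learned layers."""
    scores = weights @ features
    return EMOTIONS[int(np.argmax(scores))]

def recognise(image, weights):
    """Full pipeline: pre-process -> detect -> extract -> classify."""
    face = detect_face(preprocess(image))
    return classify(extract_features(face), weights)
```

For example, calling `recognise` on a 96x96 image with a hypothetical `(7, 2304)` weight matrix returns one of the seven emotion labels; the point is the staged structure, not the (untrained) prediction itself.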