Emotion Recognition With Facial Expression Using Machine Learning for Social Network and Healthcare

Anju Yadav, Venkatesh Gauri Shankar, Vivek Kumar Verma
Copyright: © 2020 |Pages: 10
DOI: 10.4018/978-1-7998-3095-5.ch012

Abstract

In this chapter, a machine learning application for facial expression recognition (FER) is studied for seven emotional states (disgust, joy, surprise, anger, sadness, contempt, and fear) based on FER describing coefficients. FER has many practical uses in areas such as social networks, robotics, and healthcare. A literature review of existing machine learning approaches for FER is presented, a novel approach for FER on static and dynamic images is proposed, and its results are compared with those of existing approaches. The chapter also covers related applications, challenges, and future opportunities in FER, such as security-oriented face detection systems that can identify an individual regardless of the expression with which he or she presents. Doctors could use such a system to estimate the intensity of illness or pain in patients who are deaf or unable to speak. The proposed model is based on a machine learning application with three types of prototypes (a pre-trained model, a single-layer augmented model, and a multi-layered augmented model) having a combined accuracy of approximately 99%.

Introduction

Facial expression recognition has various applications in social networks and in other fields as well (Tian et al., 2001; Mao et al., 2015). The prime objective of the model is to identify the emotions of subjects from their facial expressions with good accuracy. The models that exist today give lower accuracies when identifying angry and fearful expressions (Li et al., 2013b). This chapter focuses on improving the accuracy for all of the listed emotions, including anger and fear, from users' facial expressions; in the healthcare setting, the aim is to identify whether a patient is satisfied with the assigned doctor.

There are a total of 7 defined classes for the expressions which are shown as follows:

Class 0: Angry; Class 1: Disgust; Class 2: Fear; Class 3: Happy; Class 4: Sad; Class 5: Surprise; Class 6: Neutral (Ratliff & Patterson, 2008b).

Figure 1. Defined classes for the expressions
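The class indexing above can be captured in a small lookup table. A minimal sketch in Python (the names `EMOTION_CLASSES` and `label_name` are illustrative, not taken from the chapter's code):

```python
# Emotion classes used throughout the chapter (class index -> name).
EMOTION_CLASSES = {
    0: "Angry",
    1: "Disgust",
    2: "Fear",
    3: "Happy",
    4: "Sad",
    5: "Surprise",
    6: "Neutral",
}

def label_name(class_index: int) -> str:
    """Return the emotion name for a predicted class index."""
    return EMOTION_CLASSES[class_index]
```

A classifier that outputs class index 3, for example, would be reporting a happy expression.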

Motivation and Problem Statement

Emotion recognition using facial detection has received considerable attention because of its many applications in marketing, call-center systems, employee satisfaction management, etc. (Ogiela & Tadeusiewicz, 2008a). Despite significant progress, state-of-the-art emotion detection systems yield suitable performance only under controlled conditions and degrade significantly in real-world applications (Koelstra & Patras, 2013a). Because AI and machine learning are rising in demand, we chose to pursue a project in this area. Also, in a patient satisfaction management system (PSMS), patients usually hesitate to say whether they are satisfied with their doctor, and this work helps address that problem.

Objectives

Facial expressions play a vital role in emotional perception and are important in nonverbal interaction as well as in the identification of individuals. In everyday emotional interaction they are essential, second only to tone of voice. Therefore, our research aims to read the emotion of the user from his or her facial expression. PSMS is a patient satisfaction management system in which we aim to identify whether the patient is happy with his or her assigned doctor.

Background

Conceptual Overview

A convolutional neural network (CNN or ConvNet) is a type of neural network, usually used for the analysis of visual images.

CNNs are regularized versions of multilayer perceptrons. A multilayer perceptron usually refers to a fully connected network, i.e., one in which each neuron in one layer is linked to all neurons in the next layer. This "full connectivity" makes such networks prone to overfitting the data. Common forms of regularization add to the loss function some measure of weight magnitude (Sankar et al., 2018a). CNNs, however, take a different approach to regularization: they exploit the hierarchical arrangement of the data, assembling more complex patterns from smaller and simpler ones (Devi et al., 2020a).
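The contrast between full connectivity and convolutional weight sharing can be made concrete by counting parameters. A hedged sketch (the 48x48 input size is an assumption chosen because it is a common FER image size, not a figure stated in this chapter):

```python
# Parameters of one fully connected layer mapping a 48x48 grayscale
# image (flattened to 2304 inputs) onto 32 hidden units:
fc_weights = 48 * 48 * 32   # one weight per input-output pair
fc_biases = 32
fc_params = fc_weights + fc_biases

# Parameters of one convolutional layer with 32 filters of size 3x3
# over the same single-channel input (weights shared across positions):
conv_weights = 3 * 3 * 1 * 32
conv_biases = 32
conv_params = conv_weights + conv_biases

print(fc_params, conv_params)  # 73760 vs 320
```

The far smaller parameter count is one reason convolutional layers are less prone to overfitting than fully connected ones.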

CNNs therefore sit at the lower extreme on the scale of connectivity and complexity. They are also known as shift-invariant or space-invariant artificial neural networks (SIANN), after the shared-weight architecture and translation-invariance characteristics of their design. CNNs were inspired by biological processes, in that the pattern of connectivity between neurons resembles the animal visual cortex: individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field, and the receptive fields of different neurons partially overlap so as to cover the whole field of vision.
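The receptive-field and shift-equivariance ideas can be illustrated with a small sketch (a naive valid-mode cross-correlation written for clarity; this is not the chapter's implementation):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: each output cell sees only a
    small local window of the input (its receptive field)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# The same small pattern placed at two different positions:
img = np.zeros((6, 6))
img[1, 1] = 1.0
shifted = np.zeros((6, 6))
shifted[3, 3] = 1.0  # identical pattern, translated by (2, 2)

kernel = np.ones((2, 2))
a = conv2d(img, kernel)
b = conv2d(shifted, kernel)

# Shifting the input shifts the response by the same amount:
assert np.allclose(a[0:2, 0:2], b[2:4, 2:4])
```

Because the same kernel is applied at every position, a pattern produces the same response wherever it appears, which is what the "shift-invariant" name refers to.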

Compared with other algorithms for object identification, CNNs use very little pre-processing. This means that the network learns the filters that, in conventional algorithms, were hand-crafted. This independence from prior knowledge and human effort is an important advantage in feature development.
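As an illustration of the kind of hand-crafted filter that a CNN learns automatically, consider the classical Sobel horizontal-gradient kernel (this example is a sketch for exposition, not part of the chapter's model):

```python
import numpy as np

# A classical hand-crafted filter: the Sobel horizontal-gradient kernel.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Applied to a vertical step edge, it responds strongly at the boundary.
img = np.zeros((5, 5))
img[:, 3:] = 1.0  # dark left half, bright right half

# Response of one 3x3 window straddling the edge (rows 1-3, cols 2-4):
patch = img[1:4, 2:5]
response = float(np.sum(patch * sobel_x))
print(response)  # 4.0: a strong edge response
```

In a conventional pipeline a designer would choose such kernels by hand; in a CNN, filters with similar edge- and texture-detecting behavior emerge from training.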
