Emotion-Based Human-Computer Interaction

Sujigarasharma K., Rathi R., Visvanathan P., Kanchana R.
DOI: 10.4018/978-1-6684-5673-6.ch009

Abstract

One of the important aspects of human-computer interaction is the detection of emotions from facial expressions. Emotion recognition must cope with challenges such as variation in facial expression and posture, non-uniform illumination, and so on. Deep learning techniques have become important for solving such classification problems. In this chapter, the VGG19, Inception V3, and ResNet50 pre-trained networks are used in a transfer learning approach to predict human emotions. The study achieved an accuracy of 98.32% for emotion recognition and classification on the CK+ dataset.
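As a concrete illustration of the transfer learning approach described above, the following minimal sketch freezes an ImageNet-pretrained ResNet50 backbone in Keras and attaches a new classification head for the CK+ emotion classes. The input size, head layers, and training settings are illustrative assumptions, not the chapter's exact configuration.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 7  # CK+ is commonly labelled with seven emotion categories

# Load ResNet50 pre-trained on ImageNet, without its classification head,
# and freeze the convolutional backbone so only the new head is trained.
base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3))
base.trainable = False

# Attach a new classification head for the emotion classes (assumed sizes).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical data

The same pattern applies to VGG19 and Inception V3 by swapping the imported backbone; note that Inception V3 expects 299x299 inputs by default.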
Chapter Preview

The authors (Ozdemir et al., 2019) proposed a LeNet-based facial expression recognition system. The study uses a combined KDEF and JAFFE dataset, and a Haar cascade classifier is applied to detect and crop faces before emotion recognition. This approach achieved an accuracy of 95.40%.
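The face-detection step can be sketched with OpenCV's bundled Haar cascade as follows; the input file name and the detection parameters are illustrative assumptions.

import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # cascades run on grayscale

# Detect faces; each hit is an (x, y, w, h) bounding box.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Crop each detected face so only facial regions reach the emotion classifier.
crops = [gray[y:y + h, x:x + w] for (x, y, w, h) in faces]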

The authors (Jyostna & Veeranjaneyulu, 2019) demonstrated how to handle varied conditions using a CNN. A VGG16 network is deployed to extract features, which are then classified with an SVM. The algorithm achieved an accuracy of 82.27% without face detection and 87.16% with face detection on the CK+ database. The authors (Fan et al., 2018) presented a multi-region CNN method for recognising emotional expressions. Sub-networks extract attributes from the eyes, mouth, and nose, and their scores are combined to estimate the emotion.
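A hedged sketch of such a VGG16-features-plus-SVM pipeline is given below, assuming face crops already resized to 224x224 RGB; the array names and the SVM settings are assumptions rather than the authors' exact setup.

from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# Frozen VGG16 backbone used purely as a feature extractor; global average
# pooling turns the final feature maps into one 512-dimensional vector.
vgg = VGG16(weights="imagenet", include_top=False, pooling="avg",
            input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) in RGB order."""
    return vgg.predict(preprocess_input(images))

# X_train, y_train are hypothetical preprocessed face crops and labels.
# feats = extract_features(X_train)             # shape (n, 512)
# clf = SVC(kernel="linear").fit(feats, y_train)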

The authors (Wang et al., 2019) evaluated their model on the largest collection of data among the reviewed works, testing it on the FER2013, CK+, JAFFE, and SFEW datasets; the RAF-DB and AFEW 7.0 databases were also used in the study. The authors (Sreelakshmi & Sumithra, 2019) created an emotion identification system based on the MobileNet V2 architecture; the model is evaluated on real-time images and obtains an accuracy of 90.15%. ResNet50- and VGG16-based facial expression recognition were presented as the state of the art.
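For the real-time setting mentioned above, a webcam inference loop around a MobileNet V2-style classifier might look like the following; the weights file "emotion_model.h5" and the label list are hypothetical, not artifacts of the cited study.

import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("emotion_model.h5")  # hypothetical weights
LABELS = ["anger", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV frames are BGR
    face = cv2.resize(rgb, (224, 224))             # MobileNet V2 input size
    x = np.expand_dims(face.astype("float32"), 0)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    probs = model.predict(x, verbose=0)[0]
    cv2.putText(frame, LABELS[int(np.argmax(probs))], (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press q to quit
        break
cap.release()
cv2.destroyAllWindows()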
