Compound Facial Expression Recognition and Pain Intensity Measurement Using Optimized Deep Neuro Fuzzy Network

Rohan Appasaheb Borgalli, Sunil Surve
Copyright: © 2022 | Pages: 27
DOI: 10.4018/IJSIR.304721
Abstract

The automatic measurement of pain intensity from facial expressions, mainly from face images, provides an indication of the patient's health. Hence, a robust technique, named the Water Cycle Henry Gas Solubility Optimization-based Deep Neuro Fuzzy Network (WCHGSO-DNFN), is designed for compound FER and pain intensity measurement. The proposed WCHGSO is the incorporation of the Water Cycle Algorithm (WCA) with Henry Gas Solubility Optimization (HGSO). Here, the Compound Facial Expressions of Emotion Database (dataset-2) is used to perform compound FER, whereas input images from the UNBC pain intensity dataset (dataset-1) are utilized to measure pain intensity, and the two processes are performed separately. The developed technique achieved better performance with respect to testing accuracy, sensitivity, and specificity, with highest values of 0.814, 0.819, and 0.806 using dataset-1, whereas maximum values of 0.815, 0.758, and 0.848 are achieved using dataset-2.

1. Introduction

Facial expressions are an influential, natural, and universal signal through which human beings express their emotional states. They play a significant role in interpersonal relations and in security applications (Alazeez et al., 2016). Considerable research has been conducted on automated facial expression assessment because of its practical significance in medical diagnosis, driver tiredness monitoring, sociable robotics, and various other modes of human-computer interaction. In the fields of computer vision and machine learning, numerous FER systems have been explored, mainly for encoding expression information from facial representations (Li & Deng, 2020). Automatic FER is also desirable for a variety of applications, namely human behaviour understanding, interactive computer games, human-computer interaction, and perceptual user interfaces. In an automatic FER system, localization or face detection in a cluttered view is generally the first stage. Thereafter, significant features are extracted from the detected faces, and lastly the expression is categorized on the basis of the extracted features (Yu & Bhanu, 2006). Facial expressions play a prominent role in the daily life of humans, and automatic facial expression assessment is a vital field of artificial intelligence. Because of its prospective applications in diverse areas, such as service robots and intelligent tutoring systems, FER has gained rapid attention in the computer vision community in recent times. The major challenges of FER come from pose variations, insufficient qualitative data, occlusions, identity bias, and illumination variation. Pose variations and occlusions change the facial appearance considerably and are considered the two main obstacles for automatic FER. Although automatic FER has made considerable progress over the years, the occlusion-robust and pose-invariant FER problems have received comparatively less interest, particularly in real-world circumstances (Wang et al., 2020).
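For concreteness, the detection-extraction-classification pipeline outlined above can be sketched as follows. This is a minimal illustration, assuming a Haar-cascade face detector, a HOG descriptor, and an SVM classifier from OpenCV and scikit-learn; these components are generic stand-ins and are not the technique proposed in this article.

```python
# Minimal sketch of a three-stage FER pipeline:
# face detection -> feature extraction -> expression classification.
import cv2
import numpy as np
from sklearn.svm import SVC

def detect_face(gray_image):
    """Stage 1: locate the largest face region in a cluttered view."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    return cv2.resize(gray_image[y:y + h, x:x + w], (64, 64))

def extract_features(face_patch):
    """Stage 2: describe the face patch with a HOG descriptor."""
    hog = cv2.HOGDescriptor(_winSize=(64, 64), _blockSize=(16, 16),
                            _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)
    return hog.compute(face_patch).ravel()

# Stage 3: classify expressions from the extracted features.
# X_train and y_train are hypothetical precomputed feature vectors and labels.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# prediction = clf.predict([extract_features(detect_face(test_gray_image))])
```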

FER systems can be separated into two categories on the basis of their feature representations: dynamic sequence FER and static image FER. Dynamic-sequence approaches model the temporal relations between contiguous frames in the input facial expression sequence, whereas static image FER encodes the feature representation using only the spatial information of a single image. Beyond these two vision-based approaches, other modalities, such as physiological and audio channels, have also been employed in multimodal systems to support FER (Li & Deng, 2020). Automatic FER consists of three key modules: the first module locates and registers facial regions in facial images; the second module extracts and represents the facial variations produced by facial expressions; and the last module takes the previously extracted feature vectors as input to perform the categorization process using machine learning-based methods. Moreover, there are two families of handcrafted features, namely appearance features and geometric features, where appearance features describe skin textures such as furrows and wrinkles (Slimani et al., 2019). Automated facial expression analysis is a challenging task that has gained attention in recent years because of its prospective applicability in different areas, such as data-driven animation, human-computer interaction, and tailored applications for consumer products. Furthermore, deriving a proficient feature representation that diminishes the within-class variations while enhancing the between-class variations is the primary component of any successful FER system (Ahmed et al., 2014).
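The two handcrafted feature families mentioned above can be illustrated with a short sketch: an appearance descriptor (a uniform LBP histogram, capturing skin texture such as furrows and wrinkles) and a geometric descriptor (normalized pairwise distances between facial landmarks). Both are generic examples, assuming a grayscale face patch and a set of landmark points are already available; they are not the features used by the proposed WCHGSO-DNFN.

```python
# Illustrative handcrafted features: appearance-based (LBP texture histogram)
# versus geometry-based (pairwise landmark distances).
import numpy as np
from skimage.feature import local_binary_pattern

def appearance_features(face_gray, radius=1, n_points=8):
    """Appearance-based: histogram of uniform local binary patterns."""
    lbp = local_binary_pattern(face_gray, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2,
                           range=(0, n_points + 2), density=True)
    return hist

def geometric_features(landmarks):
    """Geometry-based: normalized pairwise distances between landmarks.

    `landmarks` is an (N, 2) array of facial landmark coordinates, e.g.
    produced by any landmark detector.
    """
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    upper = np.triu_indices(len(landmarks), k=1)   # each pair counted once
    pairwise = dists[upper]
    return pairwise / (pairwise.max() + 1e-8)      # scale-invariant distances
```

In practice, such appearance and geometric vectors are often concatenated before being passed to the classification module; in this article they serve only to make the distinction between the two feature families concrete.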
