A Fuzzy Logic Approach in Emotion Detection and Recognition and Formulation of an Odor-Based Emotional Fitness Assistive System


Sudipta Ghosh, Debasish Kundu, Gopal Paul
Copyright: © 2015 | Pages: 21
DOI: 10.4018/IJSE.2015070102

Abstract

This paper presents a fuzzy relational approach to recognizing similar emotions expressed by different subjects through facial expressions and predefined parameters. Different facial attributes contribute to a wide variety of emotions under varied circumstances, and these same features also vary widely from person to person, introducing uncertainty into the process. Facial features such as eye-opening, mouth-opening and the length of eyebrow constriction, measured from localized regions of the face, are fuzzified and mapped into an emotion space using relational models. The uncertainty is handled with type-2 fuzzy logic, which is well suited to modelling such variability.
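The fuzzification step described above can be illustrated with a small sketch. The function below is a hypothetical interval type-2 membership function for a single feature such as eye-opening; the triangular shape, the parameter values, and the fixed-width uncertainty band are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: interval type-2 fuzzy membership for one
# facial feature (e.g. eye-opening). All parameters are illustrative.

def triangular(x, a, b, c):
    """Type-1 triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def interval_type2_membership(x, a, b, c, blur=0.15):
    """Return (lower, upper) membership bounds. The gap between the two
    bounds is the footprint of uncertainty, here modelling how the same
    feature value maps to emotion differently across subjects."""
    primary = triangular(x, a, b, c)
    lower = max(0.0, primary - blur)
    upper = min(1.0, primary + blur)
    return lower, upper

# Membership of an eye-opening measurement of 2.1 (arbitrary units)
lo, hi = interval_type2_membership(2.1, a=1.0, b=2.5, c=4.0)
```

A type-1 system would commit to the single value `primary`; the interval `[lo, hi]` defers that commitment, which is why type-2 logic is effective at absorbing inter-subject variation.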

1. Introduction

Emotions play a vital role in interpersonal communication. They are universal across cultures, irrespective of language; they are also inherent to humans, largely reflexive, and therefore hard to fake. Hence, emotion modelling and automatic recognition offer an efficient way of both providing assistive technology and building prediction systems. Emotion recognition draws on various individual components such as facial expressions, voice, posture, and gesture clusters. Whatever the approach, robust recognition (Moral, 2014; Vries, 2015; Hudlicka, 2011; Nair, Godfrey & Kim, 2011) can be reduced to a two-step process: feature extraction and classification.
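The two-step process can be sketched as a minimal pipeline. The feature names below anticipate the measurements used later in the paper, and the single threshold rule is a hypothetical stand-in for a real classifier.

```python
# Minimal sketch of the extract-then-classify pipeline. The input is
# assumed to be a dict of pre-measured values; in practice these would
# come from image segmentation. Names and thresholds are illustrative.

def extract_features(face):
    """Step 1: reduce the input to a small feature vector."""
    return (face["mouth_opening"],
            face["eye_opening"],
            face["eyebrow_constriction"])

def classify(features):
    """Step 2: map the feature vector to an emotion class.
    Toy rule: a wide-open mouth and eyes suggest surprise."""
    mo, eo, ebc = features
    if mo > 0.6 and eo > 0.6:
        return "surprise"
    return "neutral"

emotion = classify(extract_features(
    {"mouth_opening": 0.8, "eye_opening": 0.7, "eyebrow_constriction": 0.2}))
```

The separation matters because the two stages can be improved independently: a better segmenter feeds the same classifier, and vice versa.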

Features can be defined as a set of independent attributes which describe, in this case, a given emotional expression, whereas classification is the process of mapping those extracted features onto the various emotional-expression classes. Feature selection and classifier design are among the main stepping stones to a robust emotion recognition system: even a well-developed classification algorithm may fail to produce a high level of accuracy when the selected features are poorly chosen. Hence it can be inferred that by concentrating only on a small set of discriminative features, a system can robustly classify a person's emotion. For this reason we proceed to extract three primary features from the images, namely mouth-opening (MO), eye-opening (EO), and the length of eyebrow-constriction (EBC). Identification of facial expressions by pixel-wise analysis of images is both tedious and time-consuming, so we extract the significant components of facial expressions by segmenting the image. Because of the differences in regional profiles across an image, simple segmentation algorithms, such as histogram-based thresholding, do not always yield good results. After several experiments, it was observed that for segmentation of the mouth region a colour-sensitive segmentation algorithm is most appropriate. Further, because of the apparent non-uniformity in the lip colour profile, a fuzzy segmentation algorithm is preferred. Taking these considerations into account, a colour-sensitive fuzzy K-means clustering algorithm has been selected for segmentation of the mouth region.
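The fuzzy clustering step can be illustrated with a minimal fuzzy c-means sketch in one dimension. The paper's colour-sensitive variant clusters colour vectors rather than scalar intensities; the toy pixel values and the initialization scheme here are illustrative assumptions.

```python
# Minimal fuzzy c-means sketch on 1-D pixel intensities, to show the
# kind of fuzzy clustering used for mouth-region segmentation.
# The real algorithm operates on colour vectors.

def fuzzy_c_means(points, c=2, m=2.0, iters=30):
    # Initialize centres at evenly spaced positions among the sorted points
    s = sorted(points)
    centers = [s[i * (len(s) - 1) // (c - 1)] for i in range(c)]
    u = []
    for _ in range(iters):
        # Membership update: u[i][j] is how strongly point i belongs
        # to cluster j; memberships across clusters sum to 1.
        u = []
        for x in points:
            d = [abs(x - v) + 1e-9 for v in centers]  # avoid divide-by-zero
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c))
                      for j in range(c)])
        # Centre update: membership-weighted mean of all points
        centers = [sum((u[i][j] ** m) * points[i] for i in range(len(points)))
                   / sum(u[i][j] ** m for i in range(len(points)))
                   for j in range(c)]
    return centers, u

# Toy intensities: a dark lip-like cluster and a bright skin-like cluster
pixels = [10, 12, 11, 90, 92, 88]
centers, memberships = fuzzy_c_means(pixels)
```

Unlike hard K-means, every pixel keeps a graded membership in both clusters, which is what makes the method tolerant of the non-uniform lip colour profile noted above.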

However, using a large set of attributes to classify emotional expressions also brings down the accuracy and performance of the algorithm. Commonly used feature selection and classification techniques in emotion recognition include Fourier descriptors (Uwechue & Pandya, 1997), rough sets (Wu & Wu, 2009), Gabor filters (Chang, Li, Chung, Kuo, & Tu, 2010), neural-network-based mapping (Bhavsar & Patel, 2005; Guo & Gao, 2006; Sun, Li, & Tang, 2009), Support Vector Machines (Hui & Wang, 2007), and the fuzzy relational approach (Chakraborty, Konar, & Chatterjee, 2009). A brief overview of research and development on emotion recognition is given below. Kobayashi & Hara (1993) and Kawakami et al. (Kawakami, Morishima, Yamada, & Harashima, 1994) designed techniques to recognize facial expressions using well-defined neural networks (Scheutz, 2011; Ahmed, Dey, Ashour, Sifaki-Pistolla, Bălas-Timar, & Balas, 2016). These techniques are capable of recognizing five common facial expressions, namely sadness, happiness, anger, fear and disgust. Ekman & Friesen worked on the theory of identifying facial expressions from the movements of the cheeks, chin and wrinkles produced during facial expressions. An alternative approach was proposed by Yamada to attain emotion recognition by classifying visual information.
