# An Efficient and Robust Technique for Facial Expression Recognition Using Modified Hidden Markov Model

Mayur Rahul (AKTU, Lucknow, India), Pushpa Mamoria (Department of Computer Applications, UIET, CSJMU, Kanpur, India), Narendra Kohli (Department of Computer Science & Engineering, HBTU, Kanpur, India) and Rashi Agrawal (Department of IT, UIET, CSJMU, Kanpur, India)
DOI: 10.4018/IJAEC.2018070102

## Abstract

Partition-based feature extraction is widely used in pattern recognition and computer vision. The method is robust to changes such as occlusion and background variation. In this article, a partition-based technique is used for feature extraction, and an extension of the HMM is used as a classifier. The newly introduced multi-stage HMM consists of two layers: the bottom layer represents the atomic expressions made by the eyes, nose, and lips, while the upper layer represents combinations of these atomic expressions, such as smile and fear. Six basic facial expressions are recognized: anger, disgust, fear, joy, sadness, and surprise. Experimental results show that the proposed system performs better than the standard HMM, achieving an overall accuracy of 85% on the JAFFE database.

## 2. Basics of HMM

A Markov process is used to model time-series data. It applies to situations where the present state depends on the previous state. For example, when a bowler delivers the ball in a cricket game, the ball first hits the ground and then the bat or pad; this sequence is naturally modelled as time-series data. As another example, in a spoken sentence the word currently being pronounced depends on the previously pronounced word. A Markov process handles such situations effectively (see Figure 1).

Figure 1. A Markov model for 3 observation variables

When each observation variable depends only on the previous observation variable, the Markov model is called a first-order Markov chain. Its joint probability distribution is given by

p(X1, X2, ..., XN) = p(X1) ∏(n=2 to N) p(Xn | Xn−1)  (1)

where p(X1, X2, ..., XN) is the joint probability distribution of the states X1, X2, ..., XN, p(X1) is the probability of the initial state X1, and p(Xn | Xn−1) is the probability of state Xn given the previous state Xn−1.
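The chain-rule factorisation in Equation (1) can be sketched in a few lines of Python. The two-state initial distribution and transition matrix below are illustrative assumptions, not values from the article:

```python
import numpy as np

def markov_chain_joint_probability(sequence, initial_probs, transition_probs):
    """Joint probability of a state sequence under a first-order Markov
    chain: p(X1,...,XN) = p(X1) * prod_{n=2..N} p(Xn | Xn-1)."""
    # p(X1): probability of the initial state
    prob = initial_probs[sequence[0]]
    # Multiply in p(Xn | Xn-1) for each consecutive pair of states
    for prev, curr in zip(sequence, sequence[1:]):
        prob *= transition_probs[prev, curr]
    return prob

# Hypothetical two-state chain (e.g. two atomic facial movements)
initial_probs = np.array([0.6, 0.4])          # p(X1 = 0), p(X1 = 1)
transition_probs = np.array([[0.7, 0.3],      # p(Xn | Xn-1 = 0)
                             [0.2, 0.8]])     # p(Xn | Xn-1 = 1)

p = markov_chain_joint_probability([0, 0, 1], initial_probs, transition_probs)
# p = 0.6 * 0.7 * 0.3 = 0.126
```

Each factor in the loop corresponds to one term of the product in Equation (1), so the sequence probability shrinks multiplicatively with its length.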
