Facial Gesture Recognition

Daijin Kim, Jaewon Sung
Copyright: © 2009 | Pages: 8
DOI: 10.4018/978-1-60566-216-9.ch007

Abstract

From facial gestures, we can extract many kinds of messages in human communication: they represent visible speech signals and clarify whether our current focus of attention is important, funny, or unpleasant to us. They are a direct, naturally preeminent means for humans to communicate their emotions (Russell and Fernandez-Dols, 1997). Automatic analyzers of subtle facial changes therefore seem to have a natural place in various vision systems, including automated tools for psychological research, lip reading, bimodal speech analysis, affective computing, face and visual-speech synthesis, and perceptual user interfaces.
Chapter Preview

7.1 Hidden Markov Model

An HMM is a statistical modeling tool for analyzing time series with spatial and temporal variability (Lee et al., 1999; Jordan, 2003; Duda et al., 2000). It is a graphical model that can be viewed as a dynamic mixture model whose mixture components are treated as states. It has been applied to classification and modeling problems such as speech and gesture recognition. Figure 1 illustrates a simple HMM structure.

Figure 1. HMM structure

The hidden Markov model (HMM) is an extension of a Markov model in which each state generates an observation. We extend the concept of Markov models to the case where the observation is a probabilistic function of the state. The resulting model, called a hidden Markov model, is a doubly embedded stochastic process: an underlying stochastic process that is not observable and can only be observed through another set of stochastic processes that produce the sequence of observations. The HMM is usually employed to model a time-varying sequence of observations and is regarded as a special case of a Bayesian belief network, because it provides a probabilistic model of the causal dependencies between different states (Gong et al., 2000). Figure 2 illustrates a 5-state (1-D) HMM used for face modeling.

Figure 2. An illustration of a 1-D HMM with 5 states for face modeling
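
To make the doubly embedded stochastic process concrete, the following minimal sketch (in Python with NumPy; the 3-state, 2-symbol model and all probability values are illustrative assumptions, not taken from the chapter) draws a hidden state sequence from the underlying Markov chain and emits an observable symbol from each visited state. Only the second sequence would be visible to an observer.

    import numpy as np

    # Toy 3-state HMM with 2 observable symbols (illustrative values only).
    A = np.array([[0.7, 0.2, 0.1],    # state-transition probabilities a_ij
                  [0.1, 0.8, 0.1],
                  [0.2, 0.3, 0.5]])
    B = np.array([[0.9, 0.1],         # observation probabilities b_j(k)
                  [0.5, 0.5],
                  [0.1, 0.9]])
    pi = np.array([0.6, 0.3, 0.1])    # initial state probabilities

    rng = np.random.default_rng(0)

    def sample(T):
        """Generate a hidden state sequence q and an observation sequence o of length T."""
        q = np.empty(T, dtype=int)
        o = np.empty(T, dtype=int)
        q[0] = rng.choice(len(pi), p=pi)            # draw the initial state from pi
        o[0] = rng.choice(B.shape[1], p=B[q[0]])    # emit a symbol from state q[0]
        for t in range(1, T):
            q[t] = rng.choice(len(pi), p=A[q[t-1]])   # hidden Markov chain step
            o[t] = rng.choice(B.shape[1], p=B[q[t]])  # observation depends only on q[t]
        return q, o

    states, observations = sample(10)
    print(states)        # the underlying (normally unobservable) state sequence
    print(observations)  # the sequence an observer actually sees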

The HMM is defined by specifying the following parameters (Rabiner, 1989):

  • N: The number of states in the model. The individual states are denoted as S = {S1, S2, ⋯, SN}, and the state of the model at time t is qt, where qt ∈ S and 1 ≤ t ≤ T, with T being the length of the output observation symbol sequence.

  • M: The number of distinct observable symbols. The individual symbols are denoted as V = {v1, v2, ⋯, vM}.

  • AN×N: An N×N matrix that specifies the state-transition probabilities, i.e., the probability that the model will transit from state Si to state Sj. AN×N = [aij], 1 ≤ i, j ≤ N, where aij = P(qt+1 = Sj | qt = Si).

  • BN×M: An N×M matrix that specifies the probability that the system generates the observable symbol vk in state Sj at time t. BN×M = [bj(k)], 1 ≤ j ≤ N, 1 ≤ k ≤ M, where bj(k) = P(vk at t | qt = Sj).

  • πN: An N-element vector of the initial state probabilities. πN = [πi], 1 ≤ i ≤ N, where πi = P(q1 = Si).
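
The parameters above (the transition matrix, observation matrix, and initial distribution) can be represented directly as arrays. The sketch below (Python with NumPy; the class name, toy values, and the use of the standard forward recursion from Rabiner (1989) are illustrative choices, not code from the chapter) packages the parameters, checks that each of them is a valid probability distribution, and evaluates the probability of an observation sequence under the model.

    import numpy as np

    class HMM:
        """Discrete HMM defined by the transition matrix A, observation matrix B, and pi."""
        def __init__(self, A, B, pi):
            self.A = np.asarray(A)     # N x N state-transition probabilities a_ij
            self.B = np.asarray(B)     # N x M observation probabilities b_j(k)
            self.pi = np.asarray(pi)   # N initial state probabilities pi_i
            assert self.A.shape[0] == self.A.shape[1] == self.B.shape[0] == self.pi.shape[0]
            # Each row of A and B, and pi itself, must be a probability distribution.
            assert np.allclose(self.A.sum(axis=1), 1.0)
            assert np.allclose(self.B.sum(axis=1), 1.0)
            assert np.isclose(self.pi.sum(), 1.0)

        def likelihood(self, obs):
            """P(observation sequence | model) via the forward algorithm (Rabiner, 1989)."""
            alpha = self.pi * self.B[:, obs[0]]          # initialization: alpha_1(i)
            for o in obs[1:]:
                alpha = (alpha @ self.A) * self.B[:, o]  # induction: alpha_{t+1}(j)
            return alpha.sum()                           # termination: sum_i alpha_T(i)

    # Example: N = 2 states, M = 2 symbols (toy numbers).
    hmm = HMM(A=[[0.9, 0.1], [0.4, 0.6]],
              B=[[0.8, 0.2], [0.3, 0.7]],
              pi=[0.5, 0.5])
    print(hmm.likelihood([0, 1, 1, 0]))  # probability of observing this symbol sequence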
