Face Recognition and Semantic Features

Huiyu Zhou (Brunel University, UK), Yuan Yuan (Aston University, UK) and Chunmei Shi (People’s Hospital of Guangxi, China)
Copyright: © 2009 |Pages: 19
DOI: 10.4018/978-1-60566-188-9.ch003


The authors present a face recognition scheme based on the extraction of semantic features from faces and on tensor subspace analysis. These semantic features consist of the eyes and mouth, plus the region outlined by the three weight centres of the edges of these features. The extracted features are compared across images in the tensor subspace domain. Singular value decomposition is used to solve the eigenvalue problem and to project the geometrical properties onto the face manifold. The authors compare the performance of the proposed scheme with that of other established techniques, and the results demonstrate the superiority of the proposed method.
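The projection-and-matching idea summarised above can be illustrated with a minimal sketch: vectorised features are centred, projected onto a subspace obtained by SVD, and matched by nearest neighbour. The data, dimensions, and variable names here are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch: SVD-based subspace projection and nearest-neighbour
# matching of vectorised face features. Dimensions are made up.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((40, 1024))              # 40 gallery faces
probe = gallery[7] + 0.01 * rng.standard_normal(1024)  # noisy copy of face 7

mean = gallery.mean(axis=0)
X = gallery - mean
# The SVD of the centred data matrix solves the eigenvalue problem of X^T X
U, s, Vt = np.linalg.svd(X, full_matrices=False)
basis = Vt[:10]                                        # top-10 basis vectors

g_proj = X @ basis.T                                   # gallery coefficients
p_proj = (probe - mean) @ basis.T                      # probe coefficients
match = int(np.argmin(np.linalg.norm(g_proj - p_proj, axis=1)))
print(match)  # index of the nearest gallery identity
```

The nearest-neighbour rule here stands in for the similarity comparison in tensor subspace; the chapter's actual method operates on higher-order tensor data rather than a single flattened matrix.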
Chapter Preview


Face recognition and modeling is a problem of prime interest in computer vision. Its applications are commonly found in surveillance, information retrieval, and human-computer interfaces. For decades, studies on face recognition have addressed the problem of interpreting faces by machine, and these efforts have over time produced a considerable understanding of the research area and rich practical applications. However, in spite of their impressive performance, established face recognition systems remain deficient to some extent in cases of partial occlusion, illumination changes, etc. This is because such systems mainly rely on low-level attributes (e.g. color, texture, shape, and motion), which may change significantly, and hence lose effectiveness, in the presence of image occlusion or illumination variations.

Classical image-based face recognition algorithms can be categorised into appearance-based and model-based approaches. The former normally comprise linear (using basis vectors) and non-linear analysis. These approaches represent an object using raw intensity images, treated as high-dimensional vectors. For example, Beymer (Beymer, 1993) described a pose estimation algorithm to align the probe images to candidate poses of the gallery subjects. Pentland et al. (Pentland et al, 1994) compared the performance of a parametric eigenspace with view-based eigenspaces. Model-based approaches include 2-D and 3-D schemes, in which facial variations are encoded, using prior knowledge, in a model to be constructed. Examples can be found in (Cootes et al, 2002; Lanitis et al, 1997; Romdhani et al, 1999).

As one of the linear appearance algorithms, the well-known Eigenface algorithm (Turk & Pentland, 1991) uses principal component analysis (PCA) for dimensionality reduction in order to find the vectorised components that best represent the faces in the entire image space. The face vectors are projected onto the basis vectors, and the projection coefficients are used as the feature representation of each face image (Turk & Pentland, 1991). Another example of the linear appearance approaches is the application of independent component analysis (ICA). ICA is very similar to PCA except that the distribution of the components is assumed to be non-Gaussian. One ICA-based algorithm is the FastICA scheme, which utilises the InfoMax algorithm (Draper et al, 2003). The Fisherface algorithm (Belhumeur et al, 1996), derived from the Fisher Linear Discriminant (FLD), defines different classes by their differing statistics; faces with similar statistics are grouped together by FLD rules. Tensorface (Vasilescu & Terzopoulos, 2003) employs a higher-order tensor to describe the set of face images and extends singular value decomposition (SVD) to higher-order tensor data. Non-linear appearance algorithms, such as kernel principal component analysis (KPCA) (Yang, 2002), ISOMAP (Tenenbaum et al, 2000) and Locally Linear Embedding (LLE) (Roweis & Saul, 2000), involve much more complicated processing than the linear ones. Unlike classical PCA, KPCA can use more eigenvector projections than the input dimensionality. Meanwhile, ISOMAP and LLE are well established for their stable, topology-preserving embedding capability.
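The FLD idea behind the Fisherface algorithm can be sketched for the simple two-class case: maximise between-class separation relative to within-class scatter, which for two classes reduces to the direction Sw⁻¹(m₁ − m₂). The synthetic data and dimensions below are assumptions for illustration, not face data.

```python
import numpy as np

# Minimal two-class Fisher Linear Discriminant sketch (numpy only).
# Two synthetic classes separated along the first dimension.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5)) + np.array([2, 0, 0, 0, 0])
B = rng.standard_normal((50, 5)) - np.array([2, 0, 0, 0, 0])

mA, mB = A.mean(axis=0), B.mean(axis=0)
# Within-class scatter matrix
Sw = (A - mA).T @ (A - mA) + (B - mB).T @ (B - mB)
# Fisher direction: Sw^{-1} (mA - mB)
w = np.linalg.solve(Sw, mA - mB)
w /= np.linalg.norm(w)

# Samples with similar statistics project to the same side of the midpoint
thresh = ((A @ w).mean() + (B @ w).mean()) / 2
acc = ((A @ w > thresh).mean() + (B @ w < thresh).mean()) / 2
print(round(acc, 2))
```

For more than two classes, the same criterion leads to a generalised eigenvalue problem between the between-class and within-class scatter matrices.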

Model-based face recognition normally contains three steps: model construction, model fitting to the face images, and similarity checking by evaluation of the model parameters. An Active Appearance Model (AAM) is a statistical model that integrates shape variations with the appearance in a shape-normalised frame (Edwards et al, 1998). Model parameters are adjusted so that the difference between the synthesised model and the face image is minimised; face matches are found once this minimisation has been reached. 3-D facial information can better describe faces in the presence of illumination and pose changes, where 2-D descriptors sometimes turn out to be less effective. One example is reported in (Blanz et al, 2002). In that work, a 3-D morphable face model fusing shape and texture was proposed, and an algorithm for extracting the model parameters was established as well.
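The fitting step can be sketched in the AAM spirit with a toy linear model: the synthesised appearance is a linear function of the model parameters, and fitting minimises the residual to the target image. Everything below (basis, parameters, the flattened 100-pixel "image") is a made-up illustration, not the AAM search algorithm itself.

```python
import numpy as np

# Toy model-fitting sketch: appearance = mean + basis @ params,
# fitted by least squares to a noise-free target image.
rng = np.random.default_rng(2)
basis = rng.standard_normal((100, 4))     # appearance modes (100 "pixels")
mean_app = rng.standard_normal(100)       # mean appearance

true_params = np.array([1.0, -0.5, 0.25, 2.0])
target = mean_app + basis @ true_params   # synthesised target face image

# Minimise || basis @ p - (target - mean) ||^2 over the parameters p
fitted, *_ = np.linalg.lstsq(basis, target - mean_app, rcond=None)
print(np.allclose(fitted, true_params))   # True: residual driven to zero
```

A real AAM instead iterates this residual minimisation over shape and appearance parameters jointly, warping the image into the shape-normalised frame at each step.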


Figure 1. Examples of face images in the ORL database.
