A Lyapunov Theory-Based Neural Network Approach for Face Recognition

Li-Minn Ang, King Hann Lim, Kah Phooi Seng, Siew Wen Chin
DOI: 10.4018/978-1-60566-798-0.ch002

Abstract

This chapter presents a new face recognition system comprising feature extraction and a Lyapunov theory-based neural network. It first defines face recognition, whose approaches can be broadly divided into (i) feature-based approaches and (ii) holistic approaches; a general review of both is given in the chapter. Face feature extraction techniques, including Principal Component Analysis (PCA) and Fisher's Linear Discriminant (FLD), are discussed. The multilayered neural network (MLNN) and the Radial Basis Function neural network (RBF NN) are then reviewed. Two Lyapunov theory-based neural classifiers, (i) a Lyapunov theory-based RBF NN classifier and (ii) a Lyapunov theory-based MLNN classifier, are designed based on Lyapunov stability theory; the design details are discussed in the chapter. Experiments are performed on two benchmark databases, ORL and Yale, and comparisons with existing conventional techniques are given. Simulation results show good face recognition performance for the Lyapunov theory-based neural network systems.

Introduction

Automatic recognition of human faces in dynamic environments has gained a great deal of interest from the image processing, pattern recognition, neural network, biometrics, and computer vision communities over the past couple of decades. Active research in face recognition has stimulated the rapid development of numerous applications, including access control, human-computer interfaces, security and surveillance, and e-commerce. In this chapter, we first give an overview of face recognition. In general, research on face recognition can be grouped into two categories: (i) feature-based approaches and (ii) holistic approaches. Feature-based approaches extract local features such as the eyes, nose, and mouth to perform spatial face recognition, while holistic approaches match faces as a whole (Mian, 2007).

Recently, artificial neural networks (ANNs) have been widely applied to face recognition because neural network-based classifiers can incorporate both statistical and structural information to achieve better performance than simple minimum-distance classifiers (Chellappa et al., 1995). Two ANN structures are commonly used, namely the multilayered neural network (MLNN) and the Radial Basis Function neural network (RBF NN). MLNNs are popular in face recognition because of their good learning generalization on complex problems (Jain et al., 1999). Conventional MLNN training is mainly based on optimization theory, and a number of weight-updating algorithms have been developed to search for an optimal solution; the gradient-based backpropagation (BP) training algorithms are the most widely used (Valentin et al., 1994). It is well known that gradient-based BP algorithms can converge slowly in practice: the search for the global minimum of the cost function may become trapped at a local minimum during gradient descent, and the global minimum may not be found at all if the MLNN is subject to large bounded input disturbances. Fast error convergence and strong robustness of an MLNN trained with gradient-based BP algorithms therefore cannot be guaranteed.
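
As a concrete illustration of the gradient-descent weight update described above, the sketch below implements one BP step for a toy single-hidden-layer MLNN. It is a minimal sketch, not the chapter's design: the sigmoid hidden layer, linear output, squared-error cost, fixed learning rate, and the names `sigmoid` and `bp_step` are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_step(W1, b1, W2, b2, x, t, lr=0.1):
    """One gradient-descent update on a single training pair (x, t)."""
    # Forward pass
    h = sigmoid(W1 @ x + b1)         # hidden-layer activations
    y = W2 @ h + b2                  # linear output
    # Backward pass: gradients of the squared error E = 0.5 * ||y - t||^2
    e = y - t                        # output error
    dW2 = np.outer(e, h)
    db2 = e
    dh = (W2.T @ e) * h * (1.0 - h)  # chain rule through the sigmoid
    dW1 = np.outer(dh, x)
    db1 = dh
    # Gradient descent takes a fixed step down the local slope, which is
    # exactly why convergence can be slow and can stall at a local minimum.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Usage: a toy 4-input, 3-hidden, 2-output network on one random pair.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)
x, t = rng.standard_normal(4), rng.standard_normal(2)
for _ in range(100):
    W1, b1, W2, b2 = bp_step(W1, b1, W2, b2, x, t)
```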

Alternatively, RBF NNs have been applied to many engineering and scientific applications, including face recognition (Er et al., 2002; Yang & Paindavoine, 2003). RBF NNs possess two significant properties: (i) they are universal approximators (Park & Sandberg, 1991), and (ii) they have a simple topological structure (Lee & Kil, 1991) in which the output is a linearly weighted combination of single-hidden-layer neurons. Because of this linear-weighted combiner, the weights can be determined using least mean square (LMS) and recursive least squares (RLS) algorithms. These algorithms, however, suffer from several drawbacks and limitations. LMS converges slowly and is highly sensitive to the autocorrelation function of the input signals. RLS converges faster, but it depends on the implicit or explicit computation of the inverse of the input signal's autocorrelation matrix; this not only implies a higher computational cost but can also lead to instability problems (Mueller, 1981). Other gradient-search-based training algorithms also suffer from the so-called local-minima problem, i.e., the optimization search may stop at a local minimum of the cost function in the weight space if the initial values are chosen arbitrarily. Once the expression of the cost function is chosen, its structure in the weight space is fixed; the parameter update law is merely a means of searching for the global minimum and cannot change the shape of the cost function in the weight space.
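
To make the linear-weighted-combiner structure and the LMS update concrete, the following is a minimal sketch of an RBF NN output layer trained with LMS. The Gaussian basis functions with fixed, pre-chosen centres and widths, the step size `mu`, and the helper names `rbf_features` and `lms_step` are illustrative assumptions, not the chapter's Lyapunov-based design, which replaces this update rule.

```python
import numpy as np

def rbf_features(x, centres, sigma=1.0):
    """Single hidden layer: one Gaussian response per centre."""
    d2 = np.sum((centres - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lms_step(w, x, t, centres, mu=0.05, sigma=1.0):
    """One LMS update of the linear output weights w."""
    phi = rbf_features(x, centres, sigma)
    e = t - w @ phi          # instantaneous output error
    # The LMS step size and convergence rate depend on the autocorrelation
    # of phi, which is why badly conditioned inputs slow convergence.
    return w + mu * e * phi

# Usage: 5 centres in a 2-D input space, scalar output, toy target sin(x1).
rng = np.random.default_rng(1)
centres = rng.standard_normal((5, 2))
w = np.zeros(5)
for _ in range(200):
    x = rng.standard_normal(2)
    w = lms_step(w, x, np.sin(x[0]), centres)
```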
