Face Recognition Using RLDA Method Based on Mutated Cuckoo Search Algorithm to Extract Optimal Features

Souheila Benkhaira, Abdesslem Layeb
Copyright: © 2020 | Pages: 16
DOI: 10.4018/IJAMC.2020040106

Abstract

Regularized LDA (R-LDA) is one of the most successful holistic approaches introduced to overcome the "small sample size" (SSS) problem of the LDA method, which is often encountered in Face Recognition (FR) tasks. R-LDA is based on reducing the high variance of the principal components of the within-class scatter matrix in order to optimize the regularized Fisher criterion. In this article, the authors assume that some of these components carry no significant information and can be discarded. To this end, the authors propose CS-RLDA, which uses a cuckoo search (CS) algorithm to select the optimal eigenvectors of the within-class scatter matrix. However, the CS algorithm has a slow convergence speed. To deal with this problem, and to create more diversity and a better trade-off between exploitation and exploration around the best solutions, the authors have modified the basic cuckoo search algorithm with a mutation operator. Experimental results on the ORL and UMIST databases indicate that the proposed method enhances FR performance.
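
To make the selection idea concrete, the following is a minimal, generic sketch of a binary cuckoo search augmented with a bit-flip mutation operator, used to pick a subset of candidate eigenvectors. It is not the authors' exact CS-RLDA procedure: the fitness function (e.g., recognition accuracy of R-LDA restricted to the selected components), population size, abandonment rate pa, and mutation rate pm are all illustrative assumptions.

    import math
    import numpy as np

    rng = np.random.default_rng(0)

    def levy_step(dim, beta=1.5):
        # Mantegna's algorithm: heavy-tailed steps for Levy flights
        num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma = (num / den) ** (1 / beta)
        u = rng.normal(0.0, sigma, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    def mutated_binary_cs(fitness, dim, n_nests=15, pa=0.25, pm=0.05, iters=100):
        # Each nest is a binary mask over the 'dim' candidate eigenvectors
        nests = rng.random((n_nests, dim)) < 0.5
        fits = np.array([fitness(n) for n in nests])
        best, best_fit = nests[fits.argmax()].copy(), fits.max()
        for _ in range(iters):
            for i in range(n_nests):
                # Levy flight toward the best nest, binarized by a sigmoid
                step = levy_step(dim) * (nests[i].astype(float) - best.astype(float))
                new = rng.random(dim) < 1.0 / (1.0 + np.exp(-step))
                # mutation operator: flip a few random bits for extra diversity
                flip = rng.random(dim) < pm
                new = np.where(flip, ~new, new)
                f = fitness(new)
                if f > fits[i]:
                    nests[i], fits[i] = new, f
            # abandon a fraction pa of the worst nests and rebuild them randomly
            worst = fits.argsort()[: int(pa * n_nests)]
            nests[worst] = rng.random((len(worst), dim)) < 0.5
            fits[worst] = [fitness(n) for n in nests[worst]]
            if fits.max() > best_fit:
                best, best_fit = nests[fits.argmax()].copy(), fits.max()
        return best  # binary mask of selected eigenvectors

The mutation step is what distinguishes this sketch from plain binary cuckoo search: occasional random bit flips keep the population diverse even after most nests have converged toward the current best mask.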

Introduction

Automatic Face Recognition (FR) has become one of the most interesting research fields, aiming essentially to ensure security surveillance, telecommunication, and intelligent human-computer interaction. The universality, non-intrusiveness, and high social acceptability of facial characteristics, together with the development of capture devices, have made the face one of the most widely used biometric technologies. Numerous FR techniques have been proposed. They can be classified into two main categories: geometric-based methods and holistic-based methods. Geometric-based methods extract local features such as eye, nose, and mouth locations. This category includes local binary patterns (LBP) (Garg & Rajput, 2014), elastic bunch graph matching (EBGM) (Wiskott, Fellous, Krüger, & Von Der Malsburg, 1997, September), and feature extraction by Gabor filters (Lee, 1996). Holistic-based approaches, the most promising category, extract a holistic characteristic of the entire face region. The basic idea is to transform facial data from the high-dimensional image space to a lower-dimensional subspace, called the feature space, in which classification is applied. This category is itself divided into linear and non-linear approaches. The common linear reduction techniques are principal component analysis (PCA) (Turk & Pentland, 1991), linear discriminant analysis (LDA) (Belhumeur, Hespanha, & Kriegman, 1997; Lu, Plataniotis, & Venetsanopoulos, 2003), and independent component analysis (ICA) (Draper, Baek, Bartlett, & Beveridge, 2003). Non-linear techniques include exponential discriminant analysis (EDA) (Zhang, Fang, Tang, Shang, & Xu, 2010), Laplacian eigenmaps (He, Yan, Hu, Niyogi, & Zhang, 2005; Raducanu & Dornaika, 2010, May), and diffusion maps (Hagen, Smith, Banasuk, Coifman, & Mezie, 2007, December). It should be noted that PCA and LDA are the two most popular linear reduction techniques used in face recognition.
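
As a minimal illustration of this holistic pipeline (a sketch, not code from the article), the following Python fragment projects ORL-style face images onto a PCA subspace and classifies them in that feature space; the 60-component subspace and nearest-neighbour classifier are illustrative choices.

    # Holistic FR sketch: reduce 4096-dimensional face vectors to a
    # low-dimensional feature space, then classify in that subspace.
    from sklearn.datasets import fetch_olivetti_faces  # the ORL face database
    from sklearn.model_selection import train_test_split
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    faces = fetch_olivetti_faces()  # 400 images of 40 subjects, 64x64 pixels
    X_train, X_test, y_train, y_test = train_test_split(
        faces.data, faces.target, test_size=0.25,
        stratify=faces.target, random_state=0)

    pca = PCA(n_components=60).fit(X_train)   # learn the face subspace
    clf = KNeighborsClassifier(n_neighbors=1) # classify in the feature space
    clf.fit(pca.transform(X_train), y_train)
    print("accuracy:", clf.score(pca.transform(X_test), y_test))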

Turk and Pentland proposed the Eigenface method, which uses PCA to reduce the dimension into a face subspace by maximizing the variance across all samples; the images are then linearly projected onto this subspace (Belhumeur et al., 1997; Turk & Pentland, 1991). Subsequently, the LDA method for face classification was developed, and LDA-based algorithms were shown to outperform PCA (Belhumeur et al., 1997). This method focuses on maximizing the Fisher criterion, the ratio between the between-class scatter and the within-class scatter. Thus, unlike PCA, LDA maximizes the discriminatory information between classes of samples. Nevertheless, LDA seriously suffers from the so-called "small sample size" (SSS) problem (Chen, Liao, Ko, Lin, & Yu, 2000; Lu et al., 2003), where the number of available training samples is smaller than the dimensionality of the samples (Raudys & Jain, 1991). To overcome the SSS problem, different methods have been proposed. The traditional one is Fisherface (Belhumeur et al., 1997; Zhang & Ruan, 2010), which uses PCA as a preprocessing step to reduce the dimension of the within-class scatter matrix by eliminating its null principal components, and then applies LDA in the retained PCA subspace (Belhumeur et al., 1997). However, the discarded null principal components may contain significant discriminatory information. Researchers (Chen et al., 2000; Yu & Yang, 2001) proposed the direct-LDA (D-LDA) method, in which LDA is performed directly in the original high-dimensional input space. However, D-LDA fails to deliver good performance when training samples are insufficient: the estimates of the very small eigenvalues of the within-class scatter matrix have high variance (Lu, Plataniotis, & Venetsanopoulos, 2005). To overcome this problem, researchers proposed Regularized LDA (R-LDA) (Lu et al., 2005), which introduces a regularization parameter η that increases, and thereby stabilizes, the smaller eigenvalues.
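
For reference, the two criteria involved can be written explicitly. The notation below follows the cited R-LDA paper (Lu et al., 2005), where S_b and S_w denote the between-class and within-class scatter matrices:

    % Classical Fisher criterion maximized by LDA
    J(\Psi) = \frac{|\Psi^{T} S_{b} \Psi|}{|\Psi^{T} S_{w} \Psi|}

    % Regularized Fisher criterion of R-LDA (Lu et al., 2005),
    % with regularization parameter 0 <= \eta <= 1
    J_{R}(\Psi) = \frac{|\Psi^{T} S_{b} \Psi|}{\eta\,|\Psi^{T} S_{b} \Psi| + |\Psi^{T} S_{w} \Psi|}

When η = 0 the criterion reduces to the standard form used by D-LDA, while η > 0 damps the influence of the unreliable small eigenvalues of the within-class scatter.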
