Discriminative Moment Feature Descriptors for Face Recognition

Geetika Singh, Indu Chhabra
Copyright: © 2015 | Pages: 17
DOI: 10.4018/IJCVIP.2015070105
Abstract

Zernike Moments (ZMs) are a promising technique for extracting invariant features for face recognition. In previous studies, ZM has been modified into Discriminative ZM (DZM), which selects the most discriminative features for recognition and shows improved results. The present paper proposes a modification of DZM, named Modified DZM (MDZM), which selects coefficients according to their discriminative ability by considering the extent of variability between their class averages. This reduces within-class variations while maintaining between-class differences. The study also applies this feature-selection idea to the recently introduced Polar Complex Exponential Transform (PCET), yielding Discriminative PCET (DPCET). Performance of the techniques is evaluated on the ORL, Yale and FERET databases against pose, illumination, expression and noise variations. MDZM improves accuracy by up to 3.1% over ZM and DZM at reduced feature dimensions, and DPCET provides a further 1.9% improvement at lower computational complexity. Performance is also tested on the LFW database and compared with many other state-of-the-art approaches.
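
As a rough illustration of the selection idea described in the abstract, the sketch below ranks moment coefficients by a Fisher-style ratio of between-class to within-class variance of their magnitudes and keeps the top k. The function name, the exact scoring criterion and the NumPy formulation are assumptions made for illustration; they are not necessarily the scheme used in the paper.

    import numpy as np

    def select_discriminative(features, labels, k):
        """Rank moment coefficients by a Fisher-style score and return
        the indices of the k most discriminative ones.

        features : (n_samples, n_coeffs) array of moment magnitudes
        labels   : (n_samples,) array of class (subject) labels
        k        : number of coefficients to retain
        """
        classes = np.unique(labels)
        overall_mean = features.mean(axis=0)
        between = np.zeros(features.shape[1])   # spread of class averages
        within = np.zeros(features.shape[1])    # spread inside each class
        for c in classes:
            cls = features[labels == c]
            mean_c = cls.mean(axis=0)
            between += len(cls) * (mean_c - overall_mean) ** 2
            within += ((cls - mean_c) ** 2).sum(axis=0)
        score = between / (within + 1e-12)      # high score = discriminative
        return np.argsort(score)[::-1][:k]

Under this assumption, a face image would then be represented only by the selected coefficients, e.g. features[:, select_discriminative(features, labels, 40)], before nearest-neighbour matching.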

Introduction

The success of any face recognition system depends extensively on the discriminative competence of the features extracted to represent facial images. In this regard, several approaches have been reported in the literature, which can be categorized into structural and statistical methods. Structural techniques emphasize individual facial features such as the eyes, nose and mouth, or facial distances (Brunelli & Poggio, 1993; Chellappa & Malsburg, 1992; Cox, Ghosn, & Yianilos, 1996; Kanade, 1973; Lades et al., 1993; Manjunath et al., 1992). Statistical approaches focus on the statistical distribution of the pixels and include methods based on subspaces (Bartlett, Movellan, & Sejnowski, 2002; Belhumeur, Hespanha, & Kriegman, 1996; Liu, Huang, Lu, & Ma, 2002; Martin, 2006; Turk & Pentland, 1991), histograms (Ahonen, Deniz, Bueno, Salido, & Torre, 2011; Hadid & Pietikainen, 2004), filters (Bhuiyan & Liu, 2007; Struc, Gajsek, & Pavešić, 2009), transforms (Hafed & Levine, 2001; Spies & Ricketts, 2000) and moments (Arnold, Madasu, Boles, & Yarlagadda, 2007; Foon, Pang, Jin, & Ling, 2003; Haddadnia, Faez, & Ahmadi, 2003; Pang, Teoh, & Ngo, 2006; Rani, 2012; Saradha & Annadurai, 2005; Singh, Mittal, & Walia, 2011; Singh, Walia, & Mittal, 2011, 2012).

Feature extraction techniques generally follow two approaches to invariant face representation. In the first, images affected by factors such as illumination or pose are first corrected to a standard form, and features are then extracted from the normalized images. In the second, features invariant to these factors are extracted directly from the images. Moment-based methods, which follow the latter approach, have been explored extensively for face recognition in earlier studies owing to their invariance and efficient image reconstruction abilities. The magnitudes of the moments computed up to a given order are used as invariant image descriptors. These methods possess minimal information redundancy, are robust to noise and invariant to rotation, and can be made translation and scale invariant through proper normalization. The Zernike moment (ZM) is considered the most successful among them, with high efficacy and promising results.
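
As a minimal, self-contained sketch of how moment magnitudes act as rotation-invariant descriptors, the following Python/NumPy code computes Zernike moment magnitudes of a grayscale image mapped onto the unit disc. The discretization and normalization details are simplified assumptions for illustration rather than the exact formulation used in the paper.

    import numpy as np
    from math import factorial

    def radial_poly(n, m, rho):
        """Zernike radial polynomial R_nm(rho), defined for n >= m >= 0
        with (n - m) even."""
        R = np.zeros_like(rho)
        for s in range((n - m) // 2 + 1):
            c = ((-1) ** s * factorial(n - s)) / (
                factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)
            )
            R += c * rho ** (n - 2 * s)
        return R

    def zernike_magnitudes(img, max_order):
        """Return |Z_nm| for all valid (n, m) with n <= max_order.

        The image is mapped onto the unit disc; pixels outside the disc
        are discarded.  The magnitudes are invariant to image rotation.
        """
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w].astype(float)
        x = (2 * x - (w - 1)) / (w - 1)      # map columns to [-1, 1]
        y = (2 * y - (h - 1)) / (h - 1)      # map rows to [-1, 1]
        rho = np.hypot(x, y)
        theta = np.arctan2(y, x)
        inside = rho <= 1.0
        f, r, t = img[inside].astype(float), rho[inside], theta[inside]
        dx, dy = 2.0 / (w - 1), 2.0 / (h - 1)   # pixel area on the disc
        feats = []
        for n in range(max_order + 1):
            for m in range(n + 1):
                if (n - m) % 2:              # skip invalid (n, m) pairs
                    continue
                V = radial_poly(n, m, r) * np.exp(-1j * m * t)
                Z = (n + 1) / np.pi * np.sum(f * V) * dx * dy
                feats.append(abs(Z))
        return np.array(feats)

For a 64x64 face image, for example, zernike_magnitudes(img, 10) yields 36 magnitudes that can be concatenated into the feature vector used for matching; a discriminative variant would then retain only the highest-scoring of these coefficients.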
