Improving Gender Classification Using an Extended Set of Local Binary Patterns

Abbas Roayaei Ardakany, Mircea Nicolescu, Monica Nicolescu
DOI: 10.4018/ijmdem.2014070103

Abstract

In this article, the authors designed and implemented an efficient gender recognition system with high classification accuracy. To this end, they propose a novel local binary descriptor capable of extracting more informative and discriminative local features for gender classification. Traditional local binary patterns compactly encode the relationship between a central pixel value and the values of its neighboring pixels. In the proposed method, the authors incorporate additional neighborhood information into the descriptor by using extra patterns. They evaluated their approach on the standard FERET and CAS-PEAL databases, and the experiments show that it outperforms techniques based on state-of-the-art descriptors such as LBP, LDP, and HOG. The results demonstrate the effectiveness and robustness of the proposed system, which achieves 98.33% classification accuracy.

Introduction

The human face is an important biometric feature, and being able to automatically recognize or classify faces is a challenging task in the object recognition research area. Successfully performing this task enables many applications in human computer interaction, psychology, and security (Mäkinen & Raisamo, 2008). Prior research has shown that it is possible to obtain information on ethnicity, identity, age, gender, and expression from face images (B. Wu, Ai, & Huang, 2003b). This paper investigates a new approach that helps in gender classification from face images.

Gender plays a significant role in our interactions in society and with computer systems (J. Wu, Smith, & Hancock, 2010). Gender classification is the binary problem of deciding whether a given face image depicts a man or a woman. Identifying gender from face images has received much attention recently due to its applications in improving search engine retrieval accuracy, demographic data collection, and human-computer interfaces (adjusting software behavior with respect to the user's gender) (Alexandre, 2010). Furthermore, gender classification can be used as a preprocessing step for face recognition, since it can halve the number of face candidates to match against, assuming an equal number of images of each gender in the gallery. Such preprocessing can sometimes double the speed of face recognition systems (Alexandre, 2010; Mäkinen & Raisamo, 2008).

Similar to other image classification tasks, relevant features must be extracted first and then a classifier is applied. With regard to feature extraction, there are several types of methods. First, a very basic approach is to simply use gray-scale or color pixel values as features (Moghaddam & Yang, 2000). Second, many methods apply techniques such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), which transform images into a lower-dimensional space (Balci & Atalay, 2002). The main drawback of this group of methods is that they are sensitive to variations irrelevant to gender, such as face orientation, and cannot tolerate large changes in this respect (H. C. Lian & Lu, 2006). Third, texture information such as wrinkles and complexion has been used (Iga, Izumi, Hayashi, Fukano, & Ohtani, 2003). Alexandre (2010) combined local binary patterns (LBP) with intensity and shape features (histograms of edge directions) in a multi-scale fusion approach, Ylioinas et al. (2011) combined LBP with contrast information, and Shan (2012) used AdaBoost to learn discriminative LBP histogram bins. Finally, it is possible to extract explicit facial features for classification, for example through the analysis of facial wrinkles and other shapes (H. C. Lian & Lu, 2006); this is performed using a combination of facial feature detection with wavelet transforms (Hosoi, Takikawa, & Kawade, 2004; H.-C. Lian, Lu, Takikawa, & Hosoi, 2005).
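To make the baseline concrete, the traditional LBP descriptor referenced above can be sketched as follows. This is only an illustrative implementation of the standard 8-neighbor operator (threshold each neighbor against the center pixel, pack the bits into a code, and histogram the codes), not the authors' extended descriptor; the function names and the clockwise bit ordering are arbitrary choices for this sketch.

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbor LBP code for a 3x3 grayscale patch.

    Each neighbor is thresholded against the central pixel; the
    resulting bits are packed into a single byte (0-255).
    """
    center = patch[1, 1]
    # Clockwise neighbor order starting at the top-left corner
    # (any fixed ordering works; it only permutes the codes).
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << i
    return code

def lbp_histogram(image):
    """256-bin histogram of LBP codes over all interior pixels.

    The histogram (often computed per image region and concatenated)
    is the feature vector handed to the classifier.
    """
    h, w = image.shape
    hist = np.zeros(256, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(image[y - 1:y + 2, x - 1:x + 2])] += 1
    return hist
```

In practice the face image is divided into a grid of regions and the per-region histograms are concatenated, so that the feature vector retains coarse spatial layout; the extended descriptor proposed in the article augments this scheme with additional neighborhood patterns.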
