Analysis of the Performance of Eigenfaces Technique in Recognizing Non-Caucasian Faces

Imran Khan (Liverpool John Moores University, Liverpool, UK) and Sud Sudirman (Department of Information, Media and Computer Entertainment, Liverpool John Moores University, Liverpool, UK)
Copyright: © 2012 |Pages: 14
DOI: 10.4018/ijcvip.2012100104

Abstract

Facial detection and recognition technologies are rapidly becoming an important component of many computer systems, ranging from system security and biometric authentication to online social networks. However, despite many years of research, a perfect solution to facial detection and recognition has not yet been found. As one of the earliest techniques, Eigenfaces has become one of the most popular benchmarks in this field; although it is far from a perfect solution, researchers routinely use it as a baseline against which to compare their proposed algorithms. The authors' survey of the literature on and surrounding facial detection and recognition found a severe lack of tests and comparisons of these techniques on non-Caucasian facial images. This paper aims to shed some light on this gap and to assess the performance of the benchmark technique using non-Caucasian face databases.

Introduction

The face is the primary focus of attention in social intercourse (Turk & Pentland, 1991). Humans can recognise thousands of faces during their lifetime, and they retain this ability even when a face has been slightly or even greatly altered: factors such as aging, hairstyles, facial hair, glasses, and facial expressions can change a face's appearance considerably, yet it may still be recognisable to a human observer. This is not the case with computational models. Because of the complexity and multi-dimensional nature of faces, it is difficult for a computational model to account for every permutation of faces and of the features on those faces. For that reason, no absolute solution exists for face detection and recognition, which is why facial detection remains a key problem in computer vision research.

It is also imperative that facial recognition methods take into account faces from all over the world. As it stands, popular facial recognition databases such as the Yale, PIE, and Harvard databases consist heavily of Caucasian faces, yet the majority of the world's population is non-Caucasian. For a universal solution to exist, people of non-Caucasian backgrounds must also be represented proportionally within tests.

Descriptions of a number of classical face recognition algorithms are given in the remainder of this section.

Eigenfaces for Facial Recognition

The Eigenfaces approach, developed by Sirovich and Kirby (1987), was used by Turk and Pentland (1991) as an approach to facial classification, detection, and recognition. It applies Principal Component Analysis (PCA) to the facial recognition problem. Turk and Pentland aimed to develop a computational model of facial recognition that was fast, simple, and accurate in constrained environments. They achieved this by decomposing images of faces into a small set of characteristic feature images called 'Eigenfaces'. These Eigenfaces are the principal components of the initial training set of face images.
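As a concrete illustration of this decomposition, the following sketch computes a set of eigenfaces from a training set with NumPy. The image size, face count, and pixel data are synthetic placeholders, not taken from the paper; the "snapshot" eigen-decomposition trick shown here is the standard way to compute the principal components when there are far fewer training images than pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a training set: 16 "face images" of 32x32
# pixels, flattened into row vectors (hypothetical sizes, for
# illustration only).
n_faces, h, w = 16, 32, 32
faces = rng.random((n_faces, h * w))

# 1. Subtract the mean face from every image.
mean_face = faces.mean(axis=0)
A = faces - mean_face                    # shape (n_faces, h*w)

# 2. "Snapshot" trick: eigenvectors of the small n x n matrix A A^T
#    yield the principal components without ever forming the huge
#    (h*w) x (h*w) covariance matrix.
L = A @ A.T                              # shape (n_faces, n_faces)
eigvals, eigvecs = np.linalg.eigh(L)     # ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # sort descending instead

# 3. Map the top-k small-space eigenvectors back to image space and
#    normalise them; these rows are the eigenfaces.
k = 8
subspace = (A.T @ eigvecs[:, order[:k]]).T   # shape (k, h*w)
subspace /= np.linalg.norm(subspace, axis=1, keepdims=True)

print(subspace.shape)                    # (8, 1024)
```

Each row of `subspace` is one eigenface; any training or probe image can then be represented by its k projection coefficients onto these rows rather than by its raw pixels.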

Turk and Pentland demonstrated this algorithm to calculate Eigenfaces and classify face images. They used a database of over 2,500 face images that were digitized and processed under controlled conditions, taking pictures of sixteen subjects in all combinations of three head orientations, three scales of head size, and three lighting conditions. The training set was built by selecting an assortment of images of individuals from the database, with exactly one image per individual. The independent variables (head orientation, head scale, and scene illumination) were then tested in different permutations against this training set. Turk and Pentland found that, when every face image was classified as known (i.e., the acceptance threshold was set to infinity), the algorithm produced an average of approximately 96% correct classification across the lighting variations, 85% across the orientation variations, and 64% across the size variations.
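The classification step described above can be sketched as follows. The dimensions, orthonormal basis, and training data here are invented for illustration; the key idea is that a probe image is projected into face space and assigned to the nearest training face, but only if the distance falls below a threshold. An infinite threshold classifies every probe as known, which is the setting behind the figures quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical face space: a k-dimensional orthonormal eigenface basis
# over d-pixel images (sizes invented for illustration).
d, k = 64, 4
mean_face = rng.random(d)
basis = np.linalg.qr(rng.standard_normal((d, k)))[0].T   # shape (k, d)

# One training image per person, as in Turk and Pentland's setup,
# projected into face space once, up front.
train_faces = rng.random((3, d))
train_weights = (train_faces - mean_face) @ basis.T      # shape (3, k)

def classify(image, threshold):
    """Project an image into face space and find the nearest person.

    Returns the index of the closest training face, or None if the
    distance exceeds the threshold (the face is treated as unknown).
    """
    w = (image - mean_face) @ basis.T
    dists = np.linalg.norm(train_weights - w, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else None

# With threshold=np.inf every probe is classified as someone known.
probe = train_faces[1] + 0.01 * rng.standard_normal(d)
print(classify(probe, threshold=np.inf))   # nearest neighbour, likely person 1
```

Lowering the threshold trades recognition coverage for confidence: probes far from every training projection are rejected as unknown instead of being forced onto the nearest person.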
