Pose and Illumination Invariance with Compound Image Transforms


Lior Shamir (National Institute on Aging, USA)
Copyright: © 2011 | Pages: 15
DOI: 10.4018/978-1-61520-991-0.ch016

Abstract

While current face recognition algorithms provide convincing performance on frontal face poses, recognition is far less effective when the pose and illumination conditions vary. Here the authors show how compound image transforms can be used for face recognition under varying pose and illumination conditions. The method works by first dividing each image into four equal-sized tiles. Then, image features are extracted from the face images, from transforms of the images, and from transforms of transforms of the images. Finally, each image feature is assigned a Fisher score, and test images are classified using a simple Weighted Nearest Neighbor rule with the Fisher scores as weights. Experimental results on the full color FERET dataset show that, with no parameter tuning, the rank-10 recognition accuracy for frontal, quarter-profile, and half-profile images is ~98%, ~94%, and ~91%, respectively. The proposed method also achieves perfect accuracy on several other face recognition datasets, such as Yale B, ORL, and JAFFE. An important feature of this method is that the recognition accuracy improves as the number of subjects in the dataset grows.
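The classification step described in the abstract — score each feature with a Fisher discriminant, then classify with a nearest-neighbor rule weighted by those scores — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the tiny synthetic dataset, and the plain weighted squared-L2 distance are all assumptions made for the sketch.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher discriminant score per feature: variance of the class means
    (between-class scatter) divided by the mean within-class variance.
    Features that separate the classes well get large scores."""
    classes = np.unique(y)
    class_means = np.array([X[y == c].mean(axis=0) for c in classes])
    class_vars = np.array([X[y == c].var(axis=0) for c in classes])
    between = class_means.var(axis=0)          # spread of class means
    within = class_vars.mean(axis=0)           # average in-class spread
    return between / (within + 1e-12)          # epsilon avoids divide-by-zero

def classify(train_X, train_y, test_x, weights):
    """Weighted Nearest Neighbor: squared L2 distance with per-feature
    weights (here, the Fisher scores); return the label of the closest
    training sample."""
    dists = ((train_X - test_x) ** 2 * weights).sum(axis=1)
    return train_y[np.argmin(dists)]

# Toy example (hypothetical data): feature 0 discriminates the two
# subjects, feature 1 is noise; the Fisher weighting suppresses the noise.
X = np.array([[0.0, 5.0], [0.1, 9.0], [1.0, 5.1], [1.1, 9.2]])
y = np.array([0, 0, 1, 1])
w = fisher_scores(X, y)
print(classify(X, y, np.array([0.05, 9.5]), w))   # classified by feature 0
```

In the chapter's pipeline the feature vectors would come from the compound transforms of the image tiles; the weighting is what lets a large, mostly uninformative feature set still classify well, since low-scoring features contribute almost nothing to the distance.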
Chapter Preview

One of the common approaches to face recognition under pose variations is correcting for the pose before applying a face recognition method. Beymer (1993) applied a pose estimation step before geometrically aligning the probe images to gallery images, and reported good results on a dataset with minimal pose variations. Vetter and Poggio (1997) rotated a single face image to synthesize a view at a given angle, an approach used by Lando and Edelman (1995) and Georghiades, Belhumeur, and Kriegman (2001) to perform face recognition in different poses. Pentland et al. (1994) extended the eigenface method (Turk & Pentland, 1991) to handle different views. Cootes, Wheeler, Walker, and Taylor (2002) trained separate models for several different poses, and used heuristics to select the model for a given probe image.
