Photometric Normalization Techniques for Illumination Invariance

Vitomir Štruc (University of Ljubljana, Slovenia) and Nikola Pavešić (University of Ljubljana, Slovenia)
Copyright © 2011 | Pages: 22
DOI: 10.4018/978-1-61520-991-0.ch015

Abstract

Face recognition technology has come a long way since its beginnings in the previous century. Due to its countless application possibilities, it has attracted the interest of research groups from universities and companies around the world. Thanks to this enormous research effort, the recognition rates achievable with state-of-the-art face recognition technology are steadily growing, even though some issues still pose major challenges. Amongst these challenges, coping with illumination-induced appearance variations is one of the biggest and is still not satisfactorily solved. A number of techniques have been proposed in the literature to cope with the impact of illumination, ranging from simple image enhancement techniques, such as histogram equalization, to more elaborate methods, such as anisotropic smoothing or the logarithmic total variation model. This chapter presents an overview of the most popular and efficient normalization techniques that try to solve the illumination variation problem at the preprocessing level. It assesses the techniques on the YaleB and XM2VTS databases and explores their strengths and weaknesses from the theoretical and implementation points of view.

Introduction

Current face recognition technology has evolved to the point where its performance allows for its deployment in a wide variety of applications. These applications typically ensure controlled conditions for the acquisition of facial images and, hence, minimize the variability in appearance across different facial images of a given subject. Commonly controlled external factors in the image capturing process include ambient illumination, camera distance, pose, and facial expression.

In these controlled conditions, state-of-the-art face recognition systems are capable of achieving a performance level that matches that of the more established biometric modalities, such as fingerprints, as shown in recent surveys (Gross et al., 2004; Phillips et al., 2007). However, the performance of the majority of existing face recognition techniques employed in these systems deteriorates in uncontrolled environments. Appearance variations caused by pose, expression, and, most of all, illumination changes pose challenging problems even to the most advanced face recognition approaches. In fact, it was empirically shown that the illumination-induced variability in facial images is often larger than the variability induced by the subject's identity (Adini et al., 1997); to put it differently, images of different faces can appear more similar than images of the same face captured under severe illumination variations.

Due to this susceptibility of existing face recognition techniques to illumination variations, numerous approaches to achieving illumination invariant face recognition have been proposed in the literature. As identified in a number of surveys (Heusch et al., 2005; Chen, W., et al., 2006; Zou et al., 2007), three main research directions have emerged with respect to this issue over the past decades. These directions tackle the problem of illumination variations at either:

  • The pre-processing level

  • The feature extraction level

  • The modeling and/or classification level

When trying to achieve illumination invariant face recognition at the pre-processing level, the employed normalization techniques aim at rendering facial images in such a way that the processed images are free of illumination-induced variations. Clearly, these approaches can be adopted for use with any face recognition technique, as they make no presumptions that could influence the choice of the feature extraction or classification procedures.
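
To illustrate the pre-processing level, below is a minimal NumPy sketch of histogram equalization, the simplest normalization technique named in the abstract. The function name and the fixed 8-bit intensity range are our own illustrative choices, not taken from the chapter:

```python
import numpy as np

def histogram_equalization(image):
    """Remap the intensities of a 2-D uint8 image so that its
    histogram becomes approximately uniform, spreading the pixel
    values over the full [0, 255] range."""
    # Count occurrences of each of the 256 possible gray levels.
    hist = np.bincount(image.ravel(), minlength=256)
    # Cumulative distribution function, normalized to [0, 1].
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    # Build a lookup table mapping old gray levels to new ones.
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[image]
```

Because the mapping depends only on the image's own intensity distribution, globally brighter or darker versions of the same face are pulled toward a common contrast range, which is exactly why such techniques can be placed in front of any feature extractor.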

Approaches from the second group try to achieve illumination invariance by finding face representations that are stable under different illumination conditions. However, as different studies have shown, no representation ensures illumination invariance in the presence of severe illumination changes, even though some representations, such as edge maps (Gao & Leung, 2002), local binary patterns (Marcel et al., 2007), or Gabor-wavelet-based features (Štruc & Pavešić, 2009), are less sensitive to the influence of illumination. That the feature extraction stage cannot fully compensate for illumination variations was also formally proven by Chen et al. (2000).
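
As a concrete example of such a representation, below is a minimal sketch of the basic 8-neighbour, radius-1 local binary pattern (LBP) operator cited above. The function name and border handling are illustrative choices; the key property shown is that the codes compare each pixel only to its neighbours, so any intensity shift applied uniformly to the image leaves them unchanged:

```python
import numpy as np

def lbp_8_1(image):
    """Basic LBP codes for a 2-D uint8 image (8 neighbours, radius 1).
    Each interior pixel gets an 8-bit code whose bits record whether
    each neighbour is >= the centre pixel. Border pixels are excluded,
    so the result is 2 rows and 2 columns smaller than the input."""
    h, w = image.shape
    centre = image[1:-1, 1:-1].astype(np.int16)
    codes = np.zeros_like(centre, dtype=np.uint8)
    # Neighbour offsets, clockwise starting from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int16)
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes
```

Since only the sign of local intensity differences matters, the codes are invariant to any strictly increasing transform of the gray levels; a global illumination change that merely brightens the whole face does not alter them, which is the sense in which the representation is "less sensitive" to illumination.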

The last research direction with respect to illumination invariance focuses on the modeling or classification level. Here, assumptions regarding the effects of illumination on the face model or classification procedure are made first, and, based on these assumptions, countermeasures are then taken to obtain illumination invariant face models or illumination insensitive classification procedures. Examples of these techniques include the illumination cones technique (Georghiades et al., 2001), the spherical harmonics approach (Basri & Jacobs, 2003) and others. While these techniques are amongst the most efficient ways of achieving illumination invariant face recognition, they usually require a large training set of facial images acquired under a number of lighting conditions and are also computationally expensive.
