Illumination, Pose and Occlusion Invariant Face Recognition from Range Images Using ERFI Model

Suranjan Ganguly, Debotosh Bhattacharjee, Mita Nasipuri
Copyright © 2015 | Pages: 20
DOI: 10.4018/ijsda.2015040101

Abstract

In this paper, the pivotal contribution of the authors is the recognition of 3D face images from range images in an unconstrained environment, i.e. under varying illumination, pose, and occlusion, which is considered one of the most challenging tasks in the domain of face recognition. During this investigation, face images have been normalized in terms of pose registration as well as occlusion restoration using the ERFI (Energy Range Face Image) model. 3D face images are inherently illumination invariant due to their point-based representation of data along three axes. Here, in addition to quantitative analysis, a subjective analysis is also carried out. Synthesized datasets have been created to investigate the recognition performance on the Frav3D and Bosphorus databases using SIFT- and SURF-based features. Moreover, a weighted fusion of these individual feature sets is also performed. These feature sets have then been classified by K-NN and the Sequence Matching Technique, achieving maximum recognition rates of 99.17% and 98.81% for the Frav3D and GavabDB databases respectively.
Article Preview

1. Introduction

Face recognition is an important biometric (Nandi et al., 2014) modality as well as a challenging task in the domain of computer vision. Although numerous biometric traits exist, such as fingerprint, iris, DNA, palm print, ear shape, and heart rate, the human face has attracted much of the researchers' attention due to its uniqueness, easy availability, and the possibility of acquisition without the consent of the individual. Enormous advancements have been made in this area over the last decades. Moreover, owing to advances in sensing technology (i.e. acquisition mechanisms) and the availability of sufficient computing power, 3D face image based recognition techniques have also gained (Scheenstra et al., 2005; Ganguly et al., 2015b) considerable attention.

The performance of any face recognition system mainly suffers for three reasons, namely (1) pose, (2) illumination, and (3) occlusion. The illumination (or light shading) problem can be handled efficiently by 3D face images: by their inherent nature, 3D images preserve the face data along three axes (X, Y, and Z) as point clouds, i.e. depth data (Z) in the X-Y plane. Unlike 2D images, they are not affected by the illumination of the face by different light sources (or shading). However, even in a set of frontal face images, the presence of occluded faces will degrade the performance of any well-established recognition algorithm. Pose variations create a similar situation to that of occluded faces: due to rotations along yaw, pitch, and roll, some portion of the face region is suppressed, which ultimately causes a poor recognition rate.

To deal with these challenges, i.e. pose and occlusion, the authors have carried out various investigations in this work. An input 3D face image is first processed to create its corresponding depth map (Ganguly et al., 2014a; Conde et al., 2006), or 2.5D range face image, which preserves only the depth values. Then the ERFI (Ganguly et al., 2014b) model is used to register rotated faces to a frontal (or near-frontal) position. In the case of occluded faces, various techniques, such as GPCA, the ERFI model, and eigenface images, are applied to restore the missing part of the face after reconstruction of the occluded region(s). After that, a synthesized face dataset is created, consisting only of frontal range face images that are neutral, contain facial actions (i.e. expressions), or have been registered and restored. These are recognized by K-NN and the Sequence Matching Technique using SIFT (Lenc et al., 2013) and SURF based feature extraction. Moreover, a weighted fusion mechanism is followed to create a new fused (or hybrid) feature vector, aiming at a higher recognition rate. Besides validating the algorithm on the synthesized dataset, it has also been examined on different sub-groups from the two databases (e.g. occluded, illuminated, rotated, and frontal with various expressions) along with the range face images from the original databases, to establish the superiority of its recognition performance. In Figure 1, the overall proposed recognition scheme is illustrated.
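As a concrete illustration of the first preprocessing step described above, the short sketch below projects a 3D face point cloud onto the X-Y plane and keeps the depth (Z) value at each grid cell, producing a 2.5D range face image. It is a minimal sketch only: the grid size, the rule for cells hit by several points, and the 8-bit rescaling are illustrative assumptions, and the paper's own cropping, smoothing, and hole-filling steps are not reproduced.

import numpy as np

def point_cloud_to_range_image(points, grid_size=(100, 100)):
    """Project an N x 3 point cloud (X, Y, Z) onto the X-Y plane and keep
    the depth value Z per cell, giving a 2.5D range face image.

    Minimal sketch: grid size, cell-conflict rule, and 8-bit scaling are
    illustrative assumptions, not the paper's exact preprocessing.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Map X and Y coordinates to integer grid indices.
    xi = ((x - x.min()) / (np.ptp(x) + 1e-9) * (grid_size[1] - 1)).astype(int)
    yi = ((y - y.min()) / (np.ptp(y) + 1e-9) * (grid_size[0] - 1)).astype(int)

    # For every cell keep the point closest to the sensor (largest Z here).
    range_img = np.full(grid_size, -np.inf)
    np.maximum.at(range_img, (yi, xi), z)
    range_img[np.isinf(range_img)] = z.min()   # empty cells -> background depth

    # Rescale depth values to 8-bit grey levels.
    range_img = (range_img - range_img.min()) / (np.ptp(range_img) + 1e-9)
    return (range_img * 255).astype(np.uint8)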

Figure 1.

Proposed framework of robust face recognition mechanism

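To make the feature extraction and classification stage of Figure 1 more concrete, the sketch below extracts SIFT (and, where an opencv-contrib build with the non-free modules is available, SURF) descriptors from a range face image, pools them into fixed-length vectors, fuses the two with illustrative weights, and classifies with K-NN. The mean-descriptor pooling, the weights weight_sift and weight_surf, and the 1-nearest-neighbour setting are assumptions for illustration; they are not the paper's exact weighted fusion rule, nor its Sequence Matching Technique.

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fused_face_descriptor(range_img, weight_sift=0.6, weight_surf=0.4):
    """Build one fixed-length feature vector per range face image.

    Hypothetical fusion: the mean SIFT and mean SURF descriptors are
    L2-normalised, weighted, and concatenated. SURF needs an opencv-contrib
    build with non-free modules; otherwise SIFT alone is used.
    """
    sift = cv2.SIFT_create()
    _, sift_desc = sift.detectAndCompute(range_img, None)
    sift_vec = sift_desc.mean(axis=0) if sift_desc is not None else np.zeros(128)
    sift_vec = sift_vec / (np.linalg.norm(sift_vec) + 1e-9)

    try:
        surf = cv2.xfeatures2d.SURF_create()
        _, surf_desc = surf.detectAndCompute(range_img, None)
        surf_vec = surf_desc.mean(axis=0) if surf_desc is not None else np.zeros(64)
        surf_vec = surf_vec / (np.linalg.norm(surf_vec) + 1e-9)
        return np.hstack([weight_sift * sift_vec, weight_surf * surf_vec])
    except (AttributeError, cv2.error):
        return sift_vec                      # SURF unavailable in this build

# Usage sketch (train_imgs, train_labels, test_imgs are assumed to hold the
# synthesized range face images and their subject identities):
# X_train = np.stack([fused_face_descriptor(img) for img in train_imgs])
# knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)
# predicted_ids = knn.predict(np.stack([fused_face_descriptor(img) for img in test_imgs]))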

Hence, the contributions of the authors in this paper can be summarized as follows:
