1. Introduction
In recent years, face recognition has received a great deal of attention and has become an active research topic in computer vision, image processing, pattern recognition, and machine learning. It has spread into many applications, such as biometric systems, access control and information security systems, surveillance systems, content-based video retrieval systems, credit-card verification systems, and, more generally, image understanding. A face recognition system is a computer application that analyzes digital images or video frames to automatically identify a person or verify a person's identity. Figure 1 shows the framework of a face recognition system. It generally involves two stages:
- Face detection: the image is searched to locate any faces it contains.
- Face recognition: each detected face is compared against a database of known faces to decide who that person is.
Figure 1. Face recognition system framework
The core of any face recognition system is its feature extraction technique, which must extract features from the face image that remain distinct and stable under the varying conditions of image acquisition, such as illumination variation, random noise, and alignment error, all of which degrade detection and recognition accuracy (Turk & Pentland, 1991). For feature description and representation, two common types of techniques exist: global (subspace-based) methods and local appearance-based methods.
One of the earliest subspace-based methods applied to face recognition is principal component analysis (PCA), better known as the eigenface method (Turk & Pentland, 1991). More recently, other global features such as independent component analysis (ICA) (Yuen & Lai, 2002) and gradientfaces (Zhang et al., 2009) have shown promising results for image representation in face detection and recognition. However, all of these representations degrade under illumination variation and alignment error.
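To make the eigenface idea concrete, the sketch below computes the principal components of a set of flattened face images and projects a face onto them to obtain its feature vector. This is a minimal illustration of PCA via SVD, not the exact pipeline of Turk and Pentland; the function names and the choice of `k` are illustrative.

```python
import numpy as np

def eigenfaces(faces, k):
    """Compute the top-k eigenfaces from a matrix of flattened face images.

    faces: (n_samples, n_pixels) array, one flattened face per row.
    Returns the mean face and the k leading principal components.
    """
    faces = np.asarray(faces, dtype=np.float64)
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centred data: rows of vt are the principal directions,
    # i.e. the eigenvectors of the sample covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

def project(face, mean_face, components):
    """Project one flattened face onto the eigenface subspace."""
    return components @ (np.asarray(face, dtype=np.float64) - mean_face)
```

Recognition then reduces to comparing projected feature vectors, for example by nearest neighbour in the subspace.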
The most successful group of local appearance-based feature extraction algorithms builds on spatial histograms of local pattern descriptors, including the local binary pattern (Ojala et al., 1996), the local directional pattern (Jabid et al., 2010), and the local directional number pattern (Rivera et al., 2013). These descriptors have been widely used for face recognition and facial expression recognition, since they combine computational simplicity with robustness to uncontrolled conditions such as illumination variation, random noise, and alignment error. Their main goal is to extract image features that remain distinct and stable under the varying conditions of image acquisition.
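The spatial histogram model mentioned above can be sketched as follows: a code image produced by any local pattern descriptor is divided into a grid of regions, each region's code histogram is computed, and the histograms are concatenated so that coarse spatial layout is preserved alongside local texture. The grid size and bin count below are illustrative defaults, not values prescribed by the cited papers.

```python
import numpy as np

def spatial_histogram(codes, grid=(4, 4), n_bins=256):
    """Build a spatially enhanced histogram from a local-pattern code image.

    codes: 2-D array of integer descriptor codes (e.g. LBP codes in 0..255).
    Returns the concatenation of one histogram per grid region.
    """
    codes = np.asarray(codes)
    hists = []
    # split the code image into grid[0] x grid[1] regions
    for band in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(band, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            hists.append(hist)
    return np.concatenate(hists)
```

Two such concatenated histograms can be compared with a histogram distance (e.g. chi-square) to match faces.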
The original local binary pattern (LBP) operator was introduced by Ojala et al. (1996) for texture analysis and has proved to be a simple yet powerful approach to describing local structures. It has been used extensively in applications such as face image analysis (Ahonen et al., 2004; Hadid et al., 2004), image and video retrieval (Huijsmans & Sebe, 2003; Grangier & Bengio, 2008), and biomedical and aerial image analysis (Oliver et al., 2007; Kluckner et al., 2007). LBP has been exploited for facial representation in a range of tasks, including face detection (Jin et al., 2004; Zhang et al., 2007; Zhang & Zhao, 2004), face recognition (Chan et al., 2007; Li et al., 2005; Zhao et al., 2005), facial expression analysis, and demographic (gender, race, age, etc.) classification (Feng et al., 2004; Yang & Ai, 2007; Shan et al., 2009). The development of the LBP methodology is well illustrated by facial image analysis, where most of its recent variants have been proposed (Heikkia et al., 2006).
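The basic LBP operator of Ojala et al. thresholds the 8 neighbours of each pixel against the centre value and packs the resulting bits into an 8-bit code. A minimal NumPy sketch of this 3x3 operator is given below; the neighbour ordering (and hence the bit weights) is one conventional choice among several equivalent ones.

```python
import numpy as np

def lbp_image(img):
    """Compute the basic 3x3 LBP code for each interior pixel.

    Each of the 8 neighbours is compared with the centre pixel; a
    neighbour >= centre contributes a 1-bit, and the bits are weighted
    by powers of two, yielding a code in 0..255 per pixel.
    """
    img = np.asarray(img, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # neighbour offsets in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes += (neighbour >= center).astype(np.int32) << bit
    return codes
```

Because each code depends only on the sign of local intensity differences, the descriptor is invariant to any monotonic change in illumination, which is the property that makes LBP robust in uncontrolled environments.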