Hybrid Biometrics and Watermarking Authentication

Kareem Kamal A. Ghany (Beni-Suef University, Egypt) and Hossam M. Zawbaa (Babes-Bolyai University, Romania)
Copyright: © 2017 |Pages: 25
DOI: 10.4018/978-1-5225-1703-0.ch003


There are many tools and techniques that can support management in the information security field, and authentication plays an important role in dealing with any kind of security. In biometrics, a human being is identified based on unique personal characteristics and parameters. In this book chapter, the researchers present an automatic Face Recognition and Authentication Methodology (FRAM). The most significant contribution of this work is the use of three face recognition methods: the Eigenface, the Fisherface, and color histogram quantization. Finally, the researchers propose a hybrid approach based on a DNA encoding process and embedding the resulting data into a face image using the discrete wavelet transform. In the reverse process, the researchers perform DNA decoding based on the data extracted from the face image.
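The hybrid approach summarized above can be illustrated with a minimal sketch. The 2-bit base mapping (00→A, 01→C, 10→G, 11→T), the one-level Haar transform, and the additive, non-blind embedding in the HH sub-band are all assumptions made here for illustration; the chapter's actual DNA encoding rules and DWT embedding scheme may differ:

```python
import numpy as np

BASES = "ACGT"  # assumed mapping: 00->A, 01->C, 10->G, 11->T

def dna_encode(data: bytes) -> str:
    """Encode bytes as a nucleotide string, two bits per base."""
    return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

def dna_decode(seq: str) -> bytes:
    """Invert dna_encode (four bases per byte)."""
    v = [BASES.index(c) for c in seq]
    return bytes((v[i] << 6) | (v[i + 1] << 4) | (v[i + 2] << 2) | v[i + 3]
                 for i in range(0, len(v), 4))

def haar2d(x):
    """One-level 2-D Haar transform; returns LL, LH, HL, HH sub-bands."""
    x = x.astype(float)
    a, d = (x[:, ::2] + x[:, 1::2]) / 2, (x[:, ::2] - x[:, 1::2]) / 2
    return ((a[::2] + a[1::2]) / 2, (a[::2] - a[1::2]) / 2,
            (d[::2] + d[1::2]) / 2, (d[::2] - d[1::2]) / 2)

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d."""
    h, w = LL.shape
    a, d = np.empty((2 * h, w)), np.empty((2 * h, w))
    a[::2], a[1::2] = LL + LH, LL - LH
    d[::2], d[1::2] = HL + HH, HL - HH
    x = np.empty((2 * h, 2 * w))
    x[:, ::2], x[:, 1::2] = a + d, a - d
    return x

def embed_bits(img, bits, alpha=4.0):
    """Additively embed bits into the HH sub-band (+alpha for 1, -alpha for 0)."""
    LL, LH, HL, HH = haar2d(img)
    flat = HH.ravel().copy()
    flat[: len(bits)] += alpha * (2 * np.asarray(bits) - 1)
    return ihaar2d(LL, LH, HL, flat.reshape(HH.shape))

def extract_bits(marked, original, n):
    """Non-blind extraction: compare HH sub-bands of marked and original images."""
    diff = (haar2d(marked)[3] - haar2d(original)[3]).ravel()[:n]
    return [1 if d > 0 else 0 for d in diff]
```

In this sketch the payload would first be DNA-encoded, its bases converted back to bit pairs, and the bits embedded in the wavelet coefficients; extraction recovers the bits by comparison with the original image and then applies `dna_decode`.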
Chapter Preview

The Proposed Face Recognition and Authentication System

The proposed Face Recognition and Authentication System is composed of three main phases: pre-processing, feature extraction, and classification and authentication. Figure 1 describes the structure of the Face Recognition and Authentication System.

Pre-Processing Phase

By means of early vision techniques, face images are normalized and enhanced to improve the recognition performance of the system. The following pre-processing steps can be implemented in a face recognition system:

  • Image Size Normalization: Because Principal Components Analysis (PCA) and Linear Discriminant Analysis (LDA) involve multiplication of arrays, it is important to normalize the size of all images. This is done by resizing all images to a default size, such as the 112 x 92 pixels of the ORL database used in this work, which guarantees that information about the eyes, nose, and mouth is not lost in potentially small versions of the images.

  • Illumination Normalization: The general purpose of illumination normalization (Huang et al., 2008) is to decrease the effect of lighting when the observed images are captured in different lighting environments. A common approach is to adjust observed images to approximate ones captured under a standard lighting condition.

  • Histogram Equalization: Histogram equalization (HE) (Histogram et al., 2005) adjusts an image so that each intensity level contains an approximately equal number of pixels, improving the appearance of the image by balancing light and dark areas. When applied to small regions such as faces, HE is a simple but very robust way to obtain light correction. The goal of HE is to maximize the contrast of the input image, producing an output image whose histogram is as close to uniform as possible. HE does not remove the effect of a strong light source, but by maximizing the entropy of the image it reduces the effect of differences in illumination within the same “setup” of light sources. By doing so, HE makes facial recognition a somewhat simpler task.

Two examples of HE of images can be seen in Figure 2.
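The size-normalization and histogram-equalization steps above can be sketched in NumPy as follows (a minimal sketch: `resize_nearest` and `hist_equalize` are illustrative names, and a production system would typically use a library such as OpenCV instead):

```python
import numpy as np

def resize_nearest(img, out_h=112, out_w=92):
    """Nearest-neighbour resize of an 8-bit grayscale image to the default ORL size."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows[:, None], cols]

def hist_equalize(img):
    """Histogram equalization: map intensities through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Stretch the CDF so the output histogram is as close to uniform as possible
    # (assumes the image is not a single flat intensity).
    lut = (cdf - cdf.min()) * 255 // (cdf[-1] - cdf.min())
    return lut.astype(np.uint8)[img]
```

After equalization the darkest and brightest pixels map to 0 and 255 respectively, which is the contrast-maximizing behaviour described above.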

Figure 1.

The face recognition and authentication system: general structure

Figure 2.

Histogram equalization

