Face and Eye Detection

Daijin Kim (Pohang University of Science & Technology, Korea) and Jaewon Sung (LG Electronics, Korea)
Copyright: © 2009 | Pages: 40
DOI: 10.4018/978-1-60566-216-9.ch002


Face detection is the most fundamental step in research on image-based automated face analysis, such as face tracking, face recognition, face authentication, facial expression recognition, and facial gesture recognition. When a novel face image is given, we must know where the face is located and how large its scale is, so that we can limit our attention to the face patch in the image and normalize the scale and orientation of the face patch. Usually, face detection results are not stable; the scale of the detected face rectangle can be larger or smaller than that of the real face in the image. Therefore, many researchers use eye detectors to obtain stably normalized face images. Because the eyes have salient patterns in the human face image, they can be located reliably and used for face image normalization. Eye detection becomes even more important when we want to apply model-based face image analysis approaches.
Chapter Preview


AdaBoost is an adaptive boosting method. It finds a set of optimal weak classifiers that are built from simple rectangular filters. The weak classifier can be represented as

(1) h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and 0 otherwise,

where f_j, θ_j, and p_j are a simple rectangular filter, a threshold, and a parity, respectively. The learning steps of AdaBoost can be summarized as follows (Viola and Jones, 2001; Freund and Schapire, 1995).
1. Prepare two sets of training images, where the two sets consist of object data and non-object data, respectively.

2. Initialize the weights of all the training images uniformly and set the iteration index t = 1.

3. Compute the error rate of each weak classifier using the training images as

(2) ε_j = Σ_i w_i |h_j(x_i) − y_i|,

where i and j are the index of the training images and the index of the weak classifiers, respectively, and y_i is the label of the i-th training image (1 for object, 0 for non-object).

4. Select the weak classifier h_t that has the lowest error rate ε_t.

5. Update the weights of the training images as

(3) w_{t+1,i} = w_{t,i} β_t^{1−e_i},

where e_i = 0 if the i-th training image is classified correctly and e_i = 1 otherwise, and we set β_t as

(4) β_t = ε_t / (1 − ε_t).

6. Normalize the weights so that their sum becomes 1.

7. Check the iteration index t. If t < T, set t = t + 1 and go to step 3.

8. Compute the final strong classifier value as

(5) H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and 0 otherwise,

where α_t = log(1/β_t).
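The steps above can be sketched in Python. This is a minimal illustrative implementation, not the chapter's code: the brute-force threshold search over precomputed feature responses and the small numerical guard on the error rate are assumptions made for the sketch.

```python
import numpy as np

def train_adaboost(features, labels, T):
    """Discrete AdaBoost over threshold ("stump") weak classifiers.

    features: (n_samples, n_features) array of rectangular-filter responses
    labels:   (n_samples,) array of 0 (non-object) / 1 (object)
    Returns a list of (feature_index, threshold, parity, alpha) tuples.
    """
    n, m = features.shape
    w = np.full(n, 1.0 / n)                   # step 2: uniform initial weights
    strong = []
    for _ in range(T):
        w /= w.sum()                          # step 6: normalize weights
        best = None
        for j in range(m):                    # step 3: error of every candidate
            for thresh in np.unique(features[:, j]):
                for parity in (1, -1):
                    h = (parity * features[:, j] < parity * thresh).astype(int)
                    err = np.sum(w * np.abs(h - labels))
                    if best is None or err < best[0]:
                        best = (err, j, thresh, parity, h)
        err, j, thresh, parity, h = best      # step 4: lowest error rate
        err = min(max(err, 1e-10), 1 - 1e-10) # guard: keep beta finite, nonzero
        beta = err / (1.0 - err)              # eq. (4)
        e = (h != labels).astype(int)         # e_i = 0 if correct, 1 otherwise
        w = w * beta ** (1 - e)               # step 5: weight update, eq. (3)
        strong.append((j, thresh, parity, np.log(1.0 / beta)))
    return strong

def classify(strong, x):
    """Step 8: strong classifier H(x) as a weighted vote, eq. (5)."""
    total = sum(a * int(p * x[j] < p * t) for j, t, p, a in strong)
    return int(total >= 0.5 * sum(a for _, _, _, a in strong))
```

In a real detector the features would be Haar-like rectangular filter responses computed over many image windows; here any numeric feature matrix works, which keeps the boosting logic itself easy to follow.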

In order to compute the rectangular filter rapidly, we define the integral image (Crow, 1984). The integral image at a location (x, y) is defined as the sum of the pixels above and to the left of (x, y):

(6) ii(x, y) = Σ_{x'≤x, y'≤y} i(x', y'),

where ii(x, y) is the integral image at location (x, y) and i(x, y) is the pixel value of the original image. The integral image can be computed in a single pass in the following iterative manner:

(7) s(x, y) = s(x, y − 1) + i(x, y), ii(x, y) = ii(x − 1, y) + s(x, y),

where s(x, y) is the cumulative row sum and initially s(x, −1) = 0 and ii(−1, y) = 0. Figure 1 shows how to compute the rectangular filter using the integral image. The sum of region D can be computed with a simple computation as ii(4) + ii(1) − ii(2) − ii(3), where ii(k) denotes the integral image value at corner point k in Figure 1.
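A short Python sketch of this idea, using an equivalent single-pass form of the recursion in (7) and the four-corner lookup; the function names and the inclusive-rectangle convention are assumptions of the sketch.

```python
import numpy as np

def integral_image(img):
    """Single-pass 2-D prefix sum: ii[y, x] = sum of img[:y+1, :x+1]."""
    h, w = img.shape
    ii = np.zeros((h, w))
    for y in range(h):
        s = 0.0                      # cumulative sum along the current row
        for x in range(w):
            s += img[y, x]           # extend the row sum by one pixel
            ii[y, x] = (ii[y - 1, x] if y > 0 else 0.0) + s
    return ii

def region_sum(ii, top, left, bottom, right):
    """Sum over the inclusive rectangle via four corner lookups,
    i.e. the D = 4 + 1 - 2 - 3 combination from Figure 1."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

Once the integral image is built, every rectangular region sum costs only four array lookups regardless of the region's size, which is what makes evaluating thousands of rectangular filters per window affordable.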
