Face and Eye Detection


Daijin Kim, Jaewon Sung
Copyright: © 2009 |Pages: 40
DOI: 10.4018/978-1-60566-216-9.ch002

Abstract

Face detection is the most fundamental step in research on image-based automated face analysis, such as face tracking, face recognition, face authentication, facial expression recognition, and facial gesture recognition. When a novel face image is given, we must know where the face is located and how large its scale is, so that we can limit our attention to the face patch in the image and normalize the scale and orientation of that patch. Usually, face detection results are not stable; the scale of the detected face rectangle can be larger or smaller than that of the real face in the image. Therefore, many researchers use eye detectors to obtain stably normalized face images. Because the eyes form salient patterns in the human face image, they can be located reliably and used for face image normalization. Eye detection becomes even more important when model-based face image analysis approaches are applied.
Chapter Preview

AdaBoost

AdaBoost is an adaptive boosting method. It finds a set of optimal weak classifiers that are made of simple rectangular filters. A weak classifier can be represented as

\( h_j(x) = \begin{cases} 1 & \text{if } p_j f_j(x) < p_j \theta_j, \\ 0 & \text{otherwise}, \end{cases} \) (1)

where \( f_j \), \( \theta_j \), and \( p_j \) are a simple rectangular filter, a threshold, and a parity, respectively. The learning steps of AdaBoost can be summarized as follows (Viola and Jones, 2001; Freund and Schapire, 1995).
  1. Prepare two sets of training images, where the two sets consist of object data and non-object data, respectively.

  2. Set the weights \( w_i \) of all the training images uniformly and set the iteration index \( t = 1 \).

  3. Compute the error rate of each weak classifier using the training images as

     \( \epsilon_j = \sum_i w_i \, \lvert h_j(x_i) - y_i \rvert, \) (2)

     where \( i \) and \( j \) are the index of the training images and the index of the weak classifiers, respectively.

  4. Select the weak classifier \( h_t \) that has the lowest error rate \( \epsilon_t \).

  5. Update the weights of the training images as

     \( w_{t+1,i} = w_{t,i} \, \beta_t^{1 - e_i}, \) (3)

     where \( e_i = 0 \) if the training image \( x_i \) is classified correctly and \( e_i = 1 \) otherwise, and we set \( \beta_t \) as

     \( \beta_t = \frac{\epsilon_t}{1 - \epsilon_t}. \) (4)

  6. Normalize the weights \( w_{t+1,i} \) so that their sum becomes 1.

  7. Check the iteration index \( t \). If \( t < T \), set \( t = t + 1 \) and go to step 3.

  8. Compute the final strong classifier value as

     \( C(x) = \begin{cases} 1 & \text{if } \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t, \\ 0 & \text{otherwise}, \end{cases} \) (5)

     where \( \alpha_t = \log \frac{1}{\beta_t} \).
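The learning steps above can be sketched in Python. In this sketch the weak classifiers are single-feature threshold rules evaluated on precomputed rectangular-filter responses; the function name `train_adaboost` and the small numerical guard against a zero error rate are illustrative additions, not part of the chapter's procedure.

```python
import numpy as np

def train_adaboost(F, y, T):
    """AdaBoost over threshold weak classifiers (steps 1-8 above).

    F : (n_samples, n_features) precomputed rectangular-filter responses
    y : (n_samples,) labels, 1 for object, 0 for non-object
    T : number of boosting rounds
    """
    n, m = F.shape
    w = np.full(n, 1.0 / n)              # step 2: uniform weights
    strong = []
    for _ in range(T):
        best = None
        for j in range(m):               # step 3: error of each classifier
            for theta in np.unique(F[:, j]):
                for p in (1, -1):
                    h = (p * F[:, j] < p * theta).astype(int)  # eq. (1)
                    eps = np.sum(w * np.abs(h - y))            # eq. (2)
                    if best is None or eps < best[0]:
                        best = (eps, j, theta, p, h)
        eps, j, theta, p, h = best       # step 4: lowest error rate
        eps = max(eps, 1e-10)            # guard for eps == 0 on toy data
        beta = eps / (1.0 - eps)         # eq. (4)
        e = np.abs(h - y)                # e_i = 0 if correct, 1 otherwise
        w = w * beta ** (1 - e)          # eq. (3)
        w = w / w.sum()                  # step 6: normalize weights
        strong.append((j, theta, p, np.log(1.0 / beta)))       # alpha_t

    def classify(x):                     # eq. (5): strong classifier
        total = sum(a for (_, _, _, a) in strong)
        score = sum(a * int(p * x[j] < p * th) for (j, th, p, a) in strong)
        return int(score >= 0.5 * total)

    return strong, classify
```

The exhaustive search over all (feature, threshold, parity) triples mirrors the original cascade training, which is why Viola–Jones training is expensive while detection is cheap.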

In order to compute the rectangular filter \( f(x) \) rapidly, we define the integral image (Crow, 1984). The integral image at a location \( (x, y) \) is defined by the sum of the pixels above and to the left of \( (x, y) \), inclusive, as

\( ii(x, y) = \sum_{x' \le x, \; y' \le y} i(x', y'), \) (6)

where \( ii(x, y) \) is the integral image at location \( (x, y) \) and \( i(x, y) \) is the pixel value of the original image. The integral image can be computed in the following iterative manner:

\( s(x, y) = s(x, y-1) + i(x, y), \qquad ii(x, y) = ii(x-1, y) + s(x, y), \) (7)

where \( s(x, y) \) is the cumulative row sum, and initially \( s(x, -1) = 0 \) and \( ii(-1, y) = 0 \). Figure 1 shows how to compute a rectangular filter using the integral image. The sum of region D can be obtained by the simple computation \( ii(4) + ii(1) - ii(2) - ii(3) \), where 1, 2, 3, and 4 are the corner points shown in Figure 1.
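Equations (6)-(7) and the four-corner box sum can be sketched as follows. Since Figure 1 is not reproduced here, the corner convention in `box_sum` (point 1 above-left of the region, point 4 at its bottom-right) is an assumption matching the usual integral-image diagram.

```python
import numpy as np

def integral_image(img):
    """Compute ii(x, y) by the recursion of Eq. (7):
    s(x, y) = s(x, y-1) + i(x, y);  ii(x, y) = ii(x-1, y) + s(x, y),
    with s(x, -1) = 0 and ii(-1, y) = 0."""
    h, w = img.shape
    s = np.zeros((h, w))     # cumulative row sums
    ii = np.zeros((h, w))
    for x in range(h):
        for y in range(w):
            s[x, y] = (s[x, y - 1] if y > 0 else 0) + img[x, y]
            ii[x, y] = (ii[x - 1, y] if x > 0 else 0) + s[x, y]
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over the rectangle rows top..bottom, cols left..right using
    four corner lookups: ii(4) + ii(1) - ii(2) - ii(3)."""
    total = ii[bottom, right]                       # point 4
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]              # point 1
    if top > 0:
        total -= ii[top - 1, right]                 # point 2
    if left > 0:
        total -= ii[bottom, left - 1]               # point 3
    return total
```

Once the integral image is built in a single pass, any rectangular filter response costs only a handful of lookups, independent of the rectangle's size — the key to real-time cascade detection.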
