A Review of Facial Feature Detection Algorithms

Stylianos Asteriadis, Nikos Nikolaidis, Ioannis Pitas
Copyright: © 2011 | Pages: 20
DOI: 10.4018/978-1-61520-991-0.ch003

Abstract

Facial feature localization is an important task in numerous applications of face image analysis, including face recognition and verification, facial expression recognition, driver's alertness estimation and head pose estimation. The area has therefore been a very active research field for many years, and a multitude of methods appear in the literature. Depending on the targeted application, the proposed methods have different characteristics and are designed to perform in different setups; a method of general applicability thus remains beyond the current state of the art. This chapter offers an up-to-date literature review of facial feature detection algorithms. A review of the image databases and performance metrics that are used to benchmark these algorithms is also provided.

Introduction

Face image processing and analysis is a research field that deals with the extraction and analysis of information related to human faces from sources such as still images, video and 3D data. This research area, which can be positioned within the general field of computer vision, has attracted the interest of a significant part of the research community in recent years. Face image analysis deals with a variety of problems that include face detection and tracking, face recognition and verification, facial expression and emotion recognition, facial feature detection and tracking, eye gaze tracking, virtual face synthesis and animation, etc.

Facial feature (or landmark) detection in images and videos deals with estimating the position of prominent features of a human face. The detected features are, in most cases, the eyes, the eyebrows, the nose and the mouth. The result of the detection can be either a list of characteristic points on those features, namely the eye corners, the center of the iris, the corners or the center of the eyebrows, the nose tip, the nostrils, and the corners or the center of the mouth, or the corresponding image region, defined either by its bounding box or, more accurately, by its contour (e.g. the lips contour). A considerable amount of work has been published recently in this area. The increasing interest in facial feature detection methods is due to their wide range of applications within the field of face image analysis. For example, face recognition and verification methods (Lam & Yan, 1998; Karungaru, Fukumi, & Akamatsu, 2004; Campadelli & Lanzarotti, 2004) used in security or access control applications frequently involve a pre-processing step of facial feature localization. The detected features can subsequently be used for the registration of the test image with the images in the face database or for the actual face recognition/verification task. Furthermore, the need for man-machine interaction paradigms has created strong interest in facial expression recognition, head pose and eye gaze estimation, and facial feature tracking techniques (Bhuiyan, Ampornaramveth, Muto, & Ueno, 2003; Yilmaz & Shah, 2002). Such techniques often include a facial feature detection step in order to perform image registration, initialize the tracking of facial features, etc. Also, applications such as drivers' attention assessment (Smith, Shah, & Vitoria Lobo, 2003) or visual speech understanding require the detection of fiducial points on faces, whereas face detection techniques sometimes involve a facial feature detection step in order to verify that the candidate region is indeed a face.
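As an illustration of the two kinds of output described above (characteristic points versus feature regions), the following is a minimal Python sketch of one possible way to represent detection results. The class and field names (FacialLandmarks, Region, etc.) and the example coordinates are illustrative assumptions, not taken from any particular method in the literature.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

Point = Tuple[int, int]  # (x, y) pixel coordinates


@dataclass
class Region:
    """A detected facial feature region, given as a bounding box and,
    optionally, a more accurate contour (e.g. the lips contour)."""
    bounding_box: Tuple[int, int, int, int]   # (x, y, width, height)
    contour: Optional[List[Point]] = None     # ordered contour points


@dataclass
class FacialLandmarks:
    """Possible output of a facial feature detector: characteristic
    points (eye corners, iris centers, nose tip, nostrils, mouth
    corners) and/or feature regions (eyes, eyebrows, nose, mouth)."""
    points: Dict[str, Point] = field(default_factory=dict)
    regions: Dict[str, Region] = field(default_factory=dict)


# Example: a detector might populate the structure like this.
result = FacialLandmarks(
    points={"left_eye_inner_corner": (120, 95), "nose_tip": (150, 140)},
    regions={"mouth": Region(bounding_box=(125, 165, 60, 30))},
)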

Depending on the approach, facial feature detection methods utilize facial geometry, luminance, edge, color and shape information. In their vast majority, the published methods involve a face detection step and search for facial features only within the detected face region. These methods give more robust results than the second category of methods, which do not involve face detection and search for facial features over the entire image. In the first group of approaches no scaling problems exist, as these are resolved at the face detection step. On the contrary, methods that belong to the second category must make an initial estimate of the expected dimensions of the sought facial features. As a consequence, their use is limited to applications where the distance between the camera and the face is approximately known. Such applications include driver's attention determination (Smith et al., 2003), face recognition in controlled image acquisition setups, etc.
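To make the first, face-detection-driven category more concrete, the following is a minimal sketch of such a two-step pipeline, assuming OpenCV and its stock Haar cascade classifiers as an illustrative (not prescribed) choice of face and eye detectors; the image path is a placeholder.

import cv2

# Illustrative detectors: OpenCV's bundled Haar cascades for faces and eyes.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

image = cv2.imread("face.jpg")  # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Step 1: face detection. The returned boxes fix the location and scale
# of each face, so no separate estimate of feature dimensions is needed.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Step 2: search for facial features (here, the eyes) only within
    # the detected face region.
    face_roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face_roi, scaleFactor=1.1,
                                        minNeighbors=5)
    for (ex, ey, ew, eh) in eyes:
        # Eye coordinates are relative to the face region; convert them
        # back to image coordinates before drawing.
        cv2.rectangle(image, (x + ex, y + ey),
                      (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

cv2.imwrite("face_with_eyes.png", image)

Because the face detector fixes the position and scale of the face, the subsequent feature search operates over a small region at a roughly known scale, which is precisely the robustness advantage of the first category noted above.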
