A Robust Facial Feature Tracking Method Based on Optical Flow and Prior Measurement

Guoyin Wang, Yong Yang, Kun He
DOI: 10.4018/978-1-4666-1743-8.ch017

Abstract

Cognitive informatics (CI) is a research area that spans many interdisciplinary topics. Visual tracking is not only an important topic in CI, but also a hot topic in computer vision and facial expression recognition. In this paper, a novel and robust facial feature tracking method is proposed, which takes Kanade-Lucas-Tomasi (KLT) optical flow as its basis. A prior measurement method, consisting of pupil detection, feature restriction, and error estimation, is used to improve the predictions. Simulation results show that the proposed method outperforms traditional optical flow tracking. Furthermore, the proposed method is applied in a real-time emotion recognition system, where good recognition results are achieved.

Introduction

Cognitive Informatics (CI) is the transdisciplinary study of the internal information processing mechanisms and processes of Natural Intelligence (NI), that is, human brains and minds, and their engineering applications in computing, ICT, and the healthcare industries. It is a cutting-edge and multidisciplinary research area that tackles fundamental problems shared by modern informatics, computation, software engineering, AI, cybernetics, cognitive science, neuropsychology, medical science, systems science, philosophy, linguistics, economics, management science, and the life sciences (Wang, 2009; Wang, 2007a; Wang & Kinsner, 2006).

As an aspect of the major cognitive processes, emotion has been studied by many researchers (Wang, 2007b; Picard, 2003). In research on affect and emotion, Picard proposed affective computing (AC), which deals with recognizing, expressing, modeling, communicating, and responding to emotion (Ahn & Picard, 2006; Picard, 2003). Emotion recognition is one of the most basic and important modules of AC. Some progress has been achieved in emotion recognition; however, to recognize human emotions in real time, several foundational problems must first be addressed, such as face detection and feature tracking.

With the rapid development of computing technology since the last decades of the 20th century, visual tracking has become a very active topic in computer vision (Hou & Han, 2006; Wang, Hu, & Tan; Moeslund & Granum, 2001). Visual tracking can be further classified into model-based, motion-based, facial feature-based, and neural network-based tracking, among others. Among these methods, facial feature tracking, which consists of detecting facial features, computing their shifts, and predicting their new locations, is a prerequisite for tasks such as pattern recognition and 3D reconstruction (Hou & Han, 2006). Visual tracking can be applied to both non-sequential images and image sequences. Classical facial feature tracking methods for non-sequential images include AAM and ASM (Cootes, Edwards, & Taylor, 2001; Cootes, Taylor, Cooper, & Graham, 1995). For sequential image frames, the representative method is KLT optical flow (Lucas & Kanade, 1981; Tomasi & Kanade, 1991; Shi & Tomasi, 1994).

The KLT (Kanade-Lucas-Tomasi) algorithm was proposed for image alignment (Lucas & Kanade, 1981; Tomasi & Kanade, 1991; Shi & Tomasi, 1994). The goal of the KLT algorithm is to align a template image T(x), comprising a group of points, to an input image I(x). The KLT algorithm uses the sum of squared intensity differences (SSD) as the measurement to be minimized for each tracking window.
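To make the SSD measurement concrete, the following minimal sketch computes it for a single square tracking window under a pure-translation motion model. The exhaustive search shown here is only for illustration; the actual KLT algorithm solves for the shift with gradient-based iterative minimization rather than brute-force search, and the function names and search radius are assumptions, not part of the original papers.

```python
# Illustrative sketch of the SSD measurement minimized by KLT-style tracking.
# Assumes grayscale images as NumPy arrays; not the authors' implementation.
import numpy as np

def ssd(template, patch):
    """Sum of squared intensity differences between two equal-sized windows."""
    diff = template.astype(np.float32) - patch.astype(np.float32)
    return float(np.sum(diff * diff))

def search_best_shift(template, image, x, y, radius=5):
    """Find the integer shift (dx, dy) around (x, y) that minimizes SSD."""
    h, w = template.shape
    best = (0, 0, np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if y + dy < 0 or x + dx < 0:
                continue
            patch = image[y + dy:y + dy + h, x + dx:x + dx + w]
            if patch.shape != template.shape:
                continue
            err = ssd(template, patch)
            if err < best[2]:
                best = (dx, dy, err)
    return best  # (dx, dy, minimal SSD)
```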

Duan et al. (2004) used the KLT algorithm to track facial feature points. They aligned all points of interest on the first image of a sequence and computed the points' shifts on each subsequent image with the KLT algorithm.
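A minimal sketch of this frame-to-frame tracking loop is given below, using OpenCV's pyramidal Lucas-Kanade implementation. The video path, corner-detection parameters, and window size are illustrative placeholders, not the settings used by Duan et al. (2004).

```python
# Sketch of frame-to-frame KLT tracking: select points on the first frame,
# then compute their shifts on each following frame with pyramidal LK.
import cv2

cap = cv2.VideoCapture("face_sequence.avi")  # hypothetical input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Select well-textured points of interest on the first frame.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Compute each point's shift on the next frame with pyramidal LK.
    new_points, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None, winSize=(21, 21), maxLevel=3)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
```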

In the work of Yan and Su (1998), 12 feature points were chosen, and restrictions were imposed according to facial statistical information. However, error estimation was not considered in their work, which resulted in an average error of 2-3 pixels.

Yu and Li (2005) proposed an expression recognition method based on optical flow. In their work, 26 points were chosen as emotional feature points, each representing the center of a 13×13 window. An optical flow algorithm was used to track these feature points in each frame, and a neural network-based classifier was trained for expression recognition. The characteristics of the tracking method were not discussed. An average recognition rate of 88.38% was reported in their paper.
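The sketch below shows one way such per-point displacements could be extracted with a 13×13 tracking window and flattened into a feature vector for a downstream classifier. The point coordinates, the helper name, and the feature layout are assumptions made for illustration; Yu and Li (2005) do not publish this code.

```python
# Hedged sketch: track 26 facial points with a 13x13 LK window and build a
# displacement feature vector for an expression classifier (assumed design).
import numpy as np
import cv2

def track_emotion_points(prev_gray, gray, points_26):
    """points_26: float32 array of shape (26, 1, 2) from the previous frame."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points_26, None, winSize=(13, 13), maxLevel=2)
    displacements = (new_pts - points_26).reshape(-1, 2)  # (dx, dy) per point
    # Flatten to a 52-dimensional vector to feed a neural-network classifier.
    return displacements.flatten(), new_pts, status
```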
