Advances in Moving Face Recognition


Hui Fang (Swansea University, UK), Nicolas Costen (Manchester Metropolitan University, UK), Phil Grant (Swansea University, UK) and Min Chen (Swansea University, UK)
DOI: 10.4018/978-1-60960-477-6.ch010

Abstract

This chapter describes approaches to extracting features via the motion subspace in order to improve face recognition from moving face sequences. Although identity subspace analysis has achieved reasonable recognition performance on static face images, there has more recently been growing interest in motion-based face recognition. This chapter reviews several state-of-the-art techniques that exploit motion information for recognition and investigates permuted distinctive motion similarity in the motion subspace. The motion features extracted from the motion subspaces are evaluated within a verification experimental framework. The experimental results show that the correlations between motion eigen-patterns significantly improve recognition performance.

Introduction

Over the last couple of decades, automatic computerised face recognition techniques have been developed to secure access to confidential information in both the virtual world of the internet and the real world. Recognition systems are required to achieve high performance in a variety of environments, such as wide camera angles, different illuminations and various expressions.

Identity subspace learning techniques have improved significantly since Eigen-faces (Turk, 1991) were first designed for face recognition. Linear Discriminant Analysis (LDA) (Belhumeur, 1997), Active Appearance Models (AAM) (Cootes, 2001), Independent Component Analysis (ICA) (Bartlett, 2002) and other subspace methods, including kernel-based techniques (Torrs, 2002), have been proposed to provide face manifolds that accurately describe the range of possible facial characteristics. In addition, a number of algorithms (Plataniotis, 2003; Costen, 2002) have been developed to alleviate the major forms of extraneous variation in facial modeling, commonly summarised as PIE (pose, illumination and expression).
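These linear subspace methods share a common core: learn a basis from training faces and compare images by their projection coefficients. A minimal sketch of the Eigen-faces idea (Turk, 1991) is shown below, using NumPy on toy data; the array sizes and function names are illustrative, not taken from the chapter.

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the mean face and top-k eigenfaces from flattened images.

    images: (n_samples, n_pixels) array of flattened face images.
    Rows of vt from the SVD of the centred data are the principal
    directions (eigenfaces) of the identity subspace.
    """
    mean = images.mean(axis=0)
    centred = images - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, basis):
    """Project a flattened face onto the learned identity subspace."""
    return basis @ (image - mean)

# Toy example: ten random 8x8 "faces" (64 pixels each)
rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 64))
mean, basis = eigenfaces(faces, k=4)
coeffs = project(faces[0], mean, basis)
print(coeffs.shape)  # a 4-dimensional identity code per face
```

Recognition then reduces to comparing these low-dimensional coefficient vectors (e.g. by Euclidean distance), which is the pattern LDA, ICA and kernel variants refine with different projection criteria.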

Although identity subspace techniques have achieved good recognition results, motion-based face recognition is expected to improve performance further, because face sequences contain more information than single images. These advanced techniques could be applied to real-world problems. A number of commercial security systems based on face alignment and recognition have been developed and deployed in highly confidential environments. They usually require constrained conditions such as constant illumination and a near-frontal view, and they are vulnerable to attacks by malicious individuals who, for example, wear facial masks resembling genuine users or simulate parameters from a forged face model similar to the system model. New subspace learning algorithms, largely relying on recently developed automatic means of tracking facial distortion within sequences, can reduce these problems by analysing advanced motion models.

Psychological studies have also shown that dynamic information contributes to recognition. Lander et al. (Lander, 2005) in particular show a significant beneficial effect of non-rigid movements, concluding that some familiar faces have characteristic motion patterns which act as an additional cue to recognition. O'Toole et al. (O'Toole, 2002) likewise suggest that motion features, called dynamic signatures, can help in identification. From these psychological results, we can expect motion features to improve computerised face recognition as well.

In (O'Toole, 2002), it is noted that facial motion modeling provides at least three factors which can be used to improve recognition, largely derived from human psychological studies, which have been a fertile area of research in recent years. The first centres on the construction of a 3D structured face representation from temporally tracked 2D face movements. The second involves the robust estimation of the mean 2D appearance of the face from multiple frames. The third investigates the distinctive movements shown by an individual face by encoding the variation in appearance within a single sequence. Extensive experiments show that recognition performance is significantly improved by the inclusion of these factors.

To simulate the processes indicated by the human experiments, two kinds of framework are widely explored for exploiting motion. Most algorithms, such as (Zhou, 2003), use probabilistic approaches to improve recognition; in psychological research this kind of motion information is referred to as robust confirmation from multiple frames. Other algorithms (Yamaguchi, 1998; Arandjelović, 2006; Edwards, 1999) correlate the statistical distributions of face sequences, corresponding to psychological dynamic signatures.
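The first framework, robust confirmation from multiple frames, can be illustrated by fusing per-frame match scores into one sequence-level decision. The sketch below uses a geometric mean of hypothetical per-frame probabilities; it is an illustrative stand-in for the idea, not the exact propagation scheme of (Zhou, 2003).

```python
import numpy as np

def sequence_score(frame_probs):
    """Fuse per-frame match probabilities into one sequence-level score.

    frame_probs: iterable of P(identity | frame) values in (0, 1].
    Averaging log-likelihoods (a geometric mean) lets many frames
    confirm an identity while a single poor frame cannot dominate.
    """
    logs = np.log(np.asarray(frame_probs, dtype=float))
    return float(np.exp(logs.mean()))

print(sequence_score([0.9, 0.8, 0.85]))  # consistent frames: high score
print(sequence_score([0.9, 0.8, 0.1]))   # one outlier frame lowers, but does not zero, the score
```

A single-image recogniser has only one (possibly unlucky) sample of this evidence, which is why sequence-level fusion tends to be more robust.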

In this chapter, to show the advantages of exploiting face motion, we investigate using permuted distinctive motion similarity in the motion subspace to improve face recognition, as the counterpart to the psychological work by Lander et al. (Lander, 2005). The characteristic motion distributions are extracted by permuting the eigenvectors derived from the concatenation of the parameters encoded by a statistical model, followed by correlating the pairs of gallery and probe sequences. This motion feature can then be combined efficiently with the identity feature to achieve better recognition performance than using the identity feature alone. Although the AAM+LDA subspace is more suitable for encoding the identity feature, the motion similarity extracted solely from the geometric model is robust in capturing an individual's dynamic signature.
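The general shape of such a sequence-to-sequence comparison can be sketched as follows: build a motion eigen-basis from each sequence's model-parameter trajectory, then correlate the two bases. The sketch below compares subspaces via the cosines of their principal angles; this is a generic hypothetical stand-in for the chapter's permuted-eigenvector matching, and all names and shapes are assumptions.

```python
import numpy as np

def motion_basis(params, k):
    """Top-k eigenvectors of one sequence's parameter trajectory.

    params: (n_frames, n_params) model parameters tracked over a
    sequence (e.g. geometric/shape parameters per frame).
    """
    centred = params - params.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[:k]

def motion_similarity(gallery, probe, k=3):
    """Correlate the motion eigen-patterns of two sequences.

    The singular values of the basis overlap are the cosines of the
    principal angles between the two motion subspaces; their mean is
    1.0 for identical motion subspaces and smaller for unrelated ones.
    """
    bg = motion_basis(gallery, k)
    bp = motion_basis(probe, k)
    cosines = np.linalg.svd(bg @ bp.T, compute_uv=False)
    return float(cosines.mean())

# Toy example: two 20-frame sequences of 10 tracked parameters
rng = np.random.default_rng(1)
seq_a = rng.normal(size=(20, 10))
seq_b = rng.normal(size=(20, 10))
print(motion_similarity(seq_a, seq_a))  # self-similarity is 1.0
print(motion_similarity(seq_a, seq_b))
```

In a verification setting, this motion score would be fused with the identity-subspace score, matching the chapter's claim that the combination outperforms the identity feature alone.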
