1. Introduction
With the development of society, the demand for identity validation has been increasing rapidly, so biometrics has received growing attention in recent years; it plays an important role in almost every aspect of new security measures, from access control points to terrorist identification. The ear has certain advantages over other biometrics because of desirable properties such as universality, uniqueness and permanence (Iannarelli, 1989; Chang, Bowyer, Sarkar & Victor, 2003).
In recent years, discriminant subspace analysis has been extensively studied in computer vision and pattern recognition. One popular method is Linear Discriminant Analysis (LDA), also known as the Fisher Linear Discriminant (FLD). It seeks an optimal linear transformation that maximizes the between-class scatter while minimizing the within-class scatter [3, 4]. More specifically, in terms of the between-class scatter matrix $S_B$ and the within-class scatter matrix $S_W$, Fisher's criterion can be written as

$$J(W) = \frac{\left| W^T S_B W \right|}{\left| W^T S_W W \right|} \qquad (1)$$

By maximizing the criterion $J(W)$, the Fisher Linear Discriminant finds the subspaces in which the classes are most linearly separable. The solution that maximizes $J(W)$ is a set of eigenvectors $w_i$ which must satisfy

$$S_B w_i = \lambda_i S_W w_i \qquad (2)$$

This is called the generalized eigenvalue problem. The discriminant subspace is spanned by the generalized eigenvectors. The discriminability of each eigenvector is measured by the corresponding generalized eigenvalue, i.e., the most discriminant direction corresponds to the maximal generalized eigenvalue. The generalized eigenvalue problem can be solved by matrix inversion and eigendecomposition, i.e., applying the eigendecomposition to $S_W^{-1} S_B$. Unfortunately, for many applications with high-dimensional data and few training samples, such as ear recognition, the scatter matrix $S_W$ is singular, because the dimension of the sample data is generally greater than the number of samples. This is known as the undersampled or small sample size problem (Theodoridis & Koutroumbas, 2008; Friedman, 1989; Fukunaga, 1990).
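The classical FLD solution described above can be sketched in a few lines of NumPy. The function name `fisher_lda` and its signature are illustrative, not from the original; the sketch builds $S_B$ and $S_W$ from labeled samples and solves the generalized eigenvalue problem (2) via the eigendecomposition of $S_W^{-1} S_B$, so it assumes $S_W$ is invertible and will fail in the undersampled regime discussed below.

```python
import numpy as np

def fisher_lda(X, y, n_components):
    """Classical Fisher LDA: solve S_B w = lambda * S_W w.

    X: (n_samples, n_features) data matrix, y: class labels.
    A sketch only; assumes S_W is nonsingular, which does not hold
    when n_features exceeds n_samples (the small sample size problem).
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_B = np.zeros((d, d))  # between-class scatter
    S_W = np.zeros((d, d))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_B += Xc.shape[0] * np.outer(mc - mean_all, mc - mean_all)
        S_W += (Xc - mc).T @ (Xc - mc)
    # Generalized eigenproblem solved as ordinary eigendecomposition
    # of S_W^{-1} S_B; keep the directions with largest eigenvalues.
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(-eigvals.real)[:n_components]
    return eigvals.real[order], eigvecs.real[:, order]
```

For a two-class problem, $S_B$ has rank one, so a single discriminant direction carries all the separability measured by criterion (1).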
In the last decade, many methods have been proposed to solve this problem (Belhumeur, Hespanha & Kriegman, 1997; Liu & Wechsler, 1998; Yu & Yang, 2001; Chen, Liao, Lin, Ko & Yu, 2000; Huang, Liu, Lu & Ma, 2002). Each of these methods has its own drawback: it either discards discriminant information useful for classification or incurs a high computational cost. Cevikalp et al. [11] proposed a method called Discriminative Common Vectors (DCV), which solves the above problems successfully. However, when DCV is applied directly to high-dimensional sample images, the computational expense of training is still relatively large.
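To make the small sample size problem that motivates DCV-style methods concrete, the following quick check (sample counts and dimensions are illustrative, not from the original) shows that with fewer samples than dimensions the within-class scatter matrix $S_W$ is necessarily rank-deficient, so the inverse $S_W^{-1}$ required by the classical FLD solution does not exist.

```python
import numpy as np

# Illustration of the small sample size problem: n_samples < dim,
# as is typical for ear images flattened into pixel vectors.
rng = np.random.default_rng(1)
n_samples, dim = 20, 100
X = rng.normal(size=(n_samples, dim))
y = np.repeat([0, 1], n_samples // 2)

S_W = np.zeros((dim, dim))
for c in (0, 1):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    S_W += (Xc - mc).T @ (Xc - mc)

# Each class contributes rank at most (n_c - 1), so
# rank(S_W) <= n_samples - n_classes = 18, far below dim = 100:
# S_W is singular and cannot be inverted.
print(np.linalg.matrix_rank(S_W))
```

Methods such as DCV work inside the null space of $S_W$ rather than inverting it, which is why they avoid this failure mode.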