Introduction
Direct LDA (D-LDA) (Yu & Yang, 2001) is an important feature extraction method for SSS problems. It first maps samples into the range of the between-class scatter matrix, and then transforms these projections using a series of regulating matrices. D-LDA can efficiently extract features directly from a high-dimensional input space without the need to first apply other dimensionality reduction techniques such as the PCA transformation in Fisherfaces (Belhumeur, Hespanha, & Kriegman, 1997) or pixel grouping in nullspace LDA (N-LDA) (Chen, Liao, Ko, Lin, & Yu, 2000), and as a result it has attracted considerable interest in the fields of pattern recognition and computer vision. Indeed, there are now many extensions of D-LDA, such as fractional D-LDA (Lu, Plataniotis, & Venetsanopoulos, 2003a), regularized D-LDA (Lu, Plataniotis, & Venetsanopoulos, 2003b; Lu, Plataniotis, & Venetsanopoulos, 2005), kernel D-LDA (Lu, Plataniotis, & Venetsanopoulos, 2003c), and boosting D-LDA (Lu, Plataniotis, Venetsanopoulos, & Li, 2006).
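The two-stage procedure described above can be sketched as follows. This is a minimal NumPy illustration of the D-LDA idea (diagonalize the between-class scatter, keep only its range, then diagonalize the projected within-class scatter); the function name, the tolerance `eps`, and the ordering convention are our own choices for this sketch, not part of the original formulation.

```python
import numpy as np

def dlda_transform(X, y, eps=1e-10):
    """Sketch of D-LDA's two-stage diagonalization.

    X : (n, d) array of samples; y : (n,) integer class labels.
    Returns a projection matrix W; features are obtained as W.T @ x.
    """
    classes = np.unique(y)
    d = X.shape[1]
    m = X.mean(axis=0)

    # Between-class (Sb) and within-class (Sw) scatter matrices.
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - m, mc - m)
        Sw += (Xc - mc).T @ (Xc - mc)

    # Stage 1: diagonalize Sb, keep only its range (nonzero eigenvalues),
    # and whiten so that the projected Sb becomes the identity.
    lam_b, V = np.linalg.eigh(Sb)
    keep = lam_b > eps * lam_b.max()
    Z = V[:, keep] / np.sqrt(lam_b[keep])  # Z.T @ Sb @ Z == I

    # Stage 2: diagonalize the projected within-class scatter; directions
    # with small within-class spread are the most discriminative, so we
    # order them by ascending eigenvalue.
    lam_w, U = np.linalg.eigh(Z.T @ Sw @ Z)
    order = np.argsort(lam_w)
    return Z @ U[:, order]
```

Note that the rank of the between-class scatter is at most C − 1 for C classes, so the returned transform has at most C − 1 columns; this is what allows D-LDA to operate directly in the high-dimensional input space without a preliminary PCA step.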
Nonetheless, some questions remain about its usefulness as a facial feature extraction method. First, as has been pointed out in Lu, Plataniotis, and Venetsanopoulos (2003b, 2005), D-LDA performs badly when only two or three samples per individual are available. Second, the regulating matrices in D-LDA are either redundant or potentially harmful. This second drawback has not been seriously addressed in previous studies.
In this section, we present a new feature extraction method for SSS problems, parameterized direct linear discriminant analysis (PD-LDA) (Song, Zhang, Wang, Liu, & Tao, 2007). As an improvement of D-LDA, PD-LDA inherits the advantages of D-LDA, namely its directness and efficiency, while greatly enhancing its accuracy and robustness.