Parameterized Discriminant Analysis Methods

David Zhang (Hong Kong Polytechnic University, Hong Kong), Fengxi Song (New Star Research Institute Of Applied Technology, China), Yong Xu (Harbin Institute of Technology, China) and Zhizhen Liang (Shanghai Jiao Tong University, China)
DOI: 10.4018/978-1-60566-200-8.ch005

Abstract

In this chapter, we mainly present three kinds of weighted LDA methods. In Sections 5.1, 5.2, and 5.3, we respectively present parameterized direct linear discriminant analysis, weighted nullspace linear discriminant analysis, and weighted LDA in the range of the within-class scatter matrix. We offer a brief summary of the chapter in Section 5.4.
Chapter Preview

Parameterized Direct Linear Discriminant Analysis

Introduction

Direct LDA (D-LDA) (Yu & Yang, 2001) is an important feature extraction method for SSS problems. It first maps samples into the range of the between-class scatter matrix, and then transforms these projections using a series of regulating matrices. D-LDA can efficiently extract features directly from a high-dimensional input space without the need to first apply other dimensionality reduction techniques, such as the PCA transformation in Fisherfaces (Belhumeur, Hespanha, & Kriegman, 1997) or the pixel grouping in nullspace LDA (N-LDA) (Chen, Liao, Ko, Lin, & Yu, 2000), and as a result it has aroused the interest of many researchers in pattern recognition and computer vision. Indeed, there are now many extensions of D-LDA, such as fractional D-LDA (Lu, Plataniotis, & Venetsanopoulos, 2003a), regularized D-LDA (Lu, Plataniotis, & Venetsanopoulos, 2003b; Lu, Plataniotis, & Venetsanopoulos, 2005), kernel D-LDA (Lu, Plataniotis, & Venetsanopoulos, 2003c), and boosting D-LDA (Lu, Plataniotis, Venetsanopoulos, & Li, 2006).

There nonetheless remain some questions as to its usefulness as a facial feature extraction method. First, as has been pointed out in Lu, Plataniotis, and Venetsanopoulos (2003b) and Lu, Plataniotis, and Venetsanopoulos (2005), D-LDA performs badly when only two or three samples per individual are available. Second, the regulating matrices in D-LDA are either redundant or potentially harmful. This second drawback has not been seriously addressed in previous studies.

In this section, we present a new feature extraction method for SSS problems: parameterized direct linear discriminant analysis (PD-LDA) (Song, Zhang, Wang, Liu, & Tao, 2007). As an improvement of D-LDA, PD-LDA inherits the advantages of D-LDA, namely its direct and efficient operation, while greatly enhancing the accuracy and robustness of D-LDA.

Direct Linear Discriminant Analysis

The Algorithm of D-LDA

Let S_b and S_w denote the between-class and the within-class scatter matrices, respectively. The calculation procedure of D-LDA is as follows:

Step 1. Perform eigenvalue decomposition on the between-class scatter matrix S_b.

Let Λ be the eigenvalue matrix of S_b in decreasing order and V be the corresponding eigenvector matrix. It follows that

V^T S_b V = Λ. (1)

Let r be the rank of the matrix S_b. Let Y consist of the first r columns of V and D_b be the upper-left r × r block of Λ (the nonzero eigenvalues), and, setting Z = Y D_b^{-1/2}, we have

Z^T S_b Z = I. (2)

Step 2. Map each sample vector x to get its intermediate representation y = Z^T x using the projection matrix Z.

Step 3. Perform eigenvalue decomposition on the within-class scatter matrix of the projected samples, which is given by

S̃_w = Z^T S_w Z. (3)

Let D_w be the eigenvalue matrix of S̃_w in ascending order and U be the corresponding eigenvector matrix. It follows that

U^T S̃_w U = D_w. (4)

Step 4. Calculate the discriminant matrix W and map each sample x to W^T x. The discriminant matrix of D-LDA is given by

W = Z U D_w^{-1/2}. (5)
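The four steps above can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions, not the authors' code: the function name dlda, the rank tolerance eps, and the small regularizer added before the final scaling in Eq. (5) are choices of this sketch.

```python
import numpy as np

def dlda(X, y, eps=1e-10):
    """Sketch of direct LDA (Yu & Yang, 2001).

    X: (n_samples, n_features) data matrix; y: class labels.
    Returns the discriminant matrix W, so that the extracted
    features of a sample x are W.T @ x.
    """
    classes = np.unique(y)
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sb = np.zeros((d, d))                        # between-class scatter S_b
    Sw = np.zeros((d, d))                        # within-class scatter S_w
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
        Sw += (Xc - mc).T @ (Xc - mc)

    # Step 1: eigendecompose S_b; keep the r eigenvectors spanning its range.
    lam, V = np.linalg.eigh(Sb)                  # eigh returns ascending order
    lam, V = lam[::-1], V[:, ::-1]               # reorder to decreasing (Eq. 1)
    r = int(np.sum(lam > eps * max(lam[0], eps)))  # numerical rank of S_b
    Z = V[:, :r] @ np.diag(lam[:r] ** -0.5)      # Z = Y D_b^{-1/2}, Z.T Sb Z = I (Eq. 2)

    # Steps 2-3: within-class scatter of the projected samples (Eq. 3).
    Sw_tilde = Z.T @ Sw @ Z

    # Step 4: eigendecompose S~_w in ascending order (Eq. 4) and form W (Eq. 5).
    dw, U = np.linalg.eigh(Sw_tilde)             # already ascending
    W = Z @ U @ np.diag((dw + eps) ** -0.5)      # eps guards near-zero eigenvalues
    return W
```

Because np.linalg.eigh already returns eigenvalues in ascending order, Step 4 needs no reordering; only the Step 1 decomposition is flipped to match the decreasing order required by Eq. (1). Note the "direct" character of the method: the eigendecompositions operate on the raw input space, with no preliminary PCA stage.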
