Equivalence Between LDA/QR and Direct LDA

Rong-Hua Li, Shuang Liang, George Baciu, Eddie Chan
DOI: 10.4018/jcini.2011010106

Abstract

Singularity problems of scatter matrices in Linear Discriminant Analysis (LDA) are challenging and have received considerable attention during the last decade. Linear Discriminant Analysis via QR decomposition (LDA/QR) and Direct Linear Discriminant Analysis (DLDA) are two popular algorithms for solving the singularity problem. This paper establishes the equivalence relationship between LDA/QR and DLDA: both can be regarded as special cases of pseudo-inverse LDA. Similar to the LDA/QR algorithm, DLDA can also be considered a two-stage LDA method, and, interestingly, the first stage of DLDA can act as a dimension reduction algorithm on its own. Experiments compare the LDA/QR and DLDA algorithms, as well as their first stages, in terms of classification accuracy and computational complexity on several benchmark datasets. The results confirm the established equivalence relationship and verify the capabilities of the first stages in dimension reduction.

1. Introduction

The classical Linear Discriminant Analysis (LDA) algorithm usually suffers from the singularity problem of scatter matrices (Ye et al., 2004; Chen et al., 2000). Linear Discriminant Analysis via QR decomposition (LDA/QR) (Ye & Qi, 2000; Ye & Li, 2004) and Direct Linear Discriminant Analysis (DLDA) (Yu & Yang, 2005) are two LDA algorithms designed to address this singularity problem. This paper proves the equivalence relationship between these two algorithms.

LDA/QR is an extension of LDA for coping with the singularity problem and was developed in (Ye & Qi, 2000). It improves efficiency by applying QR decomposition to a small-sized matrix, while achieving competitive discriminant performance. LDA/QR proceeds in two stages: 1) in the first stage, the separation between different classes is maximized by applying QR decomposition to a small-sized matrix; note that this stage can be used independently as a dimension reduction algorithm, in which case it is called Pre-LDA/QR (Ye & Li, 2004); 2) the second stage refines the result by applying LDA to the so-called reduced scatter matrices (Ye & Qi, 2000) resulting from the first stage. The solution obtained by LDA/QR has been proven equivalent to that of the generalized LDA known as pseudo-inverse LDA (Ye et al., 2004; Zhang et al., 2005). A sketch of the two-stage procedure follows below.
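To make the two-stage structure concrete, the following is a minimal NumPy sketch of LDA/QR under the standard scatter-matrix definitions. The function name lda_qr and all variable names are illustrative assumptions, not identifiers from the original paper; the pseudo-inverse in the second stage reflects the pseudo-inverse LDA connection noted above.

import numpy as np

def lda_qr(X, y):
    # Minimal sketch of two-stage LDA/QR (illustrative, not the authors' code).
    # X: (n, d) data matrix; y: (n,) integer class labels.
    classes = np.unique(y)
    k = len(classes)
    # Centroid matrix with one column per class centroid (d x k).
    C = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)

    # Stage 1 (Pre-LDA/QR): economy-size QR of the small d x k centroid
    # matrix; projecting onto Q alone already reduces the dimension to k.
    Q, _ = np.linalg.qr(C)                  # Q: (d, k), orthonormal columns

    # Stage 2: classical LDA on the QR-reduced data, i.e., on the
    # k x k reduced scatter matrices.
    Z = X @ Q
    mu = Z.mean(axis=0)
    Sb = np.zeros((k, k))
    Sw = np.zeros((k, k))
    for c in classes:
        Zc = Z[y == c]
        diff = (Zc.mean(axis=0) - mu)[:, None]
        Sb += len(Zc) * diff @ diff.T
        Sw += (Zc - Zc.mean(axis=0)).T @ (Zc - Zc.mean(axis=0))

    # Pseudo-inverse guards against a singular reduced Sw.
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)[: k - 1]
    return Q @ evecs[:, order].real         # final (d, k-1) transform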

DLDA (Yu & Yang, 2005) is based on the observation that the null space of the between-class scatter matrix S_b carries little useful discriminative information (Chen et al., 2000). Accordingly, it first diagonalizes S_b and uses traditional eigen-analysis to discard the zero eigenvalues together with their corresponding eigenvectors. It then diagonalizes the within-class scatter matrix S_w but keeps the zero eigenvalues, since the null space of S_w carries the most useful discriminant information. Hence DLDA can make use of all discriminant information. One advantage of DLDA is that it can be applied directly to a high-dimensional input space without any intermediate dimension reduction stage such as PCA (Martinez & Kak, 2001; Turk & Pentland, 1996; Yang & Yang, 2006), which is used in the preprocessing stage of the well-known fisherfaces face recognition algorithm (Belhumeur et al., 1997; Jing et al., 2006).
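The two diagonalization steps can be sketched in the same style; again a minimal NumPy illustration under the standard scatter definitions (the name dlda and the tolerance tol are assumptions). For truly high-dimensional data the explicit d x d scatter matrices below would in practice be handled implicitly through smaller factored forms.

import numpy as np

def dlda(X, y, tol=1e-10):
    # Minimal sketch of Direct LDA (illustrative, not the authors' code).
    # X: (n, d) data matrix; y: (n,) integer class labels.
    classes = np.unique(y)
    d = X.shape[1]
    mu = X.mean(axis=0)

    # Between-class (Sb) and within-class (Sw) scatter matrices.
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        diff = (Xc.mean(axis=0) - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))

    # Step 1: diagonalize Sb and discard its null space (zero eigenvalues),
    # which carries little discriminative information.
    lam, V = np.linalg.eigh(Sb)
    keep = lam > tol
    Y = V[:, keep] / np.sqrt(lam[keep])     # whitens Sb: Y.T @ Sb @ Y = I

    # Step 2: diagonalize the projected Sw but KEEP its zero eigenvalues,
    # since the null space of Sw holds the most discriminative directions.
    dw, U = np.linalg.eigh(Y.T @ Sw @ Y)    # eigenvalues in ascending order
    return Y @ U                            # smallest within-class spread first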
