Class-Dependent Principal Component Analysis

Oleg Okun
Copyright: © 2014 | Pages: 11
DOI: 10.4018/978-1-4666-5202-6.ch042

Chapter Preview

Introduction

Principal Component Analysis (PCA) (Jolliffe, 2002) is one of the most popular methods for dimensionality reduction and is often used in Predictive Analytics tasks. However, it is an unsupervised technique: it ignores class membership information when projecting data from the original space into a lower-dimensional space. Therefore, when PCA precedes data classification, one cannot be certain that classification in the reduced space is more accurate than classification in the original space, because the dimensionality reduction is carried out without regard to the classification task.
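
For contrast with the supervised variants discussed next, here is a minimal NumPy sketch of this class-ignorant projection (an illustration, not code from the chapter); X is assumed to be a samples-by-features data matrix, and class labels are deliberately unused.

```python
import numpy as np

def pca_project(X, n_components):
    """Project X onto its first n_components principal directions (unsupervised)."""
    X_centered = X - X.mean(axis=0)           # PCA directions are defined on centered data
    # Rows of Vt are the principal directions, ordered by decreasing explained variance.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T   # coordinates in the reduced space
```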

To address this problem, various authors have proposed supervised variants of PCA that utilize information about the labels of instances (Bair, Hastie, Paul, & Tibshirani, 2006; Chen, Wang, Smith, & Zhang, 2008; Das & Nenadic, 2008; Barshan, Ghodsi, Azimifar, & Jahromi, 2011; Wu, Bowers, Huynh, & Souvenir, 2013; Cai et al., 2013).

As an example of this type of algorithm, the work of Das and Nenadic (2008) is presented in detail in this chapter. Das and Nenadic (2008) proposed an algorithm in which a principal subspace is found for each class of data, independently of the other classes. Test data are then projected into each principal subspace, and Bayes' rule determines which class, in which subspace, is associated with the maximum posterior probability. Dimensionality reduction is thus combined with classification.
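
The following is a hedged sketch of this per-class scheme in NumPy. It does not reproduce the exact probability model of Das and Nenadic (2008); as a stand-in assumption, each class is scored by a diagonal Gaussian likelihood fitted in its own principal subspace and combined with the class prior via Bayes' rule, and all function names are illustrative.

```python
import numpy as np

def fit_class_subspaces(X, y, n_components):
    """Fit one principal subspace per class, independently of the other classes."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mean = Xc.mean(axis=0)
        # Right singular vectors of the centered class data span its principal subspace.
        _, _, Vt = np.linalg.svd(Xc - mean, full_matrices=False)
        W = Vt[:n_components]                 # class-specific principal directions
        Z = (Xc - mean) @ W.T                 # training data projected into the subspace
        var = Z.var(axis=0) + 1e-9            # per-component variances (diagonal model is an assumption)
        prior = len(Xc) / len(X)              # class prior P(B_j)
        models[c] = (mean, W, var, prior)
    return models

def classify(x, models):
    """Project x into each class subspace and pick the maximum-posterior class."""
    best_class, best_log_post = None, -np.inf
    for c, (mean, W, var, prior) in models.items():
        z = W @ (x - mean)                    # projection into the class subspace
        # Log posterior up to a constant: log Gaussian likelihood + log prior.
        log_post = -0.5 * np.sum(z**2 / var + np.log(2 * np.pi * var)) + np.log(prior)
        if log_post > best_log_post:
            best_class, best_log_post = c, log_post
    return best_class
```

A test instance is thus assigned to the class whose subspace explains it best under the fitted likelihood and prior, mirroring the maximum-posterior decision described above.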

Das and Nenadic argued that partitioning the original space into multiple linear subspaces leads to more accurate classification than conventional holistic PCA, where a single linear subspace is used for all classes of data. Their motivation was based on the assumption that projection onto a single linear subspace is inadequate when different classes overlap strongly. In such cases, class-dependent PCA has a better chance of succeeding where class-ignorant PCA fails.

Key Terms in this Chapter

Class-Dependent PCA: A type of Principal Component Analysis (PCA) that relies on class labels.

Principal Component Analysis (PCA): A dimensionality reduction method that transforms a set of possibly correlated variables into a new set of uncorrelated variables called principal components, each of which is a linear combination of the original variables. The first principal component has the largest possible variance; the second principal component is orthogonal to the first one and has the second largest variance, etc.

Gram-Schmidt Process: A method for orthonormalizing a set of vectors spanning an inner product space (a minimal code sketch follows this list of terms).

Classification: A type of computational problem in which the goal is to assign an observation or instance to one of a set of known classes.

Bayes Rule: A formula for revising and updating the probability of an event in the light of new evidence,

$P(B_j \mid A) = \dfrac{P(A \mid B_j)\, P(B_j)}{\sum_{i=1}^{k} P(A \mid B_i)\, P(B_i)},$

where $P(A \mid B_j)$ is the conditional probability of event $A$ given event $B_j$, and $B_1, B_2, \ldots, B_k$ are mutually exclusive and exhaustive events. $P(B_j)$ is the prior probability and $P(B_j \mid A)$ is the posterior probability.

Dimensionality Reduction: A transformation that reduces the number of variables describing each observation or instance.
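
As a complement to the Gram-Schmidt entry above, here is a minimal NumPy sketch of the classical Gram-Schmidt process; it is illustrative only, and the function name and tolerance are assumptions rather than anything specified in the chapter.

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the rows of V (classical Gram-Schmidt); illustrative sketch."""
    V = np.asarray(V, dtype=float)
    basis = []
    for v in V:
        # Remove the components of v that lie along the basis built so far.
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-10:  # drop (near-)linearly dependent vectors; tolerance is an assumption
            basis.append(w / norm)
    return np.array(basis)
```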
