Comparison of Methods to Display Principal Component Analysis, Focusing on Biplots and the Selection of Biplot Axes


Carla Barbosa, M. Rui Alves, Beatriz Oliveira
DOI: 10.4018/978-1-4666-8823-0.ch010

Abstract

Principal components analysis (PCA) is probably the most important multivariate statistical technique, used to model complex problems or simply for data mining in almost all areas of science. Although well known to researchers and available in most statistical packages, it is often misunderstood and poses problems when applied by inexperienced users. A biplot is a way of concentrating all information related to sample units and variables in a single display, in an attempt to help interpretations and avoid overestimations. This chapter covers the main mathematical aspects of PCA, as well as the form and covariance biplots developed by Gabriel and the predictive and interpolative biplots devised by Gower and coworkers. New developments are also presented, involving techniques to automate the production of biplots, with controlled output in terms of axis predictivities and interpolative accuracies, supported by the AutoBiplot.PCA function developed in R. A practical case is used for illustration and discussion.
Chapter Preview

1. Introduction

Principal components analysis, which will be referred to as PCA, is one of the most important multivariate analysis techniques, used profusely by researchers throughout the world and in many fields of science. Its origins may be traced back to Pearson (1901) and mainly to Hotelling (1933a, 1933b), who gave it its current formulation through the eigenvalue decomposition (or spectral decomposition) of a covariance matrix. With the advent of powerful computers and software, PCA remains the basis for many developments in multivariate statistics. Among the many references available on PCA, the review by Dunteman (1989) is recommended for a quick overview, as well as the textbook by Jolliffe (2002), which is a comprehensive treatment of PCA, including its history, mathematical developments, examples of applications and relationships with other multivariate techniques. The importance of PCA stems from the fact that it simplifies complex data matrices by reducing the number of variables necessary for the interpretation of a given problem, in a process known as parsimonious description of the data, reduction in dimension, or data compression. This simplification is achieved without losing relevant information.
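As a minimal sketch of the spectral-decomposition route just described, the following base R code (using the built-in USArrests data purely as an illustrative assumption, not data from the chapter) obtains the principal components by eigen-decomposing the covariance matrix of standardized data and cross-checks the explained variances against prcomp().

X <- scale(USArrests)                      # standardize the data matrix
S <- cov(X)                                # covariance (here, correlation) matrix
eig <- eigen(S)                            # spectral (eigenvalue) decomposition
scores <- X %*% eig$vectors                # principal component scores
round(eig$values / sum(eig$values), 3)     # proportion of variance per component
summary(prcomp(USArrests, scale. = TRUE))  # built-in routine gives the same variances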

However, although it is a well-known statistical technique, PCA involves some problems that are often overlooked and sometimes not well understood. One of the main problems of PCA is that final solutions usually require interpretations subject to some individual judgement, which may easily lead non-statisticians to erroneous conclusions, among which overestimations are the most common. Moreover, with the use of modern computers and sophisticated statistical software, these problems may assume larger proportions, unless some mathematical work is carried out in order to control and automate PCA outputs.

After analyzing the advantages, problems and pitfalls of PCA, biplots will be presented, starting with a special reference to the pioneering work of Gabriel (1971, 1972, 1981). The approaches presented in the book by Greenacre (2010), with reviews of Gabriel's biplots and extensions to almost all multivariate analyses, will be followed. These biplots are a way of displaying PCA results in a single graph containing the information on both variables and sample units. In this way, it is possible to interpret PCA results by relating sample units directly to the initial measurement variables, removing the need for the intermediate step of principal component interpretation and thus reducing to some extent the randomness in judgements. As the type of biplot developed by Gabriel (1971) is nowadays available in many statistical packages, e.g., Statistica (Statsoft, 2014) and SPSS (IBM, 2014), it deserves some attention, and its variants, advantages and pitfalls will be highlighted. In order to produce Gabriel's biplots, the authors wrote a function in the R language, called Gabriel.PCA, which produces two types of Gabriel's biplots. This function is available to interested users.
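The Gabriel.PCA function itself is not reproduced in this preview. As a hedged, minimal sketch, the two classical Gabriel variants can be approximated in base R by applying biplot() to a prcomp object, where the scale argument selects the variant; the USArrests data set is again an illustrative assumption only.

pc <- prcomp(USArrests, center = TRUE, scale. = TRUE)  # PCA on standardized data
biplot(pc, scale = 1)  # covariance biplot: variable (column) metric preserved
biplot(pc, scale = 0)  # form biplot: sample-unit (row) metric preserved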

Key Terms in this Chapter

Predictivity: Important in predictive biplots. The property by which the original value of a sample unit on a variable can be read off directly from its position in the plot.

MSIE: Stands for mean standard interpolative error. A measure of the absolute error expected to be committed in the interpolative biplot.

Biplot: A methodology developed to merge two plots into a single display: one plot refers to the relationships between sample units and the underlying factors; the other refers to the relationships between the measured (declared) variables and the underlying factors.

PE: Stands for predictive error. The actual absolute error committed when reading off the value of a sample unit in relation to a predictive biplot axis.

MSPE: Stands for mean standard predictive error. Considering all sample units in a predictive biplot, if readings are carried out for all units in relation to a given biplot axis, the MSPE is the average error committed, as evaluated by the algorithm on the standardized data (a rough numerical analogue is sketched after these key terms).

IAI: Stands for interpolative accuracy index. A standard measure of how accurate interpolation may be, which can be used to evaluate the impact of the introduction of new biplot axes.

PCA: Stands for principal components analysis, a multivariate analysis applied to a data matrix with no groups defined, which compresses the data onto the most relevant underlying factors.
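As a purely illustrative sketch, and not the authors' algorithm, the spirit of PE and MSPE can be mimicked in base R by comparing a standardized data matrix with its rank-2 PCA reconstruction; the USArrests data and the per-variable mean absolute error below are illustrative assumptions only.

X <- scale(USArrests)                            # standardized data matrix
pc <- prcomp(X)                                  # PCA of the standardized data
k <- 2                                           # dimensionality of the biplot
Xhat <- pc$x[, 1:k] %*% t(pc$rotation[, 1:k])    # rank-2 reconstruction (read-off values)
round(colMeans(abs(X - Xhat)), 3)                # MSPE-like mean absolute error per variable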
