Composition of Local Normal Coordinates and Polyhedral Geometry in Riemannian Manifold Learning

Gastão F. Miranda Jr., Gilson Giraldi, Carlos E. Thomaz, Daniel Millán
Copyright: © 2015 | Pages: 32
DOI: 10.4018/ijncr.2015040103

Abstract

The Local Riemannian Manifold Learning (LRML) method recovers the topology and geometry of the manifold behind database samples through normal coordinate neighborhoods computed by the exponential map. In addition, LRML uses barycentric coordinates to map points from the parameter space back to the Riemannian manifold in order to perform manifold synthesis. Despite the advantages of LRML, the obtained parameterization cannot be used as a representational space without ambiguities. Moreover, the synthesis process needs a simplicial decomposition of the lower-dimensional domain to be performed efficiently, which is not considered in the LRML proposal. In this paper, the authors address these drawbacks of LRML by using a composition procedure to combine the normal coordinate neighborhoods and build a suitable representational space. They also incorporate a polyhedral geometry framework into the LRML method to provide an efficient background for the synthesis process and data analysis. In the computational experiments, the authors verify the efficiency of LRML combined with the composition and discrete geometry frameworks for dimensionality reduction, synthesis and data exploration.
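To make the synthesis step concrete, the following is a minimal sketch of barycentric-coordinate synthesis over a simplicial decomposition of the parameter domain. It assumes parameter-space points Y (n x d), the corresponding manifold samples X (n x D) and a query point y_query given as NumPy arrays, and uses scipy.spatial.Delaunay for the simplicial decomposition; the function name and structure are illustrative only, not the authors' implementation.

import numpy as np
from scipy.spatial import Delaunay

def synthesize(Y, X, y_query):
    # Illustrative sketch: map a point of the d-dimensional parameter space
    # back to the D-dimensional ambient space by barycentric interpolation
    # inside a simplex of the Delaunay triangulation of the parameter points.
    tri = Delaunay(Y)                       # simplicial decomposition of the parameter domain
    s = int(tri.find_simplex(y_query))      # simplex containing the query (-1 if outside)
    if s < 0:
        raise ValueError("query point lies outside the triangulated domain")
    T = tri.transform[s]                    # affine map to barycentric coordinates, shape (d+1, d)
    b = T[:-1] @ (y_query - T[-1])          # first d barycentric coordinates
    w = np.append(b, 1.0 - b.sum())         # all d+1 weights, summing to one
    return w @ X[tri.simplices[s]]          # convex combination of the simplex vertex samples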

1. Introduction

Many areas such as computer vision, signal processing and medical image analysis require managing data sets with a large number of features or dimensions. Therefore, dimensionality reduction may be necessary in order to discard redundancy and reduce the computational cost of further operations, Lee & Verleysen (2007).

We may distinguish two major classes of dimensionality reduction methods: linear and nonlinear. The former includes the classical principal component analysis (PCA), linear discriminant analysis (LDA) and multidimensional scaling (MDS), Engel et al. (2012), Hastie et al. (2001), Cox & Cox (2001). Linear techniques seek new variables that obey some optimization criterion and can be expressed as linear combinations of the original ones. That is why they fail when the input data has curved or nonlinear structures. These methods can also be classified as subspace learning methods in the sense that the output linear space contains an optimum subspace for compact data representation.
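As a reference point for the linear case, the following is a minimal PCA sketch in NumPy: the new variables are linear combinations of the original ones, given by the leading right singular vectors of the centered data. The function name and the choice of SVD are illustrative assumptions, not tied to the paper.

import numpy as np

def pca(X, d):
    # Project the rows of X (samples in R^D) onto the d leading principal
    # directions, i.e. the d-dimensional linear subspace of maximum variance.
    Xc = X - X.mean(axis=0)                           # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:d].T                                      # D x d orthonormal basis of the subspace
    return Xc @ W, W                                  # low-dimensional coordinates and the basis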

In this paper we focus on nonlinear dimensionality reduction methods, which can be classified into global and local categories. Kernel PCA (KPCA), kernel LDA (KLDA) and kernel Fisher discriminant analysis (KFD) are well-known global nonlinear dimensionality reduction methods that map the original input data into a feature space by a (global) non-linear mapping, where inner products in the feature space can be computed by a kernel function in the input space without explicitly knowing the non-linear mapping, Baudat & Anouar (2000), Park & Park (2005), Scholkopf et al. (1998). Laplacian Eigenmap and Isomap can also be considered global techniques because they work on global structures computed through a graph built over all database samples, Lee & Verleysen (2007), Belkin & Niyogi (2003), Tenenbaum et al. (2000). On the other hand, local methods attempt to preserve the structure of the data by seeking to map nearby data points to nearby points in the low-dimensional representation. The global manifold information is then recovered by minimizing the overall reconstruction error. Traditional manifold learning techniques such as Locally Linear Embedding (LLE), Local Tangent Space Alignment (LTSA) and Hessian Eigenmaps, as well as the more recent Local Riemannian Manifold Learning (LRML), belong to this category of nonlinear dimensionality reduction methods, Goldberg et al. (2008), Roweis & Saul (2000), Junior et al. (2013).
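The kernel trick mentioned above can be illustrated with a short kernel PCA sketch: a kernel function plays the role of the feature-space inner product, the kernel matrix is centered, and its leading eigenvectors give the nonlinear embedding. The Gaussian kernel, the gamma parameter and the function name are assumptions made only for this example.

import numpy as np

def kernel_pca(X, d, gamma=1.0):
    # Gaussian (RBF) kernel: feature-space inner products are evaluated as
    # k(x, y) = exp(-gamma * ||x - y||^2) without building the mapping itself.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                    # center the kernel matrix in feature space
    vals, vecs = np.linalg.eigh(Kc)                   # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:d]                  # indices of the d leading components
    alphas = vecs[:, idx] / np.sqrt(vals[idx])        # normalized expansion coefficients
    return Kc @ alphas                                # d-dimensional embedding of the samples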

The main point behind manifold learning techniques is the assumption that the input data lies on a low-dimensional manifold embedded in a high-dimensional space. Therefore, we need to learn the underlying intrinsic manifold geometry in order to address the problem of dimensionality reduction. Thus, instead of seeking an optimum linear subspace, manifold learning methods try to discover an embedding procedure that describes the intrinsic similarities of the data. Manifold-based high-dimensional data analysis has been applied to several problems related, for instance, to face analysis, pattern recognition, age estimation, character recognition, computer vision and hyperspectral data, Lunga et al. (2014), Lee & Verleysen (2007), Zhang et al. (2004), Lin & Zha (2008a).
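As a small usage illustration of this assumption (not part of the paper's experiments), a local method such as LLE can recover a 2-D parameterization of the classical swiss-roll data set, a 2-D manifold embedded in R^3; the scikit-learn calls and parameter values below are assumptions chosen only for the example.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Samples from a 2-D manifold (the swiss roll) embedded in R^3.
X, _ = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# A local method builds overlapping neighborhoods and aligns them into
# a single 2-D embedding by minimizing the overall reconstruction error.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)      # low-dimensional representation, shape (1500, 2)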
