Adaptive Neural Algorithms for PCA and ICA

Radu Mutihac
Copyright: © 2009 | Pages: 9
ISBN13: 9781599048499 | ISBN10: 1599048493 | EISBN13: 9781599048505
DOI: 10.4018/978-1-59904-849-9.ch004
Cite as: Mutihac, R. (2009). Adaptive Neural Algorithms for PCA and ICA. In J. R. Rabuñal Dopico, J. Dorado, & A. Pazos (Eds.), Encyclopedia of Artificial Intelligence (pp. 22-30). IGI Global. https://doi.org/10.4018/978-1-59904-849-9.ch004

Abstract

Artificial neural networks (ANNs) (McCulloch & Pitts, 1943; Haykin, 1999) were developed as models of their biological counterparts, aiming to emulate real neural systems and to mimic the structural organization and function of the human brain. Their applications rest on the networks' ability to self-design solutions to problems by learning them from data. A comparative study of neural implementations running principal component analysis (PCA) and independent component analysis (ICA) was carried out. Artificially generated data, additively corrupted with white noise to enforce randomness, were employed to critically evaluate and assess the reliability of the data projections. Analysis in both the time and frequency domains showed the superiority of the estimated independent components (ICs) over the principal components (PCs) in faithfully retrieving the genuine (latent) source signals.

Neural computation is a branch of information processing concerned with adaptive, parallel, and distributed (localized) signal processing. In data analysis, a common task is to find an adequate subspace of multivariate data for subsequent processing and interpretation. Linear transforms are frequently employed in data model selection because of their computational and conceptual simplicity. Common linear transforms are PCA, factor analysis (FA), projection pursuit (PP), and, more recently, ICA (Comon, 1994). The latter emerged as an extension of nonlinear PCA (Hotelling, 1933) and developed in the context of blind source separation (BSS) (Cardoso, 1998) in signal and array processing. ICA is also related to recent theories of the visual brain (Barlow, 1991), which assume that consecutive processing steps lead to a progressive reduction in the redundancy of representation (Olshausen & Field, 1996). This contribution is an overview of PCA and ICA neuromorphic architectures and their associated algorithmic implementations, which are increasingly used as exploratory techniques. The discussion is conducted on artificially generated sub- and super-Gaussian source signals.

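The setup described in the abstract lends itself to a compact illustration. The sketch below is not the chapter's code; it is a minimal Python example, under assumed settings (Laplacian and uniform sources, a 2x2 mixing matrix, illustrative learning rates), of the kind of adaptive neural PCA update the chapter surveys (Oja's single-unit rule), followed by whitening and a simple kurtosis-driven one-unit rule standing in for an adaptive ICA step.

```python
# Minimal sketch (assumed settings, not the chapter's code): artificial
# sub-/super-Gaussian sources with additive white noise, a linear mixture,
# an adaptive PCA update (Oja's single-unit rule), and a simple
# kurtosis-driven one-unit rule as a stand-in for an adaptive ICA step.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 5000

# Latent sources: a super-Gaussian (Laplacian) and a sub-Gaussian (uniform)
# signal, additively corrupted with white Gaussian noise.
s_super = rng.laplace(size=n_samples)
s_sub = rng.uniform(-1.0, 1.0, size=n_samples)
S = np.vstack([s_super, s_sub])
S += 0.05 * rng.standard_normal(S.shape)

# Linear instantaneous mixing, as in the BSS setting: x = A s
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S
X -= X.mean(axis=1, keepdims=True)      # centre the observations

# Oja's adaptive rule for the first principal component:
#   w <- w + eta * y * (x - y * w),  with  y = w^T x
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 1e-3
for x in X.T:
    y = w @ x
    w += eta * y * (x - y * w)

# Sanity check against the leading eigenvector of the sample covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X))
w_batch = eigvecs[:, -1]
print(f"|cos| between adaptive and batch first PC: "
      f"{abs(w @ w_batch) / np.linalg.norm(w):.4f}")

# Whitening (a common ICA pre-processing step): z = V x has unit covariance.
V = np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
Z = V @ X

# One-unit, kurtosis-driven update in the whitened space; with a fixed sign
# it converges towards the most super-Gaussian direction (the Laplacian source).
wi = rng.standard_normal(2)
wi /= np.linalg.norm(wi)
for z in Z.T:
    y = wi @ z
    wi += 1e-3 * (y ** 3) * z
    wi /= np.linalg.norm(wi)
print("estimated IC direction (whitened space):", wi)
```

Full adaptive ICA algorithms replace the fixed kurtosis nonlinearity with more robust contrast functions and adapt a complete demixing matrix, but the overall structure (centring, whitening, and an online neural update) is the same.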