Feature Selection and Ranking

Boris Igelnik
DOI: 10.4018/978-1-60960-551-3.ch015

Abstract

This chapter describes a method of feature selection and ranking based on human expert knowledge and on training and testing of a neural network. Being computationally efficient, the method is less sensitive to round-off errors and noise in the data than traditional methods of feature selection and ranking grounded in sensitivity analysis. The method may lead to a significant reduction of the search space in tasks of modeling, optimization, and data fusion.
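
To make the general idea concrete, the sketch below ranks features by the change in test error when each feature is withheld from training, a generic wrapper-style procedure. It is not the method developed in this chapter, which also draws on expert knowledge and the CEA framework; the function rank_features and the train_fn interface are hypothetical names introduced only for illustration.

import numpy as np

def rank_features(train_fn, X_train, y_train, X_test, y_test):
    # train_fn(X, y) is assumed to return a callable predictor; this
    # interface is hypothetical, chosen only to keep the sketch generic.
    def test_error(cols):
        model = train_fn(X_train[:, cols], y_train)
        pred = model(X_test[:, cols])
        return np.mean((pred - y_test) ** 2)

    all_cols = list(range(X_train.shape[1]))
    baseline = test_error(all_cols)
    # Importance of feature j = increase in test error when j is withheld
    scores = [test_error([c for c in all_cols if c != j]) - baseline
              for j in all_cols]
    return np.argsort(scores)[::-1]  # feature indices, most important first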

Background

The CEA method (Igelnik et al., 1995-2003; Igelnik, 2009) was eventually developed for adaptive dynamic modeling of large sets of time-variant multidimensional data. It includes different neural network architectures, both traditional (Haykin, 1994; Bishop, 1995) and non-traditional (Igelnik & Parikh, 2003a; Igelnik, 2009). The latter are based on the use of splines (Prenter, 1975; Bartels et al., 1987) and have basis functions with adaptively adjusted shape and parameters, whereas the former have fixed-shape basis functions and adaptively adjusted parameters. We do not use the non-traditional architectures in this chapter, but rely on the traditional radial basis function (RBF) architecture with the Gaussian activation function.
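
For reference, the sketch below fits an RBF network with Gaussian basis functions in a simplified two-stage way: the centers and a common width are fixed in advance (centers drawn from the training samples), and only the linear output weights are estimated by least squares. This is a minimal illustration of a standard RBF fit, not the CEA training procedure, and it omits the adaptive adjustment of center and width parameters; all function names and the toy data are assumptions made for the example.

import numpy as np

def gaussian_rbf(X, centers, width):
    # Gaussian basis functions evaluated for every (sample, center) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, n_centers=20, width=1.0, seed=0):
    # Pick centers from the training samples and fit the linear output
    # weights by least squares; the basis-function shape stays fixed.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    Phi = gaussian_rbf(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, width, w

def predict_rbf(X, centers, width, w):
    return gaussian_rbf(X, centers, width) @ w

# Toy usage on synthetic data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
centers, width, w = fit_rbf(X, y, n_centers=20, width=1.5)
print(np.mean((predict_rbf(X, centers, width, w) - y) ** 2))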
