A Survey of Feature Selection Techniques

Barak Chizi, Lior Rokach, Oded Maimon
Copyright: © 2009 | Pages: 8
ISBN13: 9781605660103 | ISBN10: 1605660108 | EISBN13: 9781605660110
DOI: 10.4018/978-1-60566-010-3.ch289
Cite Chapter

MLA

Chizi, Barak, et al. "A Survey of Feature Selection Techniques." Encyclopedia of Data Warehousing and Mining, Second Edition, edited by John Wang, IGI Global, 2009, pp. 1888-1895. https://doi.org/10.4018/978-1-60566-010-3.ch289

APA

Chizi, B., Rokach, L., & Maimon, O. (2009). A Survey of Feature Selection Techniques. In J. Wang (Ed.), Encyclopedia of Data Warehousing and Mining, Second Edition (pp. 1888-1895). IGI Global. https://doi.org/10.4018/978-1-60566-010-3.ch289

Chicago

Chizi, Barak, Lior Rokach, and Oded Maimon. "A Survey of Feature Selection Techniques." In Encyclopedia of Data Warehousing and Mining, Second Edition, edited by John Wang, 1888-1895. Hershey, PA: IGI Global, 2009. https://doi.org/10.4018/978-1-60566-010-3.ch289

Abstract

Dimensionality (i.e., the number of data set attributes or groups of attributes) constitutes a serious obstacle to the efficiency of most data mining algorithms (Maimon and Last, 2000), mainly because these algorithms are computationally intensive. This obstacle is sometimes known as the "curse of dimensionality" (Bellman, 1961). The objective of feature selection is to identify the important features in the data set and to discard every other feature as irrelevant or redundant information. Since feature selection reduces the dimensionality of the data, data mining algorithms can operate faster and more effectively. In some cases the performance of the data mining method can even improve, mainly because the target concept receives a more compact, easily interpreted representation.

There are three main approaches to feature selection: wrapper, filter and embedded. The wrapper approach (Kohavi, 1995; Kohavi and John, 1996) uses an inducer as a black box, together with a statistical re-sampling technique such as cross-validation, to select the best feature subset according to some predictive measure. The filter approach (Kohavi, 1995; Kohavi and John, 1996) operates independently of the data mining method employed subsequently: undesirable features are filtered out of the data before learning begins. These algorithms use heuristics based on general characteristics of the data to evaluate the merit of feature subsets. A sub-category of filter methods, referred to here as rankers, employs some criterion to score each feature and provide a ranking; from this ordering, several feature subsets can be chosen by manually setting a cut-off point. The embedded approach (see, for instance, Guyon and Elisseeff, 2003) is similar to the wrapper approach in that features are selected for a specific inducer, but the selection is carried out during the learning process itself.
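As an illustration only (not code from the chapter), the sketch below contrasts a ranker-style filter with a greedy forward wrapper, assuming scikit-learn is available: mutual information stands in for the scoring criterion, a decision tree for the black-box inducer, and cross-validated accuracy for the predictive measure. The toy data set, the cut-off k and the choice of inducer are all hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy data stands in for a real data set; all numbers here are arbitrary.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# Filter (ranker): score every feature independently of the later inducer,
# rank by the score, and keep the features above a manually chosen cut-off.
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]   # best-scoring feature first
k = 5                                # the manually set cut-off point
filter_subset = ranking[:k]

# Wrapper: treat the inducer as a black box and use cross-validation as the
# predictive measure, greedily adding whichever feature improves it most.
inducer = DecisionTreeClassifier(random_state=0)
selected, best_score = [], 0.0
improved = True
while improved:
    improved = False
    for f in range(X.shape[1]):
        if f in selected:
            continue
        score = cross_val_score(inducer, X[:, selected + [f]], y, cv=5).mean()
        if score > best_score:
            best_score, best_feature, improved = score, f, True
    if improved:
        selected.append(best_feature)

print("filter subset :", sorted(filter_subset.tolist()))
print("wrapper subset:", sorted(selected), "cv accuracy: %.3f" % best_score)
```

An embedded method, by contrast, would fold the selection into the learning step itself, for example through the importance weights a regularized linear model or a decision tree assigns to features while it is being trained.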
