An Empirical Evaluation of Feature Selection Methods


Mohsin Iqbal, Saif Ur Rehman, Saira Gillani, Sohail Asghar
DOI: 10.4018/978-1-4666-8513-0.ch012

Abstract

The key objective of this chapter is to study classification accuracy when feature selection is combined with machine learning algorithms. Feature selection reduces the dimensionality of the data and improves the accuracy of the learning algorithm. We test how integrated feature selection affects the accuracy of three classifiers by applying several feature selection methods. The results show that, among filters, Information Gain (IG), Gain Ratio (GR), and Relief-F, and, among wrappers, Bagging and Naive Bayes (NB), enabled the classifiers to achieve the highest average increase in classification accuracy while reducing the number of unnecessary attributes. These conclusions can advise machine learning users on which classifier and feature selection methods to use in order to optimize classification accuracy. This can be especially important in risk-sensitive applications of machine learning, where one aim is to reduce the costs of collecting, processing, and storing unnecessary data.
Chapter Preview

1. Introduction

Feature selection has been studied extensively in machine learning and data mining, with wide application in gene expression microarray analysis, image analysis, and text processing. It is of crucial importance in these areas because it improves the predictive performance of learning models by eliminating redundant, irrelevant, and noisy variables; it yields simpler models that are easier to interpret for complex random processes; it reduces the cost of collecting large numbers of experimental measurements in practice; and it reveals subsets of variables that can be studied more closely for causal inference. Feature selection (also known as variable selection, subspace selection, or dimensionality reduction) is the procedure of selecting a subset of the original feature set by eliminating redundant and less informative features, so that only the most discriminative features remain (Morita et al., 2003). Feature selection thus (i) improves the prediction performance of the predictor, (ii) makes the predictor faster and more cost-effective, and (iii) provides a better understanding of the underlying process that generated the data (Guyon & Elisseeff, 2003).

An irrelevant or noisy feature provides no valuable information for predicting the target concept, and a redundant feature adds no information beyond what is already available (Dash & Liu, 1997). Feature subset selection helps in a number of ways: it removes ineffective features to save computation time and data storage, it improves predictive performance and helps prevent overfitting, and it yields a more concise description of the target concept. Feature selection is a combinatorial optimization problem, since the space of subsets of N features can be far too large for exhaustive search. There are two main types of feature selection methods, i.e. filter methods and wrapper methods (Guyon & Elisseeff, 2003; Dash & Liu, 1997; Isabelle, 2003).
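
To make per-feature scoring concrete, the following sketch computes Information Gain, one of the filter measures named in the abstract, for a single discrete feature. The toy arrays and helper functions are illustrative assumptions, not part of the chapter's experiments.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(Y) of a 1-D array of class labels, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y | X) for a discrete feature X."""
    conditional = sum(
        np.mean(feature == v) * entropy(labels[feature == v])
        for v in np.unique(feature)
    )
    return entropy(labels) - conditional

# Toy example: a feature that perfectly separates the two classes scores 1 bit.
x = np.array([0, 0, 1, 1])
y = np.array(["a", "a", "b", "b"])
print(information_gain(x, y))  # 1.0
```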

Filter-based methods evaluate each feature independently of any classifier, using a statistical measure. Compared to other methods, filter-based methods are lightweight, very efficient, and fast to compute. Wrapper-based methods, on the other hand, assess the quality of a feature subset with a specific learning algorithm, using internal cross-validation to evaluate the usefulness of the selected subset in combination with some (typically heuristic) search method. Wrapper methods are much slower and more expensive than filter methods, but they are the best in terms of predictive accuracy (Yu & Huan, 2003).
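
As a hedged illustration of this filter/wrapper contrast, the sketch below uses scikit-learn (assumed to be available); the synthetic data, the choice of mutual information as the filter score, and Naive Bayes as the wrapped learner are stand-ins, not the chapter's exact experimental setup.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import (
    SelectKBest, SequentialFeatureSelector, mutual_info_classif,
)
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

# Filter: score every feature independently of any classifier, keep the top k.
filter_sel = SelectKBest(score_func=mutual_info_classif, k=5).fit(X, y)
print("filter keeps:", filter_sel.get_support(indices=True))

# Wrapper: search feature subsets, judging each one by the cross-validated
# accuracy of a specific learner (here Naive Bayes); slower but learner-aware.
wrapper_sel = SequentialFeatureSelector(
    GaussianNB(), n_features_to_select=5, direction="forward", cv=5
).fit(X, y)
print("wrapper keeps:", wrapper_sel.get_support(indices=True))
```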

Feature selection is also useful within the data analysis process, as it shows which features matter for prediction and how these features are related. Irrelevant and redundant features can severely degrade the accuracy of a learning machine, so feature subset selection should identify and remove as much of the irrelevant and redundant information as possible. Many feature subset selection methods have been proposed and studied for machine learning applications, and existing approaches generally belong to the two categories introduced above: wrappers and filters. Wrappers include the target classifier as part of their performance evaluation, while filters employ evaluation functions independent of the target classifier. Since wrappers train a classifier to evaluate each feature subset, they are far more computationally intensive than filters. Hence, filters are more practical than wrappers in high-dimensional feature spaces: their computational complexity is low, although the accuracy of the learning algorithm is not guaranteed.

In this chapter, we experiment with an alternative approach that iteratively removes, one after another, the feature with the worst estimated quality. In each iteration it builds a classifier model, which plays a central role in the procedure, and computes its accuracy. After all iterations have been performed, we obtain the feature set that enables the classifier to reach its maximum classification accuracy on the training data. In this evaluation, we begin by discovering which feature selection method is the most successful, i.e., which can enable the classifier to reach its highest accuracy while removing the largest number of unimportant features. A minimal sketch of this backward-elimination idea is given below.
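
The following sketch of the backward-elimination loop again assumes scikit-learn; the classifier, synthetic data, and cross-validation settings are placeholders rather than the chapter's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=15,
                           n_informative=5, random_state=0)
clf = GaussianNB()

remaining = list(range(X.shape[1]))
best_score = cross_val_score(clf, X[:, remaining], y, cv=5).mean()
best_subset = list(remaining)

# Repeatedly drop the feature whose removal hurts accuracy the least
# (i.e. the one of worst estimated quality), remembering the best subset seen.
while len(remaining) > 1:
    candidates = []
    for drop in remaining:
        kept = [f for f in remaining if f != drop]
        score = cross_val_score(clf, X[:, kept], y, cv=5).mean()
        candidates.append((score, drop))
    score, worst = max(candidates)
    remaining.remove(worst)
    if score >= best_score:
        best_score, best_subset = score, list(remaining)

print("selected features:", best_subset)
print("cross-validated accuracy:", round(best_score, 3))
```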
