Introduction
High-dimensional datasets such as the colon and prostate cancer datasets have far fewer patterns (samples) than features. Classification (or prediction) of such high-dimensional datasets is problematic because of the large number of features; hence, in recent decades, researchers have focused intensively on feature selection techniques. Classification is an indispensable part of data mining (Han & Kamber, 2006), machine learning (Mitchell, 1997), and pattern recognition (Duda et al., 2001); it is defined as the labeling of an unseen pattern based on some information or rules. Classifiers such as Support Vector Machines (Cortes & Vapnik, 1995), Naive Bayes (John & Langley, 1995), and Artificial Neural Networks (Haykin, 1999) train themselves on training data and then assign labels to unseen patterns presented to them. Since a classifier's performance depends on the training data, eliminating irrelevant, redundant, and noisy features is essential. Classification accuracy can be increased if a non-redundant, relevant, and noise-free dataset is used for learning. Conversely, the presence of irrelevant, redundant, and noisy features degrades classifier performance (commonly measured as accuracy); this effect is often termed the Curse of Dimensionality (Jain & Zongker, 1997). Removing irrelevant, redundant, and noisy features is termed Dimensionality Reduction (Liu & Motoda, 1998; Guyon et al., 2006; Saxena et al., 2009). Feature selection (Liu & Motoda, 1998; Saxena et al., 2009) and feature extraction are two well-known approaches to the dimensionality reduction problem. Processing the existing features of a dataset to obtain new features is termed feature extraction, while choosing a subset of the existing features, without creating any new ones, is known as feature selection. In this paper, a novel method is proposed to achieve feature selection in databases.
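The distinction between the two approaches can be illustrated with a minimal sketch (all variable names and the projection vector are hypothetical, chosen only for illustration): feature selection keeps a subset of the original columns unchanged, whereas feature extraction derives entirely new columns from the originals.

```python
import numpy as np

# A toy dataset with 2 patterns (rows) and 3 features (columns).
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Feature selection: keep a subset of the original columns as-is.
X_selected = X[:, [0, 2]]

# Feature extraction: compute new features from the originals,
# e.g. a projection onto a (hypothetical) direction w.
w = np.array([0.5, 0.5, 0.0])
X_extracted = X @ w.reshape(-1, 1)  # each new value mixes two old features
```

The selected matrix still contains original feature values, so it remains directly interpretable; the extracted matrix does not, which is one practical reason the paper pursues selection rather than extraction.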
Feature selection has extensive applications in medicine (Artioli et al., 1995), data mining (Han & Kamber, 2006), classification, and other areas (Maimon & Rokach, 2010).
In the proposed method, cosine similarity and mutual information are hybridized to remove redundant and irrelevant features. Supervised cosine similarity is used as a measure to group the features, and information gain is then used to select the most relevant feature from each group. This method thus provides a way to remove redundant features.
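The overall idea can be sketched as follows. Note that this is only an illustrative sketch, not the paper's actual algorithm: the paper's supervised cosine similarity and exact grouping procedure are detailed in Section 4, while the sketch below substitutes plain cosine similarity with a greedy similarity threshold, and estimates information gain by discretizing each feature into bins. All function names, the threshold, and the bin count are hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature columns."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def group_features(X, threshold=0.9):
    """Greedy grouping: a feature joins the first group whose
    representative (first member) is at least `threshold`-similar."""
    groups = []
    for j in range(X.shape[1]):
        for g in groups:
            if cosine_sim(X[:, j], X[:, g[0]]) >= threshold:
                g.append(j)
                break
        else:
            groups.append([j])
    return groups

def information_gain(x, y, bins=5):
    """Mutual information between a discretized feature and class labels."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xi, yi in zip(xd, y):
        joint[xi, classes[yi]] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def select_features(X, y, threshold=0.9, bins=5):
    """Keep the highest-information-gain feature from each similarity group."""
    groups = group_features(X, threshold)
    return [max(g, key=lambda j: information_gain(X[:, j], y, bins))
            for g in groups]
```

In this sketch, two nearly collinear features land in the same group (removing redundancy), and the group member carrying the most class information survives (removing irrelevance), which mirrors the two roles the method assigns to cosine similarity and information gain respectively.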
This paper is organized as follows. Section 2 reviews related work on feature selection algorithms. Section 3 gives preliminaries on the terms used later in the paper, namely similarity measures and mutual information (information gain). The proposed method is explained through algorithms and a model in Section 4. Section 5 lists and briefly describes the datasets used in the experiments. Sections 6 and 7 present the results of the proposed method on these datasets, along with a comparison to other methods. Section 8 concludes the paper with the future scope.