1. Introduction
Performance improvement is a central challenge in supervised classification. The main objective is to build a good model for the classification task. This can be achieved either by improving the learning algorithm or by improving the quality of the data. Different types of preprocessing, such as the replacement of noisy and missing values, discretization, or feature selection, can improve the quality of the data.
Microarrays have been a source of data for a wide range of biomedical investigations. They are useful for distinguishing between or diagnosing different types of disease (Saeys, Inza, & Larranaga, 2007). A simple classification task consists of separating healthy patients from cancer patients (Bolon-Canedo, Sanchez-Marono, Alonso-Betanzos, Benitez, & Herrera, 2014). These datasets are characterized by a large number of genes associated with a small number of samples. This imbalance can cause overfitting in the classifier and requires a high computational run time. In addition, this type of dataset is noisy and complex (Alshamlan, Badr, & Alohali, 2015), which disrupts the classification task and reduces the performance of the classifier. Selecting the most relevant genes and improving performance on microarray datasets is a challenging task because a substantial number of genes are irrelevant (Dashtban & Balafar, 2017).
Feature (or gene) selection aims to select a subset of m pertinent attributes, where m < N and N is the number of features in the original set. By excluding irrelevant, noisy, and redundant features (Bolon-Canedo, Sanchez-Marono, & Alonso-Betanzos, 2015), it reduces the dimensionality of the learning dataset, which makes the classification model more appropriate and reduces the learning time complexity.
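The idea of keeping the m most pertinent of N features can be sketched in a few lines. The relevance score below (the absolute difference of the per-class feature means) and the toy expression matrix are illustrative assumptions, not a specific method from the cited literature:

```python
# Minimal sketch of feature (gene) selection: keep the m most relevant
# of N features. The score (absolute difference of per-class means) is
# an illustrative choice of relevance measure, not a published method.

def select_top_m(X, y, m):
    """Return the indices of the m features with the largest class-mean gap."""
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        class0 = [row[j] for row, label in zip(X, y) if label == 0]
        class1 = [row[j] for row, label in zip(X, y) if label == 1]
        mean0 = sum(class0) / len(class0)
        mean1 = sum(class1) / len(class1)
        scores.append((abs(mean0 - mean1), j))
    scores.sort(reverse=True)          # most discriminative first
    return sorted(j for _, j in scores[:m])

# Toy expression matrix: 4 samples x 3 genes, binary class labels.
X = [[5.0, 0.1, 2.0],
     [4.8, 0.2, 2.1],
     [1.0, 0.1, 2.0],
     [1.2, 0.3, 2.2]]
y = [0, 0, 1, 1]

print(select_top_m(X, y, m=1))  # gene 0 separates the two classes best
```

On real microarray data N is in the thousands, so the same ranking idea is applied with statistically grounded scores rather than a raw mean difference.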
Several approaches have been proposed in the literature to solve the feature selection problem. We distinguish three types of methods: filter, wrapper, and hybrid or embedded methods. Filter methods select the most important features using statistical measures to compute the relevance of the candidate gene subset. They are characterized by their low computational time (Apolloni, Leguizamón, & Alba, 2016), because they do not use the classifier. Wrapper approaches, on the other hand, connect the learning algorithm to an optimization algorithm, such as a metaheuristic, to select the best subsets (Lv, Peng, Chen, & Sun, 2016). Because they use the performance of the classifier as the fitness value, these methods generally perform better than filter ones. The third family, called embedded methods, combines the characteristics of the wrapper and filter methods: the selection of the best subsets is done in parallel with the learning process, as in tree-based algorithms, and they are specific to a particular learning algorithm (Zhang & Deng, 2007). Table 1 summarizes some approaches used in gene selection.
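The wrapper idea described above can be sketched as follows: the fitness of a candidate gene subset is the accuracy a classifier achieves using only those genes. The nearest-centroid rule, the leave-one-out evaluation, and the toy data are assumptions made for illustration; in practice a metaheuristic would replace the exhaustive loop:

```python
from itertools import combinations

# Wrapper-style selection sketch: subset fitness = classifier accuracy.
# The classifier (nearest centroid) and the exhaustive search over
# fixed-size subsets are illustrative simplifications.

def nearest_centroid_accuracy(X, y, subset):
    """Leave-one-out accuracy of a nearest-centroid rule on `subset`."""
    correct = 0
    for i in range(len(X)):
        centroids = {}
        for label in set(y):
            rows = [[X[k][j] for j in subset]
                    for k in range(len(X)) if k != i and y[k] == label]
            centroids[label] = [sum(col) / len(col) for col in zip(*rows)]
        xi = [X[i][j] for j in subset]
        pred = min(centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(xi, centroids[c])))
        correct += (pred == y[i])
    return correct / len(X)

def best_subset(X, y, size):
    """Exhaustively pick the subset of `size` genes with highest fitness."""
    n = len(X[0])
    return max(combinations(range(n), size),
               key=lambda s: nearest_centroid_accuracy(X, y, s))

# Toy expression matrix: 4 samples x 3 genes, binary class labels.
X = [[5.0, 0.1, 2.0],
     [4.8, 0.2, 2.1],
     [1.0, 0.1, 2.0],
     [1.2, 0.3, 2.2]]
y = [0, 0, 1, 1]

print(best_subset(X, y, size=1))  # the single gene with highest fitness
```

Filter methods would instead score genes without ever calling the classifier, which is why they run faster but give no direct guarantee about downstream accuracy.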
Finding the relevant subsets is an NP-hard problem due to the large number of subsets to examine; it therefore calls for optimization methods such as metaheuristics.
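The size of the search space behind this hardness claim is easy to make concrete: a dataset with N genes has 2^N − 1 non-empty candidate subsets, so exhaustive evaluation becomes infeasible long before microarray-scale N (thousands of genes):

```python
# Number of non-empty feature subsets for N features: 2**N - 1.
# This is why exhaustive subset evaluation is intractable and
# metaheuristic search is used instead.

def n_subsets(n_features):
    return 2 ** n_features - 1

print(n_subsets(10))    # 1023 subsets: still enumerable
print(n_subsets(100))   # ~1.3e30 subsets: already intractable
```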