1. Introduction
High-dimensional data such as microarray data consist of thousands of attributes or dimensions (Piao et al., 2014), and mining such data is usually very difficult. One of the major problems with high-dimensional datasets is that, in many cases, not all the measured variables are important for understanding the underlying phenomena of interest (Hall, 2006; Piao et al., 2012). In many applications, reducing the number of dimensions of the original data prior to any modelling is therefore desirable (Guyon et al., 2002; Han et al., 2012). Feature selection is a method that reduces both the data and its associated computational complexity, making it a frequently used pre-processing step for various tasks in machine learning (Yu & Liu, 2003). It is the process of selecting a subset of the original features so that the feature space is optimally reduced according to a certain evaluation criterion.
Feature selection has proven effective in removing irrelevant and redundant features, increasing the efficiency of learning tasks, improving learning performance such as predictive accuracy, and enhancing the comprehensibility of learned results (Wu et al., 2012). In recent years, data have become increasingly voluminous, in both the number of instances and the number of attributes, in many applications such as genome projects (Golub et al., 1999; Xing et al., 2001), text categorization (Yang & Pederson, 1997; Zaghloul et al., 2009), image retrieval (Rui et al., 1999), and customer relationship management (Ng & Liu, 2000), and feature selection is an essential component for mining important knowledge from such data. Over the last two decades, various feature selection techniques have been successfully adopted for microarray data analysis to explore gene expression and mine important biological knowledge. In (Ghosh et al., 2019), a Recursive Memetic Algorithm (RMA) was applied to seven microarray datasets to identify biomarkers and to discover various biological terms, including Gene Ontology (GO), Transcription Factor (TF), and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, exposing the relationship between the detected biomarkers and the cancers concerned. Xu et al. (2020) applied a multi-scale supervised clustering-based feature selection algorithm (MCBFS) to find relevant features and remove redundant ones from genomic data; additionally, they developed a general framework named McbfsNW that uses gene expression data and protein-protein interactions (PPIs) to identify robust biomarkers and therapeutic targets for the diagnosis and therapy of diseases.
Feature selection algorithms largely fall into two broad categories: wrapper methods and filter methods (Das, 2001; Kohavi & John, 1997). Wrapper methods require a predominant learning algorithm and follow a global greedy search approach to evaluate possible combinations of features (Jain et al., 2018). The number of possible solutions for a dataset of n features is 2^n, and the complexity grows rapidly as the number of features increases; hence, wrapper methods are computationally expensive (Dashtban & Balafar, 2017). Filter methods, on the other hand, rely on the training data to select relevant features without involving a learning algorithm, and are therefore much faster than wrapper methods. Feature selection and classification of microarray data pose severe challenges for scientists for the following reasons.
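The contrast between the two search spaces can be sketched in a few lines of pure Python. The toy data, the feature-scoring measure (absolute difference of class means), and all function names below are illustrative assumptions, not part of any method cited above; the sketch only shows why a wrapper faces 2^n candidate subsets while a filter scores each feature once.

```python
from itertools import combinations

def count_wrapper_subsets(n):
    """Number of candidate subsets a wrapper search must consider: 2^n
    (including the empty set), enumerated here explicitly."""
    return sum(1 for k in range(n + 1) for _ in combinations(range(n), k))

def filter_rank(X, y):
    """Filter-style selection: score each feature independently by a simple
    class-separation measure (absolute difference of class means), with no
    learning algorithm involved. Returns feature indices, best first."""
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        col = [row[j] for row in X]
        pos = [v for v, label in zip(col, y) if label == 1]
        neg = [v for v, label in zip(col, y) if label == 0]
        mean = lambda xs: sum(xs) / len(xs)
        scores.append((abs(mean(pos) - mean(neg)), j))
    return [j for _, j in sorted(scores, reverse=True)]

# Hypothetical toy "expression" matrix: 4 samples x 3 features;
# feature 0 clearly separates the two classes.
X = [[5.0, 1.0, 2.0],
     [5.1, 1.2, 1.9],
     [0.9, 1.1, 2.1],
     [1.0, 0.9, 2.0]]
y = [1, 1, 0, 0]

print(count_wrapper_subsets(3))   # 2^3 = 8 candidate subsets for a wrapper
print(filter_rank(X, y))          # filter ranking; feature 0 comes first
```

With thousands of genes, the wrapper's 2^n space is infeasible to enumerate, whereas the filter's per-feature pass stays linear in the number of features, which is why filters dominate as a first step on microarray data.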