Biomarker Identification From Gene Expression Based on Symmetrical Uncertainty

Emon Asad, Ayatullah Faruk Mollah
Copyright: © 2021 | Pages: 19
DOI: 10.4018/IJIIT.289966

Abstract

In this paper, the authors present an effective information-theoretic feature selection method, symmetrical uncertainty, to classify gene expression microarray data and detect biomarkers from it. Information gain and symmetrical uncertainty are used to rank the features: based on their computed symmetrical uncertainty values, features are sorted from most to least informative. The top-ranked features are then passed to random forest, logistic regression, and other well-known classifiers with leave-one-out cross-validation to construct the best classification model(s) and thereby select the most important genes from the microarray datasets. Results on the leukemia and colon cancer datasets, in terms of classification accuracy, running time, root mean square error, and other measures, demonstrate the effectiveness of the proposed approach, which is also considerably faster than many wrapper or ensemble methods.

1. Introduction

High-dimensional data such as microarray data consist of thousands of attributes or dimensions (Piao et al., 2014) and are usually very difficult to mine. One of the major problems with high-dimensional datasets is that, in many cases, not all the measured variables are important for understanding the underlying phenomena of interest (Hall, 2006; Piao et al., 2012). In many applications, it is therefore desirable to reduce the number of dimensions of the original data prior to any modelling (Guyon et al., 2002; Han et al., 2012). Feature selection reduces both the data and its associated computational complexity, making it a frequently used pre-processing step for various machine learning tasks (Yu & Liu, 2003). It is the process of selecting a subset of the original features so that the feature space is optimally reduced according to a certain evaluation criterion.
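As a concrete illustration of the information-theoretic criterion used in this paper, symmetrical uncertainty normalizes information gain (mutual information) by the entropies of the two variables: SU(X, Y) = 2 · IG(X; Y) / (H(X) + H(Y)), which lies in [0, 1]. A minimal sketch for discrete variables (gene expression values would first need to be discretized, e.g., by binning) could look like this:

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (base 2) of a discrete sequence."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy H(X, Y)
    ig = hx + hy - hxy               # information gain = mutual information
    denom = hx + hy
    return 2.0 * ig / denom if denom > 0 else 0.0
```

A perfectly predictive feature yields SU = 1, an independent one SU = 0; the normalization by H(X) + H(Y) is what removes information gain's bias toward features with many distinct values.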

Feature selection has proven effective in removing irrelevant and redundant features, increasing efficiency in learning tasks, improving learning performance such as predictive accuracy, and enhancing the comprehensibility of learned results (Wu et al., 2012). In recent years, data have become increasingly voluminous in both the number of instances and the number of attributes in applications such as genome projects (Golub et al., 1999; Xing et al., 2001), text categorization (Yang & Pedersen, 1997; Zaghloul et al., 2009), image retrieval (Rui et al., 1999), and customer relationship management (Ng & Liu, 2000), and feature selection is an essential component for mining important knowledge from such data. Over the last two decades, various feature selection techniques have been successfully adopted for microarray data analysis to explore gene expression and mine important biological knowledge. In (Ghosh et al., 2019), a Recursive Memetic Algorithm (RMA) was applied to seven microarray datasets to identify biomarkers, and various biological terms, including Gene Ontology (GO), Transcription Factor (TF), and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, were explored to expose the relationship between the detected biomarkers and the concerned cancers. Xu et al. (2020) applied a multi-scale supervised clustering-based feature selection algorithm (MCBFS) to find relevant features and remove redundant ones from genome data; additionally, they developed a general framework named McbfsNW that uses gene expression data and protein-protein interactions (PPIs) to identify robust biomarkers and therapeutic targets for disease diagnosis and therapy.

Feature selection algorithms largely fall into two broad categories: wrapper methods and filter methods (Das, 2001; Kohavi & John, 1997). Wrapper methods wrap a predominant learning algorithm around the search, evaluating candidate feature subsets with that learner (Jain et al., 2018). For a dataset of n features, the number of possible subsets is 2^n, so the search space grows exponentially with the number of features; hence, wrapper methods are computationally expensive (Dashtban & Balafar, 2017). Filter methods, on the other hand, rely on the training data alone to select relevant features without involving a learning algorithm, and are therefore much faster than wrapper methods. Feature selection and classification of microarray data pose severe challenges for scientists for the following reasons.
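The filter approach described above can be sketched in a few lines: score each feature independently against the class label and keep the top-ranked ones, with no learner in the loop. The score function below is a hypothetical stand-in (absolute Pearson correlation) for illustration only; the paper's actual criterion is symmetrical uncertainty, and any univariate relevance score can be plugged in:

```python
import numpy as np

def rank_features_by_score(X, y, score_fn):
    """Filter-style selection: score each feature column independently
    against the class label y, then sort most-informative first."""
    scores = np.array([score_fn(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1]  # feature indices, best first

def abs_corr(col, y):
    # toy univariate relevance score (stand-in for symmetrical uncertainty)
    return abs(np.corrcoef(col, y)[0, 1])

# tiny toy matrix: 4 samples x 3 features, binary class labels
X = np.array([[1., 0., 5.],
              [2., 0., 3.],
              [3., 1., 4.],
              [4., 1., 2.]])
y = np.array([0, 0, 1, 1])

order = rank_features_by_score(X, y, abs_corr)
top2 = order[:2]  # keep the two most informative features
```

Because each feature is scored in isolation, the cost is linear in the number of features, which is what makes filter methods tractable on microarray data with thousands of genes, in contrast to the exponential subset search of wrappers.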
