Feature Selection for Designing a Novel Differential Evolution Trained Radial Basis Function Network for Classification

Sanjeev Kumar Dash, Aditya Prakash Dash, Satchidananda Dehuri, Sung-Bae Cho
Copyright: © 2013 | Pages: 18
DOI: 10.4018/jamc.2013010103

Abstract

This work presents a novel approach for the classification of both balanced and unbalanced datasets by suitably tuning the parameters of radial basis function networks, with the additional cost of feature selection. Feeding an optimal and relevant set of features to a radial basis function network may greatly enhance the network's efficiency (in terms of accuracy) while at the same time compacting its size. In this paper, the authors use information gain theory (a kind of filter approach) for reducing the features and differential evolution for tuning the centers and spreads of the radial basis functions. The proposed approach is validated on several benchmark highly skewed and balanced datasets retrieved from the University of California, Irvine (UCI) repository. The experimental results are encouraging and motivate further extensive research on highly skewed data.

1. Introduction

It is widely accepted that the accuracy of a discovered model (e.g., a neural network (NN) (Haykin, 1994), rules (Das, Roy, Dehuri, & Cho, 2011), or a decision tree (Carvalho & Freitas, 2004)) strongly depends on the quality of the data being mined. Hence, feature selection, one of the preprocessing tasks used to obtain quality data, has attracted the attention of many researchers (Battiti, 1994; Yan, Wang, & Xie, 2008). It is the process of selecting a subset of the available features to use in empirical modeling. Like feature selection, instance selection (Liu & Motoda, 2002) aims to choose a subset of samples that achieves the original purpose of the classification task as if the whole dataset were used. Many evolutionary and non-evolutionary variants of these approaches are discussed in Derrac, Garcia, and Herrera (2010). The ideal outcome of instance selection is a model-independent, minimal sample of the data that can accomplish the task with little or no performance deterioration. However, in this work we restrict ourselves to feature selection.

Feature selection can be broadly classified into two categories: 1. the filter approach (which relies on generic statistical measures); and 2. the wrapper approach (which relies on the accuracy of a specific classifier) (Aruna et al., 2012). In this work, feature selection is performed using an information gain (entropy) measure, with the goal of selecting a subset of features that preserves as much as possible of the relevant information found in the full feature set. After the relevant features are selected, a radial basis function network is fine-tuned using differential evolution and applied to classification, as sketched below.
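As an illustration of the filter step, the following is a minimal sketch (not the authors' implementation) of ranking features by information gain. The function names, the equal-width discretization, and the bin count are assumptions made here for illustration only.

import numpy as np

def entropy(labels):
    # Shannon entropy of a label vector.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, n_bins=10):
    # Information gain of one (discretized) feature with respect to the class.
    edges = np.histogram_bin_edges(feature, bins=n_bins)   # equal-width bins (assumed scheme)
    bins = np.digitize(feature, edges)
    h_before = entropy(labels)
    h_after = 0.0
    for b in np.unique(bins):
        mask = bins == b
        h_after += mask.mean() * entropy(labels[mask])
    return h_before - h_after

def select_features(X, y, k):
    # Indices of the k features with the highest information gain.
    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(gains)[::-1][:k]

In a typical use, X would hold the UCI dataset's feature matrix and y its class labels, and the reduced matrix X[:, select_features(X, y, k)] would be passed on to the RBF network.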

Over the past decade, radial basis function (RBF) networks have attracted a great deal of interest in a variety of domains (Haykin, 1994; Novakovic, 2011; Naveen, Ravi, Rao, & Chauhan, 2010; Liu, Mattila, & Lampinen, 2005). One reason is that they form a unifying link between function approximation, regularization, noisy interpolation, classification, and density estimation. Training RBF networks is also usually faster than training multi-layer perceptron networks. RBF network training usually proceeds in two steps: first, the basis function parameters (corresponding to the hidden units) are determined by clustering; second, the final-layer weights are determined by least squares, which reduces to solving a simple linear system. Thus, the first stage is an unsupervised method that is relatively fast, and the second stage requires only the solution of a linear problem, which is also fast.
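To make the two-stage procedure concrete, a minimal sketch in Python/NumPy is given below. It assumes Gaussian basis functions with a shared spread and uses SciPy's k-means for the clustering stage; both choices are illustrative rather than the authors' exact setup.

import numpy as np
from scipy.cluster.vq import kmeans2

def train_rbf(X, Y, n_centers, spread=1.0):
    # Stage 1 (unsupervised): place the centers by clustering the inputs.
    centers, _ = kmeans2(X, n_centers, minit='++')

    # Hidden-layer activations: Gaussian basis functions with a shared spread.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * spread ** 2))

    # Stage 2 (supervised): solve the linear system H W = Y by least squares.
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return centers, W

def predict_rbf(X, centers, W, spread=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * spread ** 2))
    return H @ W

Here X is the (samples x features) input matrix and Y a one-hot target matrix; predicted class labels are obtained by taking the argmax of each output row.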

One of the advantages of RBF neural networks, compared to multi-layer perceptron networks, is the possibility of choosing suitable parameters for the units of the hidden layer without having to perform a non-linear optimization of the network parameters. However, selecting the appropriate number of basis functions remains a critical issue for RBF networks. The number of basis functions controls the complexity, and hence the generalization ability, of the network. An RBF network with too few basis functions gives poor predictions on new data, i.e., poor generalization, since the model has limited flexibility. On the other hand, an RBF network with too many basis functions also yields poor generalization, since it is too flexible and fits the noise in the training data. A small number of basis functions yields a high-bias, low-variance estimator, whereas a large number of basis functions yields a low-bias but high-variance estimator. The best generalization performance is obtained by a compromise between the conflicting requirements of reducing bias and reducing variance, a trade-off that highlights the importance of optimizing the complexity of the model. However, choosing an optimal number of kernels is beyond the focus of this paper.
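While the number of kernels is treated as fixed, the abstract states that the centers and spreads of the basis functions are tuned with differential evolution rather than determined purely by clustering. The sketch below shows a generic DE/rand/1/bin loop over a flattened parameter vector; the encoding, bounds, control parameters (F, CR, population size), and the fitness function (e.g., validation accuracy of the decoded RBF network) are assumptions for illustration, not the authors' exact configuration.

import numpy as np

def differential_evolution(fitness, dim, pop_size=30, F=0.5, CR=0.9,
                           bounds=(-1.0, 1.0), generations=100, seed=None):
    # fitness maps a flat parameter vector (encoding centers and spreads)
    # to a score to be maximized, e.g., validation accuracy.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    scores = np.array([fitness(ind) for ind in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # DE/rand/1: three distinct individuals different from i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # Binomial crossover with one guaranteed mutant component.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            s = fitness(trial)
            if s >= scores[i]:          # greedy selection
                pop[i], scores[i] = trial, s

    best = np.argmax(scores)
    return pop[best], scores[best]

In this sketch, the caller would decode the best vector back into center coordinates and spread values before building the final RBF classifier.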
