DE-Based RBFNs for Classification With Special Attention to Noise Removal and Irrelevant Features

Ch. Sanjeev Kumar Dash, Ajit Kumar Behera, Sarat Chandra Nayak
DOI: 10.4018/978-1-5225-2857-9.ch011

Abstract

This chapter presents a novel approach for classifying datasets by suitably tuning the parameters of radial basis function networks, with the additional cost of feature selection. Feeding an optimal and relevant set of features to a radial basis function network can greatly enhance its accuracy while at the same time compacting its size. In this chapter, the authors use information gain theory (a kind of filter approach) for reducing the features and differential evolution for tuning the centers and spreads of the radial basis functions. Different feature selection methods, the handling of missing values, and the removal of inconsistencies to improve the classification accuracy of the proposed model are emphasized. The proposed approach is validated on a few benchmark datasets, both highly skewed and balanced, retrieved from the University of California, Irvine (UCI) repository. The experimental study is encouraging and motivates further extensive research on highly skewed data.
Chapter Preview

1. Introduction

Classification is one of the fundamental tasks in data mining and pattern recognition. Over the years many models have been proposed. However, it is a consensus that the accuracy of the discovered model (e.g., a neural network (NN), rule set, or decision tree) strongly depends on the quality of the data being mined. Hence, inconsistency removal and feature selection have attracted the attention of many researchers. If the inconsistent data is simply deleted or classified as a new category, then inevitably some useful information will be lost. The method used in this chapter for making the dataset consistent is based on a Bayesian statistical method: each inconsistent record is classified as its most probable class, and the redundant data records are deleted as well. Thus, the loss of information due to simple deletion or random classification of inconsistent data is reduced, and the size of the dataset is also reduced.
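A minimal sketch of this relabelling step is given below, assuming records are rows of a pandas DataFrame; the function names and the toy data are illustrative assumptions, not the chapter's code. Records sharing identical feature values but carrying different class labels are assigned the label that is empirically most probable for that feature combination, and exact duplicates are then dropped.

```python
import pandas as pd

def resolve_inconsistencies(df, feature_cols, class_col):
    """Relabel inconsistent records with their most probable class, then drop duplicates."""
    def most_probable(group):
        # empirical P(class | feature combination): pick the modal label
        group = group.copy()
        group[class_col] = group[class_col].mode().iloc[0]
        return group

    relabelled = (df.groupby(feature_cols, group_keys=False)
                    .apply(most_probable))
    # identical records are now redundant and can be deleted
    return relabelled.drop_duplicates().reset_index(drop=True)

# toy example: the feature combination (0, 1) appears with two different labels
data = pd.DataFrame({
    "f1":    [0, 0, 0, 1, 1],
    "f2":    [1, 1, 1, 0, 0],
    "label": ["a", "a", "b", "b", "b"],
})
print(resolve_inconsistencies(data, ["f1", "f2"], "label"))
```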

Feature selection is the process of selecting a subset of the available features to use in empirical modeling. Like feature selection, instance selection chooses a subset of samples that achieves the original purpose of the classification task as if the whole dataset were used. Many evolutionary and non-evolutionary variants of both approaches have been discussed in the literature. The ideal outcome of instance selection is a model-independent, minimal sample of the data that can accomplish the task with little or no performance deterioration. Unlike feature selection and instance selection, feature extraction at the feature-level fusion stage has recently attracted special attention from data mining and machine learning researchers when designing classifiers.

Feature selection methods can be broadly classified into two categories: i) the filter approach, which relies on generic statistical measures; and ii) the wrapper approach, which relies on the accuracy of a specific classifier. In this work, feature selection is performed using an information gain (entropy) measure, with the goal of selecting a subset of features that preserves as much as possible of the relevant information found in the entire set of features. After the relevant set of features has been selected, a radial basis function network fine-tuned by differential evolution is used to classify both balanced and imbalanced datasets. In imbalanced classification problems, the number of instances of each class can be very different.
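A minimal sketch of the information gain ranking behind such a filter step is shown below, assuming discrete-valued features; the helper names are illustrative and not taken from the chapter.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(Y) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """IG(X) = H(Y) - sum_v P(X = v) * H(Y | X = v)."""
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    conditional = sum(w * entropy(labels[feature == v])
                      for v, w in zip(values, weights))
    return entropy(labels) - conditional

def select_top_k_features(X, y, k):
    """Rank the columns of X by information gain and return the indices of the top k."""
    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(gains)[::-1][:k]
```

Continuous attributes would first have to be discretized (for example by equal-width binning) before such a ranking is meaningful.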

Over the past decade, radial basis function (RBF) networks have attracted a lot of interest in various domains. One reason is that they form a unifying link between function approximation, regularization, noisy interpolation, classification, and density estimation. It is also the case that training RBF networks is usually faster than training multi-layer perceptron networks. RBF network training usually proceeds in two steps: first, the basis function parameters (corresponding to the hidden units) are determined by clustering; second, the final-layer weights are determined by least squares, which reduces to solving a simple linear system. Thus, the first stage is an unsupervised method which is relatively fast, and the second stage requires the solution of a linear problem, which is also fast.
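A sketch of this two-stage procedure is given below, assuming Gaussian basis functions, k-means for the centres, a common heuristic for a single shared spread, and one-hot encoded targets; it illustrates the generic scheme rather than the chapter's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def design_matrix(X, centers, sigma):
    """Gaussian activations for every sample/centre pair, plus a bias column."""
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-dist ** 2 / (2 * sigma ** 2))
    return np.hstack([phi, np.ones((X.shape[0], 1))])

def train_rbf(X, Y, n_centers, seed=0):
    # stage 1 (unsupervised): place the centres by k-means clustering
    centers = KMeans(n_clusters=n_centers, n_init=10,
                     random_state=seed).fit(X).cluster_centers_
    # heuristic shared spread: sigma = d_max / sqrt(2 * M), with d_max the
    # largest distance between any two centres and M the number of centres
    d_max = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1).max()
    sigma = d_max / np.sqrt(2 * n_centers)
    # stage 2 (supervised): solve the output-layer weights by linear least squares
    W, *_ = np.linalg.lstsq(design_matrix(X, centers, sigma), Y, rcond=None)
    return centers, sigma, W

def predict_rbf(X, centers, sigma, W):
    return design_matrix(X, centers, sigma) @ W   # argmax over columns gives the class
```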

One of the advantages of RBF neural networks, compared to multi-layer perceptron networks, is the possibility of choosing suitable parameters for the units of the hidden layer without having to perform a nonlinear optimization of the network parameters. However, the problem of selecting the appropriate number of basis functions remains a critical issue for RBF networks. The number of basis functions controls the complexity, and hence the generalization ability, of RBF networks. An RBF network with too few basis functions gives poor predictions on new data, i.e., poor generalization, since the model has limited flexibility. On the other hand, an RBF network with too many basis functions also yields poor generalization, since it is too flexible and fits the noise in the training data. A small number of basis functions yields a high-bias, low-variance estimator, whereas a large number of basis functions yields a low-bias but high-variance estimator. The best generalization performance is obtained via a compromise between the conflicting requirements of reducing bias while simultaneously reducing variance. This trade-off highlights the importance of optimizing the complexity of the model in order to achieve the best generalization. However, choosing an optimal number of kernels is beyond the focus of this chapter.
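For a fixed number of basis functions, the differential evolution tuning of centres and spreads mentioned above can be sketched as follows. This is a minimal DE/rand/1/bin illustration under assumed settings (encoding, population size, F, CR); the fitness simply solves the output weights by least squares and measures the training error, and none of the names are taken from the chapter.

```python
import numpy as np

def de_tune_rbf(X, Y, n_centers, pop_size=20, gens=50, F=0.5, CR=0.9, seed=0):
    """Evolve RBF centres and per-centre spreads with DE/rand/1/bin (illustrative)."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    dim = n_centers * n_feat + n_centers            # centres followed by spreads
    lo, hi = X.min(axis=0), X.max(axis=0)

    def decode(v):
        centers = v[:n_centers * n_feat].reshape(n_centers, n_feat)
        spreads = np.abs(v[n_centers * n_feat:]) + 1e-6
        return centers, spreads

    def fitness(v):
        centers, spreads = decode(v)
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        phi = np.exp(-dist ** 2 / (2 * spreads[None, :] ** 2))
        W, *_ = np.linalg.lstsq(phi, Y, rcond=None)   # output weights by least squares
        return np.mean((phi @ W - Y) ** 2)            # training error as fitness

    # initial population drawn uniformly inside the data range (spreads in [0.1, 2.0])
    low = np.concatenate([np.tile(lo, n_centers), np.full(n_centers, 0.1)])
    high = np.concatenate([np.tile(hi, n_centers), np.full(n_centers, 2.0)])
    pop = rng.uniform(low, high, size=(pop_size, dim))
    scores = np.array([fitness(ind) for ind in pop])

    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)                  # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True           # guarantee at least one gene crosses
            trial = np.where(cross, mutant, pop[i])   # binomial crossover
            s = fitness(trial)
            if s < scores[i]:                         # greedy selection
                pop[i], scores[i] = trial, s
    return decode(pop[scores.argmin()])
```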
