Differential Evolution-Based Optimization of Kernel Parameters in Radial Basis Function Networks for Classification

Ch. Sanjeev Kumar Dash, Ajit Kumar Behera, Satchidananda Dehuri, Sung-Bae Cho
Copyright: © 2013 | Pages: 25
DOI: 10.4018/jaec.2013010104

Abstract

In this paper, a two-phase learning algorithm with a modified kernel for radial basis function neural networks is proposed for classification. In phase one, a new meta-heuristic approach, differential evolution, is used to reveal the parameters of the modified kernel. The second phase focuses on optimizing the weights for learning the network. Further, a predefined set of basis functions is examined empirically to determine which basis function is better suited to which kind of domain. The simulation results show that the proposed learning mechanism evidently produces better classification accuracy vis-à-vis radial basis function neural networks (RBFNs) and genetic algorithm-radial basis function (GA-RBF) neural networks.

Introduction

Radial basis function networks (RBFNs) (Powell, 1985; Broomhead et al., 1988; Buhmann, 2010) have been studied in many disciplines, such as pattern recognition (Theodoridis et al., 2006), medicine (Subashini et al., 2008), multimedia applications (Dhanalakshmi et al., 2009), computational finance (Sheta et al., 2001), and software engineering (Idri et al., 2010). The RBFN emerged as a network variant in the late 1980s; however, its roots are entrenched in much older work in pattern recognition, numerical analysis, and other related fields (Park et al., 1991). Radial basis function networks have attracted the attention of many researchers because of their: (i) universal approximation capability (Park et al., 1991), (ii) compact topology (Lee et al., 1991), and (iii) fast learning speed (Moody et al., 1989).

In the context of universal approximation, it has been proved that "a radial basis function network can approximate arbitrarily well any multivariate continuous function on a compact domain if a sufficient number of radial basis function units are given" (Zheng et al., 1996). Note, however, that the number of kernels (k) chosen need not equal the number of training patterns (n). In general, it is better to have k much smaller than n, i.e., k << n. Besides the gain in computational complexity, the reduction in the number of kernels is beneficial for the generalization capability of the resulting model.
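As a concrete illustration of the k << n regime (a minimal sketch, not code from the paper), the snippet below approximates a noisy one-dimensional function with only k = 10 Gaussian kernels over n = 200 training points; all data and parameter choices are illustrative.

```python
# A minimal sketch (not from the paper) of the k << n setting: a Gaussian RBF
# model with k = 10 kernels fitted to n = 200 noisy samples of sin(x).
# Centers are simply spread over the input range; only the output weights are
# learned, via a least-squares solve.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 10                          # n training patterns, k kernels (k << n)
X = rng.uniform(-3.0, 3.0, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

centers = np.linspace(X.min(), X.max(), k).reshape(k, 1)
sigma = (X.max() - X.min()) / k         # one shared spread, a common heuristic

# Design matrix of Gaussian activations: phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2)).
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-d2 / (2.0 * sigma ** 2))  # shape (n, k)

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output weights
print("training RMSE:", np.sqrt(np.mean((y - Phi @ w) ** 2)))
```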

In RBFNs, other extensions are possible, e.g., adapting centers, weighted norms, devising new learning rules, and networks with novel and different types of basis functions and multiple scales. A variety of learning procedures for RBFNs have been developed (Moody et al., 1989; Chen et al., 1991; Zhao et al., 2002). Learning is normally divided into two phases: (1) the adjustment of the connection weight vector; and (2) the modification of the parameters of the RBF units, such as centers and spreads (Uykan et al., 1997; Gomm, 2000); a sketch of such a two-phase scheme follows.
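The snippet below is a hedged sketch of this two-phase idea, not the authors' algorithm: SciPy's stock differential_evolution stands in for the paper's DE variant, a plain Gaussian kernel stands in for the modified kernel, and the toy data and parameter choices are illustrative. DE searches the kernel parameters (centers and a shared spread), while the connection weights are solved in closed form for each candidate.

```python
# A hedged sketch of two-phase RBFN learning, NOT the paper's implementation:
# phase 1 searches kernel centers/spread with differential evolution, and
# phase 2 fits the output weights by least squares for fixed kernels.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
# Toy two-class problem: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.hstack([np.zeros(50), np.ones(50)])
k, d = 4, X.shape[1]                     # k RBF units, d input dimensions

def rbf_design(X, centers, sigma):
    """Gaussian activations phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def solve_weights(Phi, y):
    # Weight-learning phase: closed-form least-squares fit of output weights.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def fitness(theta):
    # Kernel-parameter phase: decode centers and spread from the DE genome,
    # fit the weights, and score the training squared error.
    centers = theta[: k * d].reshape(k, d)
    sigma = theta[-1]
    Phi = rbf_design(X, centers, sigma)
    w = solve_weights(Phi, y)
    return np.mean((Phi @ w - y) ** 2)

bounds = [(-3.0, 3.0)] * (k * d) + [(0.05, 2.0)]   # centers, then spread
result = differential_evolution(fitness, bounds, maxiter=50, seed=1)

centers = result.x[: k * d].reshape(k, d)
sigma = result.x[-1]
Phi = rbf_design(X, centers, sigma)
w = solve_weights(Phi, y)
accuracy = np.mean(((Phi @ w) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```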
