Introduction
Radial basis function networks (RBFNs) (Powell, 1985; Broomhead et al., 1988; Buhmann, 2010) have been studied in many disciplines, such as pattern recognition (Theodoridis et al., 2006), medicine (Subashini et al., 2008), multimedia applications (Dhanalakshmi et al., 2009), computational finance (Sheta et al., 2001), and software engineering (Idri et al., 2010). The model emerged as a neural-network variant in the late 1980s, but its roots are entrenched in much older work in pattern recognition, numerical analysis, and related fields (Park et al., 1991). RBFNs have attracted the attention of many researchers because of their: (i) universal approximation capability (Park et al., 1991), (ii) compact topology (Lee et al., 1991), and (iii) fast learning speed (Moody et al., 1989).
In the context of universal approximation, it has been proved that "a radial basis function network can approximate arbitrarily well any multivariate continuous function on a compact domain if a sufficient number of radial basis function units are given" (Zheng et al., 1996). Note, however, that the number of kernels (k) need not equal the number of training patterns (n). In general, it is better to have k much smaller than n, i.e., k << n. Besides the gain in computational complexity, reducing the number of kernels benefits the generalization capability of the resulting model.
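To make the k << n point concrete, the following is a minimal sketch in Python (NumPy): n noisy samples of a one-dimensional function are fitted with only k Gaussian RBF units, whose output weights are solved by linear least squares. The Gaussian kernel, the evenly spaced centers, and the single shared spread are illustrative assumptions, not choices prescribed by the works cited above.

```python
import numpy as np

# Sketch: fit n training patterns with k Gaussian RBF units, k << n.
rng = np.random.default_rng(0)
n, k = 200, 10                                   # n patterns, k kernels

x = np.linspace(-3.0, 3.0, n)                    # training inputs
y = np.sin(x) + 0.05 * rng.standard_normal(n)    # noisy target function

centers = np.linspace(-3.0, 3.0, k)              # fixed, evenly spaced centers
spread = 0.6                                     # common width for all units

# Design matrix of kernel activations:
# Phi[i, j] = exp(-(x_i - c_j)^2 / (2 * spread^2))
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * spread**2))

# Output-layer weights by linear least squares
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ w
print("RMSE with", k, "kernels on", n, "patterns:",
      np.sqrt(np.mean((y - y_hat) ** 2)))
```

With k = 10 units the fit is already close on this smooth target; using one kernel per training pattern (k = n) would interpolate the noise as well, which is exactly the generalization concern raised above.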
In RBFNs, other extensions are possible, e.g., adapting centers, weighted norms, devising learning rules, and networks with novel basis functions or multiple scales. A variety of learning procedures for RBFNs have been developed (Moody et al., 1989; Chen et al., 1991; Zhao et al., 2002). Learning is normally divided into two phases: (1) the adjustment of the connection weight vector; and (2) the modification of the parameters of the RBF units, such as centers and spreads (Uykan et al., 1997; Gomm, 2000). A sketch of this two-phase scheme follows.
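The sketch below illustrates the two phases named above; the RBF unit parameters are fixed first so that the connection weights can then be solved in closed form. Using k-means for the centers, a maximum-distance heuristic for the spread, and least squares for the weights are common choices assumed here purely for illustration; they are not the specific procedures of the cited papers, and kmeans_centers and fit_rbfn are hypothetical helper names.

```python
import numpy as np

def kmeans_centers(X, k, iters=50, seed=0):
    """Plain k-means (hypothetical helper); returns k cluster centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def fit_rbfn(X, y, k):
    # Phase (2) in the text: set the RBF unit parameters -- centers from
    # k-means, and a shared spread from the maximum inter-center distance
    # (a common heuristic, assumed here for illustration).
    centers = kmeans_centers(X, k)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    spread = d.max() / np.sqrt(2 * k)
    # Phase (1) in the text: adjust the connection weight vector, here in
    # closed form by linear least squares on the kernel activations.
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                 axis=2) ** 2 / (2 * spread**2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, spread, w

# Usage: fit a 2-D toy target with k = 8 units.
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(300, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
centers, spread, w = fit_rbfn(X, y, k=8)
```

Solving the kernel parameters first makes the weight phase a linear problem, which is one reason such two-phase schemes train faster than fully coupled gradient descent; gradient-based refinement of centers and spreads, as in some of the cited procedures, can follow as a further step.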