A Theoretical and Empirical Study of Functional Link Neural Networks (FLANNs) for Classification


Satchidananda Dehuri, Sung-Bae Cho
DOI: 10.4018/978-1-61520-711-4.ch022

Abstract

In this chapter, the primary focus is on a theoretical and empirical study of functional link neural networks (FLNNs) for classification. We present a hybrid Chebyshev functional link neural network (cFLNN) without a hidden layer, trained with evolvable particle swarm optimization (ePSO), for classification. The resulting classifier is then used to assign the proper class label to an unknown sample. The hybrid cFLNN is a type of feed-forward neural network that transforms the non-linear input space into a higher dimensional space where linear separability is possible. In particular, the proposed hybrid cFLNN combines the best attributes of evolvable particle swarm optimization (ePSO), back-propagation learning (BP-learning), and Chebyshev functional link neural networks (cFLNN). We demonstrate its effectiveness in classifying unknown patterns using datasets obtained from the UCI repository. The computational results are compared with those of other higher order neural networks (HONNs), such as the functional link neural network with generic basis functions, the Pi-Sigma neural network (PSNN), the radial basis function neural network (RBFNN), and the ridge polynomial neural network (RPNN).
Chapter Preview

Introduction

In recent years, neural networks, in particular higher order neural networks (Ghosh & Shin, 1992), have been widely used to classify non-linearly separable patterns; the task can be viewed as one of approximating an arbitrary decision boundary, and a network that does so can successfully distinguish the various classes in the feature space. In reality, the boundaries between classes are as a rule nonlinear. It is also known that any non-linear surface can be approximated by a number of hyperplanes. Hence the problem of classification can be viewed as approximating the linear surfaces that appropriately model the class boundaries while yielding the minimum number of misclassified data points. In other words, a classifier partitions the training candidate space X into class-labeled regions Ci, i = 1, 2, ..., k, where k is the number of classes, such that ∪i=1,...,k Ci = X and Cj ∩ Cm = ∅ for j, m = 1, ..., k with j ≠ m (if there is no fuzziness). The feature space is denoted as D. If the feature space, with four or more dimensions, is partitioned linearly, the decision functions are called hyperplanes; otherwise the decision functions are hypersurfaces. A general hyperplane can be represented as H(x) = W·XT = w1x1 + w2x2 + ... + wDxD + wD+1, where W = [w1, w2, ..., wD, wD+1] and X = [x1, x2, ..., xD, 1] are called the weight and augmented feature vectors, respectively.
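
To make the hyperplane decision function concrete, the following minimal Python sketch evaluates H(x) = W·XT on an augmented feature vector and assigns one of two class labels by the sign of the result. The weights and sample points are illustrative placeholders, not values from the chapter.

import numpy as np

def hyperplane(w, x):
    # Evaluate H(x) = W . X^T, where X is the feature vector augmented with a constant 1
    x_aug = np.append(x, 1.0)
    return np.dot(w, x_aug)

def classify(w, x):
    # Two-class case: assign class 1 if H(x) >= 0, otherwise class 2
    return 1 if hyperplane(w, x) >= 0 else 2

# Illustrative example: D = 2 features, so the weight vector has D + 1 components
w = np.array([0.8, -0.5, 0.1])
print(classify(w, np.array([1.2, 0.3])))   # prints 1
print(classify(w, np.array([-0.4, 2.0])))  # prints 2

Augmenting X with a constant 1 folds the bias term wD+1 into the dot product, which is why the weight vector has D + 1 components.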

Artificial neural networks have become one of the most widely accepted soft computing tools for approximating the decision boundaries of a classification problem (Haykin, 1999; Mangasarian & Wild, 2008). This popularity stems from a number of reasons, including their capability to capture nonlinear input-output relationships among patterns; their biological plausibility, as compared to conventional statistical models (Fukunaga, 1990; Theodoridis & Koutroumbas, 1999); their potential for parallel implementation; and their celebrated robustness and graceful degradation. In fact, a multi-layer perceptron (MLP) with a suitable architecture is capable of approximating virtually any function of interest (Hornik, 1991). This does not mean that finding such a network is easy. On the contrary, problems such as trapping in local minima, saturation, weight interference, dependence on initial weights, and over-fitting make neural network training difficult.

An easy way to avoid these problems is to remove the hidden layers. This may seem counterintuitive at first, since it is the hidden layers that allow nonlinear input-output relationships to be captured. Encouragingly, however, they can be removed without giving up non-linearity, provided that the input layer is endowed with additional higher order units (Giles & Maxwell, 1987; Pao, 1989). This is the idea behind HONNs (Antyomov & Pecht, 2005) such as functional link neural networks (FLNNs) (Misra & Dehuri, 2007), ridge polynomial neural networks (RPNNs) (Shin & Ghosh, 1995; Shin & Ghosh, 1992a), recurrent high-order neural networks (RHONNs) (Kosmatopoulos, Polycarpou, Christodoulou & Ioannou, 1995; Kosmatopoulos & Christodoulou, 1997; Rovithakis & Christodoulou, 2000), and so on; a sketch of the functional-expansion idea follows.
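
The following is a minimal, hypothetical Python sketch of the functional-expansion idea behind a Chebyshev functional link network with no hidden layer: each input feature (assumed scaled to [-1, 1]) is expanded with Chebyshev polynomials of the first kind, and a single sigmoid output unit operates on the expanded pattern. The expansion order, the random weights, and the sigmoid output are illustrative assumptions; the chapter's hybrid ePSO/BP-learning training scheme is not shown.

import numpy as np

def chebyshev_expand(x, order=3):
    # Expand each feature with first-kind Chebyshev polynomials:
    # T0 = 1, T1 = x, T_{n+1} = 2x*T_n - T_{n-1}
    expanded = []
    for xi in x:
        T = [1.0, xi]
        for n in range(2, order + 1):
            T.append(2.0 * xi * T[n - 1] - T[n - 2])
        expanded.extend(T[: order + 1])
    return np.array(expanded)

def cflnn_output(w, x, order=3):
    # Single-layer output: sigmoid of the weighted sum of the expanded features
    z = np.dot(w, chebyshev_expand(x, order))
    return 1.0 / (1.0 + np.exp(-z))

# Example: 2 input features, order-3 expansion -> 2 * 4 = 8 expanded terms
x = np.array([0.5, -0.2])
w = np.random.uniform(-1, 1, size=2 * 4)   # placeholder weights, not a trained model
print(cflnn_output(w, x))                   # class score in (0, 1)

In a trained cFLNN, the weight vector w would be obtained by the hybrid ePSO and BP-learning procedure described in the chapter rather than by random initialisation.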
