Introduction
Classification is one of the most frequently studied problems in the field of Artificial Neural Networks (ANNs) and underpins much of human decision-making activity. Extensive recent research in neural classification has established that ANNs are a genuinely promising tool, and they have been widely applied to real-world classification tasks including bankruptcy prediction, handwriting recognition, speech recognition, application usage patterns, fault detection, and medical diagnosis (Banu & Nagaveni, 2012; Chen, 2010; Kumar & Rath, 2016; Grace & Williams, 2016; Anitha & Acharjya, 2016). One of the best-known types of Neural Network is the Multilayer Perceptron (MLP). The MLP consists of multiple layers of nodes, which gives the network the ability to solve problems that are not linearly separable. However, the MLP usually requires a fairly large number of free parameters in order to achieve good classification ability. When the number of inputs to the model and the number of hidden nodes become large, the MLP architecture becomes complex, resulting in slower operation. Furthermore, the difficulty of fixing an appropriate number of neurons per layer and an appropriate number of hidden layers also makes the MLP hard to train. One way to avoid these problems is to remove the hidden layers from the architecture, which leads to an alternative network named the Functional Link Neural Network (FLNN) (Pao & Takefuji, 1992). The FLNN is a flat network (without hidden layers); this topology reduces architectural complexity while still providing a nonlinear decision boundary, via functional expansion of the inputs, for solving non-linearly separable classification tasks.
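To make the flat topology concrete, the sketch below (function names and the trigonometric basis are illustrative choices, not taken from the cited works) expands a 2-dimensional input with a functional basis and computes the network output directly, with no hidden layer:

```python
import numpy as np

def functional_expansion(x):
    """Map each input x_i to [x_i, sin(pi*x_i), cos(pi*x_i)].

    A trigonometric basis is one common expansion choice in the
    FLNN literature; the expansion replaces the hidden layer as
    the source of nonlinearity.
    """
    return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)])

def flnn_output(x, weights, bias):
    """Flat FLNN: a single weighted sum over the expanded
    features passed through a sigmoid activation."""
    z = np.dot(weights, functional_expansion(x)) + bias
    return 1.0 / (1.0 + np.exp(-z))

# A 2-D input expands to 6 features, so 6 weights are needed.
x = np.array([0.5, -0.25])
w = np.zeros(6)  # illustrative (untrained) weights
y = flnn_output(x, w, 0.0)  # sigmoid(0) = 0.5
```

Because the expanded features enter a single linear layer, training reduces to tuning one weight vector, which is what makes the architecture attractive for the optimization schemes discussed next.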
The FLNN is usually trained by adjusting the weights of the connections between neurons. The standard method for tuning the weights in an FLNN is the Backpropagation (BP) learning algorithm. The BP algorithm, developed by Rumelhart et al. (Rumelhart, 1986), is the most widely known and widely used algorithm for training Neural Networks. The idea of BP is to reduce the network error iteratively until the network has learned the training data. During the FLNN-BP training phase, the feedforward calculation is combined with backward error propagation in order to adjust the connection weights. However, a crucial problem with the standard BP algorithm is that it can easily become trapped in local minima, especially on non-linearly separable classification problems (Dehuri & Cho, 2010a). The performance of network learning is also strictly dependent on the shape of the error surface, the values of the initial connection weights, and parameters such as the learning rate and momentum (Liu et al., 2004). To address this, the Artificial Bee Colony (ABC) optimization algorithm has been used to optimize the weights of the FLNN in place of the BP algorithm (Hassim & Ghazali, 2013). The ABC algorithm, which simulates the intelligent foraging behavior of a honey bee swarm, was proposed by Karaboga (Karaboga, 2005) for solving numerical optimization problems. Training the connection weights of a neural network can be treated as an optimization task (Karaboga et al., 2007), since the goal of training is to find the optimal weight set for the network. Training the FLNN with the ABC algorithm (FLNN-ABC) on Boolean function classification (Hassim & Ghazali, 2013) and on 2-class classification datasets (Hassim & Ghazali, 2012) has shown good exploration and exploitation capabilities in searching for optimal weights, with better accuracy results.
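The weight-optimization view described above can be sketched with a minimal ABC loop. This is a generic illustration of the employed/onlooker/scout phases, not the cited FLNN-ABC implementation; the sphere function stands in for an FLNN training-error objective, and all names and parameter values are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_minimize(objective, dim, n_food=10, limit=20, max_iter=100,
                 lower=-1.0, upper=1.0):
    """Minimal Artificial Bee Colony search over a weight vector.

    Each food source is a candidate weight set. Employed and
    onlooker bees perturb one dimension of a source toward or away
    from a random neighbor; a source that fails to improve `limit`
    times is abandoned and replaced by a scout.
    """
    foods = rng.uniform(lower, upper, (n_food, dim))
    fitness = np.array([objective(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)
    best = foods[fitness.argmin()].copy()
    best_fit = fitness.min()

    def try_neighbor(i):
        nonlocal best, best_fit
        k = rng.choice([j for j in range(n_food) if j != i])
        d = rng.integers(dim)
        cand = foods[i].copy()
        cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        f = objective(cand)
        if f < fitness[i]:  # greedy selection
            foods[i], fitness[i], trials[i] = cand, f, 0
            if f < best_fit:
                best, best_fit = cand.copy(), f
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):               # employed bee phase
            try_neighbor(i)
        probs = 1.0 / (1.0 + fitness)         # onlooker selection
        probs = probs / probs.sum()
        for _ in range(n_food):               # onlooker bee phase
            try_neighbor(rng.choice(n_food, p=probs))
        for i in range(n_food):               # scout bee phase
            if trials[i] > limit:
                foods[i] = rng.uniform(lower, upper, dim)
                fitness[i] = objective(foods[i])
                trials[i] = 0
    return best, best_fit

# Toy objective standing in for FLNN training error.
w, err = abc_minimize(lambda v: float(np.sum(v ** 2)), dim=4)
```

In the FLNN-ABC setting, `objective` would evaluate the network's classification error for a candidate weight vector, so the colony searches weight space directly and no gradient (and hence no BP pass) is needed.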
However, the random foraging behaviour of the employed bees in exploiting the FLNN weight parameters has resulted in poor classification accuracy when applied to multiclass classification problems.