Global Artificial Bee Colony-Levenberg-Marquardt (GABC-LM) Algorithm for Classification

Habib Shah (Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja, Johor, Malaysia), Rozaida Ghazali (Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja, Johor, Malaysia), Nazri Mohd Nawi (Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja, Johor, Malaysia), Mustafa Mat Deris (Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja, Johor, Malaysia) and Tutut Herawan (Department of Mathematics Education, Universitas Ahmad Dahlan, Yogyakarta, Indonesia)
Copyright: © 2013 |Pages: 17
DOI: 10.4018/jaec.2013070106


The performance of Neural Networks (NNs) depends on the network structure, the activation function, and suitable weight values. For finding optimal weight values, computer scientists have recently shown interest in learning algorithms inspired by the behaviour of social insects. Chief among these are Ant Colony Optimization (ACO), the Artificial Bee Colony (ABC) algorithm, the Hybrid Ant Bee Colony (HABC) algorithm, and the Global Artificial Bee Colony (GABC) algorithm, which have been used to train the Multilayer Perceptron (MLP). This paper investigates a new hybrid technique called the Global Artificial Bee Colony-Levenberg-Marquardt (GABC-LM) algorithm. One of the crucial problems with the BP algorithm is that it can sometimes yield networks with suboptimal weights because of the presence of many local optima in the solution space. To overcome this, the GABC-LM algorithm is used in this work to train an MLP on a Boolean function classification task, and the performance of GABC-LM is benchmarked against MLP training with the standard LM, PSO, ABC, and GABC algorithms. The experimental results show that GABC-LM performs better than standard BP, ABC, PSO, and GABC on the classification task.
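The bee-colony search the abstract refers to can be sketched as follows. This is an illustrative sketch, not the paper's code: it shows the standard ABC candidate-solution update, plus the extra global-best attraction term that distinguishes the Gbest-guided (GABC) variant. The toy fitness function, population size, and the constant `C` are assumptions chosen for the example; in GABC-LM each "food source" would be a flattened vector of MLP weights and the fitness would be the network's classification error.

```python
import numpy as np

# Hedged sketch of the ABC / GABC search step (not the paper's code).
# ABC candidate update:   v_j = x_j + phi * (x_j - neighbour_j)
# GABC adds a term:             + psi * (gbest_j - x_j)
rng = np.random.default_rng(1)

def gabc_candidate(x, neighbour, gbest, C=1.5):
    """Produce a new candidate from food source x (GABC-style update)."""
    j = rng.integers(len(x))          # perturb one randomly chosen dimension
    phi = rng.uniform(-1.0, 1.0)      # standard ABC perturbation coefficient
    psi = rng.uniform(0.0, C)         # attraction toward the global best
    v = x.copy()
    v[j] = x[j] + phi * (x[j] - neighbour[j]) + psi * (gbest[j] - x[j])
    return v

def greedy_select(x, v, fitness):
    """Greedy selection: keep whichever solution has lower error."""
    return v if fitness(v) < fitness(x) else x

# Toy fitness: squared distance of a 4-dim "weight vector" from an optimum.
target = np.zeros(4)
fitness = lambda w: float(np.sum((w - target) ** 2))

pop = [rng.normal(0.0, 1.0, 4) for _ in range(5)]   # food sources
gbest = min(pop, key=fitness)
f0 = fitness(gbest)                                  # best initial fitness
for _ in range(200):
    for i, x in enumerate(pop):
        k = rng.integers(len(pop))                   # random neighbour index
        v = gabc_candidate(x, pop[k], gbest)
        pop[i] = greedy_select(x, v, fitness)
    gbest = min(pop, key=fitness)
f1 = fitness(gbest)                                  # best final fitness
print(f0, f1)
```

In the hybrid scheme described by the abstract, a global search of this kind would supply promising starting weights, which a local second-order method such as Levenberg-Marquardt then refines.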
Article Preview


Artificial Neural Networks (ANNs) are powerful tools for solving combinatorial problems such as prediction, forecasting and classification (Daqi & Yan, 2005; de A. Araújo, 2011; Ghazali, Jaafar Hussain, Mohd Nawi, & Mohamad, 2009). NNs are used extensively for solving problems such as continuous and discrete optimization, telecommunications fraud detection and clustering (Charalampidis & Muldrey, 2009; Hilas & Mastorocostas, 2008; Mielniczuk & Tyrcha, 1993). NNs are also applied to optimization and mathematical problems such as classification, object and image recognition, signal processing, temperature and weather forecasting, and bankruptcy prediction (Aussem, Murtagh, & Sarazin, 1994; Chaudhuri & Bhattacharya, 2000; Chen, Duan, Cai, & Liu, 2011; Uncini, 2003).

Several techniques are used to train ANNs for favourable performance, such as evolutionary algorithms (EA), multi-objective hybrid evolutionary algorithms (Qasem, Shamsuddin, & Zain, 2012; Tallón-Ballesteros & Hervás-Martínez, 2011), genetic algorithms (GA) (Blanco, Delgado, & Pegalajar, 2001; García-Pedrajas, Ortiz-Boyer, & Hervás-Martínez, 2006; Geretti & Abramo, 2011), particle swarm optimization (PSO) (Al-Shareef & Abbod, 2010; Hong-Bo, Yi-Yuan, Jun, & Ye, 2004; Hongwen & Rui, 2006), differential evolution (DE) (Slowik & Bialko, 2008; Subudhi & Jena, 2011), ant colony optimization (ACO) (Ashena & Moghadasi, 2011; Blum & Socha, 2005), and BP and improved BP algorithms (Nawi, Ransing, & Ransing, 2006; Yan, Zhongjun, & Jiayu, 2010). These techniques are used for initializing optimal weights and parameters, choosing the activation function, and selecting a proper network structure.

The main task of the BP algorithm is to update the network weights so as to minimize the output error, because the accuracy of any approximation depends upon the selection of proper weights for the NNs. BP has a high success rate in solving many complex problems, but it still has some drawbacks, especially in setting parameter values such as the initial connection weights, the learning rate, and the momentum. If the network topology is not carefully selected, the algorithm can get trapped in local minima, or it may suffer slow convergence or even network failure. To overcome the disadvantages of standard BP, many population-based global optimization techniques have been proposed, such as GA (Geretti & Abramo, 2011), improved BP (Nawi, et al., 2006), DE (Slowik & Bialko, 2008), BP-ant colony (Chengzhi, Yifan, Lichao, & Yang, 2008), and PSO (Hongwen & Rui, 2006).
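The weight-update task described above can be sketched with a minimal example: plain gradient-descent backpropagation training a small MLP on the Boolean XOR task, the kind of Boolean classification the paper uses as a benchmark. The architecture (2-3-1, sigmoid activations), learning rate, and iteration count are assumptions chosen for illustration, not the paper's settings; the random initial weights are exactly the quantity that swarm methods like GABC aim to choose more cleverly.

```python
import numpy as np

# Illustrative sketch (assumed settings, not the paper's): backpropagation
# on the Boolean XOR function with a 2-3-1 sigmoid MLP.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Random initial weights: the starting point BP descends from, and the
# reason an unlucky initialization can end in a local minimum.
W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros((1, 3))
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros((1, 1))
lr = 0.5                                  # learning rate (assumed value)

def forward(X):
    h = sigmoid(X @ W1 + b1)              # hidden-layer activations
    o = sigmoid(h @ W2 + b2)              # network output
    return h, o

_, o0 = forward(X)
loss0 = float(np.mean((o0 - y) ** 2))     # mean squared error before training

for _ in range(5000):
    h, o = forward(X)
    # Backpropagate the output error through both sigmoid layers.
    d_o = (o - y) * o * (1 - o)
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_o); b2 -= lr * d_o.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h); b1 -= lr * d_h.sum(axis=0, keepdims=True)

_, o1 = forward(X)
loss1 = float(np.mean((o1 - y) ** 2))     # mean squared error after training
print(loss0, loss1)
```

Whether such a run reaches the global optimum or stalls in a local one depends on the initial weights and learning rate, which is precisely the sensitivity the hybrid global-search approaches in this paragraph try to remove.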
