G-HABC Algorithm for Training Artificial Neural Networks

Habib Shah, Rozaida Ghazali, Nazri Mohd Nawi, Mustafa Mat Deris
Copyright © 2012 | Pages: 19
DOI: 10.4018/jamc.2012070101

Abstract

Learning problems for Neural Networks (NNs) have been widely explored over the past two decades. Researchers have increasingly focused on population-based algorithms because of their nature-inspired processing. Population-based algorithms such as Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), and, more recently, the Hybrid Ant Bee Colony (HABC) algorithm offer a straightforward way to train NNs. These social-behaviour-based techniques are mostly used to find the best weight values and to escape local minima during NN learning. Typically, an NN trained by the traditional approach, namely the Backpropagation (BP) algorithm, suffers from difficulties such as trapping in local minima and slow convergence. A new method, the Global Hybrid Ant Bee Colony (G-HABC) algorithm, which can overcome these shortcomings of BP, is used to train NNs for the Boolean function classification task. The simulation results of the NN trained with the proposed hybrid method were compared with those of the Levenberg-Marquardt (LM) algorithm and the ordinary ABC. The results show that the proposed G-HABC algorithm provides better learning performance for NNs, with reduced CPU time and higher success rates.
Article Preview

1. Introduction

Nowadays, Neural Networks (NNs) are widely used in different tasks, such as linear and nonlinear modeling, prediction, and forecasting, mostly because of their generalization property (Ghazali, Hussain, & Liatsis, 2011; Husaini et al., 2011; Ghazali et al., 2008; Osamu, 1998; Yan & Saif, 1993). They are powerful and flexible tools that have been used successfully in various applications, including classification, statistical, biological, medical, industrial, mathematical, and software engineering problems (Curry & Rumelhart, 1990; Fionn, 1991; Thwin & Quah, 2005). Artificial Neural Networks carry out their training through parallel processing. NN tools can support many scientific research applications by providing the best network architecture, activation function, input pre-processing, and optimal weight values.

NN tools are attractive and accessible for mathematical and statistical modeling across varied data types. Their accuracy makes NNs appealing to analysts in diverse areas, for tasks such as image processing, scheduling, online identification, and approximation algorithms for machine scheduling (Glover & Laguna, 1989; Abido & Abdel-Magid, 1997; Kacem & Haouari, 2009).

Many training techniques with different architectures have been used for the parity problem and other Boolean function classification tasks (Iyoda, Nobuhara, & Hirota, 2003; Stork & Allen, 1992). These techniques are suitable for the parity problem but cannot cover other, more complex problems. Biological NNs can solve complex learning problems inherent in the optimization of intelligent actions. Finding a general algorithm that solves a larger set of problems of similar complexity, such as the XOR, Encoder-Decoder, and parity problems, is still a challenge for scientists.
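As an illustration of the Boolean benchmarks mentioned above, the N-bit parity task can be generated in a few lines (a minimal sketch; the function name and layout are our own, not taken from the paper):

```python
# Minimal sketch: generating the N-bit parity classification dataset,
# one of the Boolean benchmark tasks discussed in the text.
from itertools import product

def parity_dataset(n_bits):
    """Return (inputs, targets); target is 1 when the input has an odd number of 1s."""
    inputs = [bits for bits in product((0, 1), repeat=n_bits)]
    targets = [sum(bits) % 2 for bits in inputs]
    return inputs, targets

X, y = parity_dataset(2)  # 2-bit parity is exactly the XOR problem
```

The 2-bit case reduces to XOR, which is not linearly separable and therefore requires at least one hidden layer, which is what makes these tasks standard tests for NN training algorithms.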

Traditionally, NN models learn by changing the interconnection weights of their associated neurons. Backpropagation, Evolutionary Algorithms (EA), Swarm Intelligence (SI), Differential Evolution (DE), Hybrid Bee Ant Colony (HBAC), IABC-MLP, reinforcement learning, and, most recently, the HABC algorithm have been used for training multilayer perceptrons (Ilonen, Kamarainen, & Lampinen, 2003; Kiranyaz, Ince, Yildirim, & Gabbouj, 2009; Yin, Bhanu, Chang, & Dong, 2003; Pillai & Sheppard, 2011). However, the BP learning algorithm has some difficulties; in particular, it can become trapped in local minima, which affects NN performance (Gori & Tesi, 1992).
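The weight-update training described above can be sketched as plain gradient-descent backpropagation for a small sigmoid network on the XOR problem. This is an illustrative sketch, not the paper's implementation; the 2-2-1 network size, learning rate, and epoch count are our own assumptions:

```python
# Sketch of backpropagation (gradient descent on squared error)
# for a 2-2-1 sigmoid network on XOR. Hyperparameters are illustrative.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b_h = [random.uniform(-1, 1) for _ in range(2)]                       # hidden biases
w_ho = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b_o = random.uniform(-1, 1)                                           # output bias
lr = 0.5                                                              # learning rate

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]           # XOR

def forward(x):
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(2)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(2)) + b_o)
    return h, o

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_error = mse()
for epoch in range(5000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                          # output-layer delta
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):                                   # weight updates
            w_ho[j] -= lr * d_o * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(2):
                w_ih[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_o
final_error = mse()
```

The sketch also illustrates the weakness the text points out: depending on the random initial weights, this loop may settle into a local minimum of the error surface instead of the XOR solution, which is precisely the gap the population-based methods aim to close.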

To overcome the shortcomings of standard backpropagation, many approaches have been used, based on mathematical methods, local and global optimization, and population techniques. These include Particle Swarm Optimization (PSO), ACO, ABC-LM, ABC-MLP, HABC, and HBAC; recently, population-based and evolutionary algorithms have shown reliable performance (Blum & Socha, 2005; Imran, Manzoor, Ali, & Abbas, 2011; Peng, Wenming, & Jian, 2011; Zhang, Zhang, Lok, & Lyu, 2007).
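The ABC family of methods named above shares one core loop: employed, onlooker, and scout bee phases searching a population of candidate solutions. A minimal sketch of that loop follows, minimizing a simple sphere function as a stand-in for the NN training error; the colony size, abandonment limit, bounds, and cycle count are illustrative assumptions, not the paper's settings:

```python
# Minimal sketch of the basic Artificial Bee Colony (ABC) loop.
# The sphere function stands in for an NN error surface.
import random

def objective(sol):
    return sum(v * v for v in sol)  # stand-in for NN training error

DIM, SN, LIMIT, CYCLES = 3, 10, 20, 200
LOW, HIGH = -5.0, 5.0
random.seed(0)

foods = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(SN)]
trials = [0] * SN

def fitness(sol):
    return 1.0 / (1.0 + objective(sol))  # standard ABC fitness for f >= 0

def neighbour(i):
    # perturb one dimension toward/away from a random other food source
    k = random.choice([j for j in range(SN) if j != i])
    d = random.randrange(DIM)
    cand = foods[i][:]
    cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
    cand[d] = max(LOW, min(HIGH, cand[d]))
    return cand

def greedy(i, cand):
    if fitness(cand) > fitness(foods[i]):
        foods[i], trials[i] = cand, 0
    else:
        trials[i] += 1

best = min(foods, key=objective)[:]
initial_error = objective(best)
for _ in range(CYCLES):
    for i in range(SN):                      # employed bee phase
        greedy(i, neighbour(i))
    fits = [fitness(s) for s in foods]
    total = sum(fits)
    for _ in range(SN):                      # onlooker phase: roulette selection
        r, acc, i = random.uniform(0, total), 0.0, 0
        for j, f in enumerate(fits):
            acc += f
            if acc >= r:
                i = j
                break
        greedy(i, neighbour(i))
    for i in range(SN):                      # scout phase: abandon stale sources
        if trials[i] > LIMIT:
            foods[i] = [random.uniform(LOW, HIGH) for _ in range(DIM)]
            trials[i] = 0
    cycle_best = min(foods, key=objective)   # memorize best-so-far
    if objective(cycle_best) < objective(best):
        best = cycle_best[:]
```

To train an NN with such a method, the solution vector would hold all the network weights and the objective would be the network's training error, which is the general scheme the hybrid algorithms above build on.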

In this study, a new hybrid population-based algorithm, the Global Hybrid Ant Bee Colony (G-HABC), is used to overcome these weaknesses of BP. The experiments are carried out on Boolean function classification tasks, and the results are compared with those of the ABC and LM algorithms.
