1. Introduction
Neural Networks (NNs) are now widely used in tasks such as linear and nonlinear modeling, prediction, and forecasting, largely because of their generalization ability (Ghazali, Hussain, & Liatsis, 2011; Husaini et al., 2011; Ghazali et al., 2008; Osamu, 1998; Yan & Saif, 1993). They are powerful and flexible tools that have been applied successfully in classification, statistical, biological, medical, industrial, mathematical, and software engineering applications (Curry & Rumelhart, 1990; Fionn, 1991; Thwin & Quah, 2005). Artificial Neural Networks learn through training techniques that exploit parallel processing. NN tools can support many scientific research applications, provided a suitable network architecture, activation function, input pre-processing, and optimal weight values are chosen.
NN tools are well suited to mathematical and statistical modeling problems involving varied data types. Their accuracy makes NNs attractive to analysts in diverse areas such as image processing, scheduling, online identification, and approximation algorithms for machine scheduling (Glover & Laguna, 1989; Abido & Abdel-Magid, 1997; Kacem & Haouari, 2009).
Many training techniques with different architectures have been used for the parity problem and other Boolean function classification tasks (Iyoda, Nobuhara, & Hirota, 2003; Stork & Allen, 1992). These techniques work well for the parity problem but do not generalize to other complex problems. Biological NNs can solve the complex learning problems inherent in the optimization of intelligent actions. Finding a general algorithm that solves a larger set of problems of similar complexity, such as the XOR, Encoder-Decoder, and parity problems, remains a challenge for scientists.
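To make the XOR challenge concrete, the following minimal sketch (illustrative only, not from the original paper; all weight values and names are assumptions) hard-codes the weights of a 2-2-1 perceptron with step activations. The hidden layer computes OR and AND of the two inputs, and the output unit fires when OR is true but AND is false, which is exactly XOR. This illustrates why XOR needs at least one hidden layer: no single linear threshold unit can separate its classes.

```python
import numpy as np

def step(x):
    # Heaviside step activation: 1 if input >= 0, else 0.
    return (x >= 0).astype(int)

# Hand-picked weights for a 2-2-1 network computing XOR:
# hidden unit 1 fires for OR(x1, x2), hidden unit 2 for AND(x1, x2);
# the output fires when OR is true but AND is false.
W_hidden = np.array([[1, 1],    # OR unit
                     [1, 1]])   # AND unit
b_hidden = np.array([-0.5, -1.5])
w_out = np.array([1, -1])
b_out = -0.5

def xor_net(x1, x2):
    h = step(W_hidden @ np.array([x1, x2]) + b_hidden)
    return int(step(np.array([w_out @ h + b_out]))[0])

for x1 in (0, 1):
    for x2 in (0, 1):
        print((x1, x2), "->", xor_net(x1, x2))
```

Training algorithms such as BP or the population-based methods discussed below must discover an equivalent weight configuration automatically, which is what makes parity problems a standard benchmark.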
Traditionally, NN models are trained by adjusting the interconnection weights of their associated neurons. Back-propagation (BP), Evolutionary Algorithms (EA), Swarm Intelligence (SI), Differential Evolution (DE), Hybrid Bee Ant Colony (HBAC), IABC-MLP, reinforcement learning, and, more recently, the HABC algorithm have all been used for training multilayer perceptrons (Ilonen, Kamarainen, & Lampinen, 2003; Kiranyaz, Ince, Yildirim, & Gabbouj, 2009; Yin, Bhanu, Chang, & Dong, 2003; Pillai & Sheppard, 2011). However, the BP learning algorithm has some difficulties; in particular, it can become trapped in local minima, which degrades NN performance (Gori & Tesi, 1992).
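The local-minima difficulty can be illustrated on a simple one-dimensional loss surface. The sketch below (a hypothetical example; the function and all parameters are assumptions, not taken from the paper) runs plain gradient descent from two starting points: the run started on the right gets stuck in a shallow local minimum, while the run started on the left reaches the lower global minimum. This is the same failure mode BP exhibits on non-convex NN error surfaces.

```python
def f(x):
    # A non-convex "loss" with a global minimum near x = -1.04
    # and a shallower local minimum near x = 0.96.
    return x**4 - 2 * x**2 + 0.3 * x

def df(x):
    # Analytic derivative of f.
    return 4 * x**3 - 4 * x + 0.3

def gradient_descent(x, lr=0.01, steps=5000):
    # Plain fixed-step gradient descent, as in vanilla BP.
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_bad = gradient_descent(2.0)    # starts in the basin of the local minimum
x_good = gradient_descent(-2.0)  # starts in the basin of the global minimum
print(f"from  2.0: x = {x_bad:.3f}, loss = {f(x_bad):.3f}")
print(f"from -2.0: x = {x_good:.3f}, loss = {f(x_good):.3f}")
```

Population-based methods such as ABC maintain many candidate solutions spread across the search space, which is precisely what helps them escape basins like the one that traps the first run here.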
To overcome the limitations of standard back-propagation, many approaches have been proposed, based on mathematical methods, local and global optimization, and population-based techniques. These include Particle Swarm Optimization (PSO), ACO, ABC-LM, ABC-MLP, HABC, and HBAC; recently, population-based and evolutionary algorithms have shown reliable performance (Blum & Socha, 2005; Imran, Manzoor, Ali, & Abbas, 2011; Peng, Wenming, & Jian, 2011; Zhang, Zhang, Lok, & Lyu, 2007).
In this study, a new hybrid population-based algorithm, named Global Hybrid Ant Bee Colony (G-HABC), is used to overcome the shortcomings of BP. The algorithm is evaluated on Boolean function benchmarks, and the results are compared with the ABC and LM algorithms.