Hybrid Honey Bees Meta-Heuristic for Benchmark Data Classification

Habib Shah, Nasser Tairan, Rozaida Ghazali, Ozgur Yeniay, Wali Khan Mashwani
Copyright: © 2019 | Pages: 17
DOI: 10.4018/978-1-5225-5832-3.ch008

Abstract

Bio-inspired methods include cuckoo search, fish schooling, and artificial bee colony (ABC) algorithms. These algorithms sometimes fail to reach the global optimum because of their reliance on randomization and their weak exploration and exploitation processes. Here, a hybrid of the global artificial bee colony and Levenberg-Marquardt algorithms, called GABC-LM, is proposed. The proposed GABC-LM uses a neural network to obtain accurate parameters, weights, and bias values for benchmark dataset classification. The performance of GABC-LM is benchmarked against NNs trained with the typical LM, PSO, ABC, and GABC methods. The experimental results show that the proposed GABC-LM performs better than standard BP, ABC, PSO, and GABC on the classification task.
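The chapter's implementation is not reproduced here, but the two-stage pattern the abstract describes can be sketched at a high level: a global bee-colony-style search supplies a starting weight vector, which a Levenberg-Marquardt step then refines locally. In the minimal Python sketch below the global stage is a simple random-sampling placeholder rather than the actual GABC procedure, and `residuals` is an assumed user-supplied function that returns the network's per-sample errors for a given weight vector.

```python
import numpy as np
from scipy.optimize import least_squares


def train_gabc_lm(residuals, dim, n_candidates=200, seed=0):
    """Illustrative two-stage training: global sampling, then LM refinement.

    `residuals(w)` must return a 1-D array of per-sample errors; it is an
    assumed helper, not part of the chapter's published code.
    """
    rng = np.random.default_rng(seed)
    # Stage 1 -- placeholder for the global bee-colony search:
    # sample candidate weight vectors and keep the one with the lowest error.
    candidates = rng.uniform(-1.0, 1.0, (n_candidates, dim))
    errors = [float(np.sum(residuals(w) ** 2)) for w in candidates]
    best = candidates[int(np.argmin(errors))]
    # Stage 2 -- Levenberg-Marquardt local refinement of the best candidate.
    result = least_squares(residuals, best, method="lm")
    return result.x
```

In practice, `residuals` would compute network outputs minus targets on the benchmark dataset; note that the Levenberg-Marquardt mode of `least_squares` requires at least as many residuals as weight parameters.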
Chapter Preview

Introduction

Artificial Neural Networks (ANNs) are powerful mathematical tools for solving complex linear and nonlinear engineering and economic problems such as prediction, forecasting, and classification (Chakravarty, Dash, Pandi, & Panigrahi, 2011; Ghazali, Jaafar Hussain, Mohd Nawi, & Mohamad, 2009). NNs are used extensively to solve both continuous and discrete problems, including telecommunications fraud detection and clustering (Charalampidis & Muldrey, 2009; Hilas & Mastorocostas, 2008; Rao, Satchidananda, & Rajib, 2012). They have also been applied to a range of optimization and mathematical problems such as classification, object and image recognition, signal processing, temperature and weather forecasting, and bankruptcy prediction (Aussem, Murtagh, & Sarazin, 1994; Bakhta & Ghalem, 2014; Behnam & Isa, 2013; Ch. Sanjeev Kumar, Ajit Kumar, Satchidananda, & Sung-Bae, 2013; Chen, Duan, Cai, & Liu, 2011).

The main task of the BP algorithm is to update the network weights so as to minimize the output error, because the accuracy of any approximation depends on selecting proper weights for the neural network (NN) (Ramakanta, Ravi, & Patra, 2010). BP has a high success rate in solving many complex problems, but it still has drawbacks, especially in setting parameter values such as the initial connection weights, the learning rate, and the momentum. If the network topology is not carefully selected, the algorithm can become trapped in local minima, converge slowly, or even fail to train the network. To overcome these disadvantages of standard BP, many global, population-based optimization techniques have been proposed, including GA (Geretti & Abramo, 2011), improved BP (Nawi, Ransing, & Ransing, 2006), DE (Slowik & Bialko, 2008), BP-ant colony (Chengzhi, Yifan, Lichao, & Yang, 2008), and PSO (Hongwen & Rui, 2006).
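For concreteness, the following minimal sketch shows the kind of gradient-descent weight update BP performs for a single-hidden-layer sigmoid network; the variable names, squared-error loss, and learning rate are illustrative assumptions, not the chapter's configuration.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def bp_update(W1, W2, x, target, lr=0.1):
    """One illustrative BP step for a one-hidden-layer sigmoid network."""
    # Forward pass
    h = sigmoid(W1 @ x)          # hidden activations
    y = sigmoid(W2 @ h)          # network output
    # Backward pass: error terms for output and hidden layers
    delta_out = (y - target) * y * (1 - y)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    # Gradient-descent weight updates
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2
```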

The Dynamic Swarm ABC algorithm, the typical ABC, ABC with embedded differential operators (Harish, Jagdish Chand, Arya, & Kusum, 2012; D. Karaboga & Akay, 2007; Tarun Kumar & Millie, 2011), the Hybrid Ant Bee Colony (HABC), the Genetic Algorithm with Back Propagation Neural Network, the Improved Artificial Bee Colony (IABC), G-HABC (Suruchi, 2016), and the Global Hybrid Ant Bee Colony (HABC) algorithm are population-based algorithms that can provide high-quality solutions to different mathematical problems by using techniques inspired by nature. A common feature of population-based algorithms is that a population of feasible solutions to the problem is modified by applying operators to the solutions according to their fitness, so the population is driven towards better regions of the solution space. Population-based optimization algorithms fall into two categories: evolutionary algorithms (EA) (Xinyan & Jianguo, 2011) and swarm-intelligence-based algorithms (Aydin, Wu, & Liang, 2010). In EAs, the main idea is to take the weight matrices of the ANN as individuals, change the weights through operations such as crossover and mutation, and use the error produced by the NN as the fitness measure that guides selection (Yan-fei & Xiong-min, 2010). Among swarm-intelligence-based algorithms, ABC has the advantages of global optimization and easy implementation. It has been used successfully to solve combinatorial optimization problems such as clustering and to train MLPs on XOR problems (Davidović, Šelmić, Teodorović, & Ramljak; Dervis Karaboga, Akay, & Ozturk, 2007; Habib Shah et al., 2017). The ABC algorithm is an easily understandable technique for training MLPs on classification problems: it uses randomly driven, nature-inspired search within a colony to find optimal weights for the NN.
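As an illustration of how ABC can search over MLP weights, the compressed sketch below treats each food source as a candidate weight vector and a caller-supplied `fitness` function (for example, the inverse of the network's error, which must return a positive value) as the nectar amount. The employed and onlooker phases are merged for brevity, so this is a simplified variant under those assumptions, not the chapter's exact algorithm.

```python
import numpy as np


def abc_train(fitness, dim, n_food=20, limit=50, max_cycles=200, bound=1.0, seed=0):
    """Simplified ABC search over NN weight vectors (food sources)."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(-bound, bound, (n_food, dim))   # candidate weight vectors
    fit = np.array([fitness(f) for f in foods])         # positive nectar amounts
    trials = np.zeros(n_food)
    for _ in range(max_cycles):
        # Employed and onlooker phases (merged for brevity): perturb one
        # dimension of a food source relative to a random neighbour.
        onlookers = rng.choice(n_food, n_food, p=fit / fit.sum())
        for i in np.concatenate([np.arange(n_food), onlookers]):
            k, j = rng.integers(n_food), rng.integers(dim)
            candidate = foods[i].copy()
            candidate[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            cf = fitness(candidate)
            if cf > fit[i]:
                foods[i], fit[i], trials[i] = candidate, cf, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that have not improved for `limit` trials.
        worn = trials > limit
        if worn.any():
            foods[worn] = rng.uniform(-bound, bound, (int(worn.sum()), dim))
            fit[worn] = [fitness(f) for f in foods[worn]]
            trials[worn] = 0
    return foods[np.argmax(fit)]                         # best weight vector found
```

A call such as `abc_train(lambda w: 1.0 / (1.0 + mlp_error(w)), dim=n_weights)` would search for weights that minimize a hypothetical `mlp_error` function computed on the training data.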
