A Hybrid Approach Based on Genetic Algorithm and Particle Swarm Optimization to Improve Neural Network Classification

Nabil M. Hewahi (Department of Computer Science, University of Bahrain, Alsakheer, Bahrain) and Enas Abu Hamra (Islamic University of Gaza, Gaza Strip, Palestine)
Copyright: © 2017 |Pages: 21
DOI: 10.4018/JITR.2017070104

Abstract

The Artificial Neural Network (ANN) has played a significant role in many areas because of its ability to solve complex problems that mathematical methods have failed to solve. However, it has shortcomings that can cause it to stop improving or to decrease its result accuracy. In this research the authors propose a new approach combining the particle swarm optimization algorithm (PSO) and the genetic algorithm (GA) to increase the classification accuracy of an ANN. The proposed approach utilizes the advantages of both PSO and GA to overcome the local minima problem of the ANN, which prevents the ANN from improving its classification accuracy. The algorithm starts by applying the backpropagation algorithm, then repeatedly applies GA followed by PSO until the optimum classification is reached. The proposed approach is domain independent and has been evaluated by applying it to nine datasets with various domains and characteristics. A comparative study has been performed between the authors' proposed approach and previous approaches; the results show the superiority of the proposed approach.
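The training loop described in the abstract — backpropagation first, then alternating GA and PSO passes that refine the network weights until no further improvement — can be sketched as follows. This is an illustrative skeleton only: the fitness function is a toy stand-in for the ANN's classification accuracy, and the operator settings (population sizes, inertia and acceleration coefficients, mutation scale) are assumptions, not the authors' exact implementation.

```python
import random

def fitness(w):
    # Toy stand-in for "classification accuracy of the ANN with weights w":
    # higher is better, with the maximum at w = (1, 1, 1).
    return -sum((x - 1.0) ** 2 for x in w)

def ga_pass(best, pop_size=20, gens=30, sigma=0.3):
    # Mutation-only GA seeded around the current best weight vector.
    pop = [[x + random.gauss(0, sigma) for x in best] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the fitter half
        pop = parents + [[x + random.gauss(0, sigma) for x in p] for p in parents]
    return max(pop + [best], key=fitness)       # never return worse than input

def pso_pass(best, n=20, iters=30, w=0.7, c1=1.4, c2=1.4):
    # Standard global-best PSO, with the swarm seeded near the current best.
    dim = len(best)
    pos = [[x + random.gauss(0, 0.3) for x in best] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest + [best], key=fitness)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=fitness)
    return max([gbest, best], key=fitness)

random.seed(0)
weights = [0.0, 0.0, 0.0]      # stand-in for the backpropagation-trained weights
for _ in range(5):             # repeat GA followed by PSO, as in the abstract
    weights = pso_pass(ga_pass(weights))
```

Because each pass returns the better of its result and its input, the loop can only keep or improve the current fitness, mirroring the paper's strategy of using GA and PSO to escape the local minima where backpropagation stalls.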
Article Preview

2. Neural Network

The artificial neural network is an important part of artificial intelligence (Wang & Li, 2010). It is a mathematical model that mimics the structure and function of the human brain (Chogumaira & Hiyama, 2009; Jia & Zhu, 2009). The human brain consists of billions of interconnected neurons that communicate with each other through electrical signals (Haron et al., 2012). An ANN, like the human brain, consists of simple processing units, called neurons, organized in layers and connected to each other through connection weights and threshold values for information transmission and processing. The weights and thresholds are adjusted automatically during the learning process (Zhang et al., 2009; Shenglong & Tonghui, 2012). The way an ANN learns a mapping from inputs to outputs is similar to the way the human brain learns from experience (Miao et al., 2010).

There are many different ANN structures. One common structure is the Multi-Layer Perceptron (MLP), a feedforward type of neural network. The typical MLP network model consists of three categories of neurons: input neurons, hidden neurons, and output neurons. Each neuron belongs to one layer and is connected to all neurons of the adjacent layer. The operation of a typical MLP network can be divided into two phases: training and testing. The MLP network must first be trained for its specific purpose using a learning algorithm such as backpropagation. After training, the MLP network can be used to generate outputs (Al-Shareef & Abbod, 2010).
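The MLP described above can be sketched as a single-hidden-layer network trained with backpropagation. The layer sizes, learning rate, sigmoid activation, and XOR task below are illustrative assumptions for the sketch, not details taken from the article.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Minimal feedforward MLP with one hidden layer, trained by backpropagation."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # Connection weights between adjacent layers, plus bias (threshold) terms.
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)       # hidden activations
        self.y = sigmoid(self.h @ self.W2 + self.b2)  # output activations
        return self.y

    def train_step(self, X, t):
        # One backpropagation step on squared error; returns the mean error.
        y = self.forward(X)
        d_out = (y - t) * y * (1 - y)                 # output-layer delta
        d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)  # hidden-layer delta
        self.W2 -= self.lr * self.h.T @ d_out
        self.b2 -= self.lr * d_out.sum(axis=0)
        self.W1 -= self.lr * X.T @ d_hid
        self.b1 -= self.lr * d_hid.sum(axis=0)
        return float(np.mean((y - t) ** 2))

# Training phase: fit XOR; testing phase: query the trained network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
net = MLP(n_in=2, n_hidden=4, n_out=1)
losses = [net.train_step(X, t) for _ in range(5000)]
pred = (net.forward(X) > 0.5).astype(int)
```

The two phases in the text map directly onto the code: `train_step` implements the training phase (weights and thresholds adjusted automatically from the error signal), and calling `forward` on new inputs is the testing phase.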
