Optimizing Connection Weights in Neural Networks Using Hybrid Metaheuristics Algorithms

Rabab Bousmaha, Reda Mohamed Hamou, Abdelmalek Amine
Copyright © 2022 | Pages: 21
DOI: 10.4018/IJIRR.289569

Abstract

The learning process of artificial neural networks is an important and complex task in supervised learning. The main difficulty in training a neural network is fine-tuning the best set of control parameters, namely the connection weights and biases. This paper presents a new training method based on hybridizing particle swarm optimization with Multi-Verse Optimization (PMVO) to train feedforward neural networks. The hybrid algorithm searches the solution space more effectively, which reduces the risk of becoming trapped in local minima. The performance of the proposed approach was compared with five evolutionary techniques as well as standard backpropagation with momentum and an adaptive learning rate. The comparison was benchmarked and evaluated on six bio-medical datasets. The results of the comparative study show that PMVO outperformed the other training methods on most datasets and can serve as an alternative to them.

Introduction

Artificial neural networks (ANNs) are among the most important data mining techniques and have been successfully applied in many domains. The feedforward multilayer perceptron (MLP) is one of the best-known neural networks. An MLP consists of neurons organized into input, hidden and output layers: the first layer receives the input, the hidden layer transforms it, and the output layer produces the prediction. The success of an MLP generally depends on the training process, which is determined by the training algorithm. The objective of a training algorithm is to find the combination of connection weights and biases that minimizes the classification error.
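To make the training objective concrete, the sketch below shows a forward pass of a single-hidden-layer MLP in NumPy; the layer sizes and sigmoid activation are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer MLP with sigmoid activations."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hidden = sigmoid(x @ W1 + b1)       # input layer -> hidden layer
    output = sigmoid(hidden @ W2 + b2)  # hidden layer -> output layer
    return output

# Illustrative shapes: 4 inputs, 5 hidden neurons, 1 output, 10 samples
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)
y_hat = mlp_forward(rng.normal(size=(10, 4)), W1, b1, W2, b2)
```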

Training algorithms can be classified into two classes: gradient-based and stochastic search methods. Backpropagation (BP) and its variants are gradient-based methods and are among the most popular techniques used to train MLP neural networks. Gradient-based methods have several drawbacks, such as slow convergence, high dependency on the initial values of the weights and biases, and a tendency to become trapped in local minima (Zhang, Zhang, Lok, & Lyu, 2007). To address these problems, stochastic search methods such as metaheuristics have been proposed as alternatives for training feedforward neural networks. Metaheuristics have many advantages: they apply to any type of neural network with any activation function (Kiranyaz, Ince, Yildirim, & Gabbouj, 2009), provide acceptable solutions within a reasonable time for complex and difficult problems (Raidl, 2006), and are particularly useful for large, complex problems that generate many local optima (Kenter et al., 2018; Wang, Li, Huang, & Lazebnik, 2019).
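A metaheuristic trainer typically treats the flattened vector of all weights and biases as one candidate solution and scores it by the network's error on the training set. A minimal sketch of such a fitness function, assuming the hypothetical mlp_forward helper from the previous sketch and mean squared error as the objective:

```python
import numpy as np

def unpack(vec, n_in=4, n_hid=5, n_out=1):
    """Split a flat candidate vector into MLP weight matrices and bias vectors."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def fitness(vec, X, y):
    """MSE of the MLP encoded by `vec` on training data (lower is better)."""
    W1, b1, W2, b2 = unpack(vec)
    y_hat = mlp_forward(X, W1, b1, W2, b2)  # mlp_forward from the sketch above
    return np.mean((y_hat - y) ** 2)
```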

Metaheuristics can be divided into single-solution and population-based algorithms. Population-based algorithms fall into two groups: swarm intelligence and evolutionary algorithms. Among swarm intelligence algorithms, several authors have proposed Particle Swarm Optimization (PSO) as a training method. The biggest challenges with PSO are the poor compromise between exploration and exploitation and the limited diversity of the population. Some works have tried to address these issues through learning approaches, parameter settings and hybridized methods. For example, several works attempted to fine-tune and modify parameters through Gaussian adaptation, memory adaptation or fuzzy-based methods, while other works sought to avoid premature convergence through hybrid methods such as LFPSO (Haklı & Uğuz, 2014) and LPSONS (Tarkhaneh & Shen, 2019).

Particle swarm optimization is a technique inspired by bird flocking and fish schooling. In PSO, each individual is a particle (a bird or fish) in the search space with a position and a velocity. At each iteration, a particle is drawn both toward the best position it has found so far and toward the best position found by the whole swarm, as in the sketch below.
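The following is a minimal sketch of the standard PSO velocity and position update; the inertia and acceleration coefficients shown are common defaults, not the parameter settings used in the paper.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: update velocities and positions of all particles.

    pos, vel, pbest : arrays of shape (n_particles, n_dims)
    gbest           : array of shape (n_dims,), best position of the swarm
    """
    r1 = np.random.rand(*pos.shape)
    r2 = np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```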

In this paper, we propose a new training algorithm based on hybrid Particle Swarm Optimization (PSO) with Multi-Verse Optimization (MVO) to train MLP neural networks.

Although a wide variety of swarm-based and evolutionary algorithms have been investigated and deployed in the MLP training literature, one may ask whether new training algorithms still need to be developed. The answer is yes: local minimum issues remain, and the no-free-lunch (NFL) theorem states that no single optimization algorithm is superior for all optimization problems.

Training an MLP is itself an optimization problem that varies from one dataset to another (Faris et al., 2016).

Based on these reasons, this paper presents a new training approach, called PMVO, that combines particle swarm optimization (PSO) with Multi-Verse Optimization (MVO) to train feedforward neural networks (FFNNs). The algorithm can avoid local minima, promote global search, balance exploration and exploitation, and improve convergence speed. The proposed trainer was applied to nine datasets.
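For reference, MVO perturbs candidate solutions ("universes") around the current best universe through a wormhole mechanism, with the wormhole existence probability (WEP) growing and the travelling distance rate (TDR) shrinking over iterations. The sketch below follows the standard MVO formulation of that move; how PMVO interleaves it with the PSO update shown earlier is defined in the full paper, so this is only an illustration of the ingredient, not the proposed hybrid itself.

```python
import numpy as np

def mvo_wormhole(universe, best, lb, ub, wep, tdr):
    """Apply MVO's wormhole move to one candidate solution (universe).

    universe, best : arrays of shape (n_dims,); lb, ub : scalar bounds
    wep, tdr       : wormhole existence probability, travelling distance rate
    """
    new = universe.copy()
    for j in range(universe.size):
        if np.random.rand() < wep:  # a wormhole exists for this dimension
            step = tdr * ((ub - lb) * np.random.rand() + lb)
            new[j] = best[j] + step if np.random.rand() < 0.5 else best[j] - step
    return np.clip(new, lb, ub)
```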

Moreover, the application of the trainer was investigated on bio-medical datasets. The performance of PMVO was compared with five well-known metaheuristic trainers from the literature: PSO (Mendes, Cortez, Rocha, & Neves, 2002), MFO (Yamany, Fawzy, Tharwat, & Hassanien, 2015), MVO (Faris et al., 2016), WOA (Aljarah et al., 2018), and HACPSO (Khan et al., 2019), in terms of accuracy, mean square error (MSE), F-measure, specificity, sensitivity, and precision. The Friedman statistical test shows that the proposed training algorithm outperforms the other training algorithms.
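The reported classification metrics all derive from the binary confusion matrix; the sketch below uses the standard definitions (it is not code from the paper, and it assumes none of the denominators are zero).

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, precision and F-measure from binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)             # recall on the positive class
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f_measure
```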
