A Dynamic Multi-Swarm Particle Swarm Optimization With Global Detection Mechanism

Bo Wei, Yichao Tang, Xiao Jin, Mingfeng Jiang, Zuohua Ding, Yanrong Huang
DOI: 10.4018/IJCINI.294566

Abstract

To overcome the shortcomings of the standard particle swarm optimization (PSO) algorithm, such as premature convergence and low precision, a dynamic multi-swarm PSO with a global detection mechanism (DMS-PSO-GD) is proposed. In DMS-PSO-GD, the whole population is divided into two kinds of sub-swarms: several equal-sized dynamic sub-swarms and one global sub-swarm. The dynamic sub-swarms achieve information interaction and sharing among themselves through a random regrouping strategy. The global sub-swarm evolves independently and learns from the optimal individual of the dynamic sub-swarm with dominant characteristics. During the evolution of the population, the variances and average fitness values of the dynamic sub-swarms are used to measure the distribution of the particles, by which the dominant sub-swarm and its optimal individual can be easily detected. Comparison results between DMS-PSO-GD and five other well-known algorithms show that it achieves superior performance on different types of functions.

Introduction

Particle Swarm Optimization (PSO) is a swarm intelligence optimization algorithm (Kennedy & Eberhart, 1995). PSO originated from research imitating the foraging behavior of bird flocks, whose rules were then abstracted and applied to solving optimization problems. In PSO, the birds of the real world are regarded as particles without volume, and each particle of the swarm represents a candidate solution to the optimization problem. The intelligence of PSO arises from the simple updating scheme of the particles and their mode of information interaction. Owing to its simple theory, easy implementation, and few required parameters, PSO has received widespread attention and has been applied in various practical fields since it was proposed (Li, Feng, Chen, et al., 2020; Li, Wang, Li, 2019; Wang, Zhua, Li, et al., 2020).
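The canonical update scheme described above can be sketched as follows. This is a minimal illustration of standard PSO, not the algorithm proposed in this article; the inertia weight `w`, acceleration coefficients `c1`, `c2`, and all other parameter values are illustrative choices, not settings reported by the authors.

```python
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize `objective` with a basic global-best PSO."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()        # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive (pbest) + social (gbest) terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: minimize the sphere function, whose optimum is 0 at the origin.
best_x, best_f = pso(lambda z: np.sum(z**2))
```

Because every particle learns from a single global best, the swarm can collapse onto one basin of attraction; this is the diversity loss that motivates the multi-swarm variants discussed next.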

However, the standard PSO has some shortcomings, such as poor swarm diversity, premature convergence, and a tendency to fall into local extremes, which greatly limit its development. To overcome these disadvantages, scholars have carried out extensive research on the idea of multiple swarms. Niu et al. (Niu, Zhu, He, et al., 2007) proposed a multi-swarm cooperative PSO (MCPSO). MCPSO divides the population into one master population and many subordinate populations; the particles of the master population update their trajectories according to both their own experience and the best particle's information from the subordinate populations. Cheung et al. (Cheung, Ding, Shen, et al., 2014) proposed an improved PSO based on a multi-swarm and heterogeneous search strategy (MsPSO). In MsPSO, the population is divided into four sub-swarms: two are used for local search, while the other two are used for global search and adaptive population adjustment. Building on MsPSO, Ye et al. (Ye, Feng, Fan, et al., 2017) proposed a multi-swarm PSO with a dynamic learning strategy (PSO-DLS). In PSO-DLS, the particles of each sub-swarm are marked as either ordinary particles or communication particles, which focus on improving the global search ability and the local search ability of the population, respectively.

However, none of the algorithms mentioned above takes into account the information interaction between sub-swarms, making it difficult to utilize superior information effectively. In response to this problem, Liang et al. (Liang & Suganthan, 2005) proposed a dynamic multi-swarm PSO (DMS-PSO). In DMS-PSO, the population is divided into multiple sub-swarms of equal size, which are regrouped after a finite number of iterations. In this way, the information of dominant individuals can be transferred among different sub-swarms, which avoids premature convergence of the population and effectively improves the global search ability of the algorithm. However, this strategy introduces another problem: too many regrouping operations between sub-swarms degrade the local search ability of the algorithm. How to enhance the local search ability of the algorithm while ensuring effective utilization of superior information has therefore become the focus of research.
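The random regrouping idea behind DMS-PSO can be sketched as follows: every `R` iterations the particle indices are shuffled and re-partitioned into equal-sized sub-swarms, so that each sub-swarm's best-so-far information migrates across the population. This is a hedged illustration under assumed values; the regrouping period `R`, the sub-swarm size, and the helper name `regroup` are all hypothetical, and the per-sub-swarm PSO updates (which would use each group's local best rather than a single global best) are elided.

```python
import numpy as np

def regroup(n_particles, sub_size, rng):
    """Shuffle particle indices and partition them into equal-sized sub-swarms."""
    order = rng.permutation(n_particles)
    return [order[i:i + sub_size] for i in range(0, n_particles, sub_size)]

rng = np.random.default_rng(1)
n_particles, sub_size, R = 12, 4, 5   # illustrative sizes and regrouping period
groups = regroup(n_particles, sub_size, rng)

for it in range(20):
    # ... each sub-swarm would run its own PSO velocity/position updates here,
    # guided by its local best instead of one global best ...
    if (it + 1) % R == 0:             # regroup every R iterations
        groups = regroup(n_particles, sub_size, rng)
```

Frequent regrouping spreads good information quickly but keeps interrupting each sub-swarm's local refinement, which is exactly the trade-off the proposed DMS-PSO-GD aims to resolve.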
