An Improved PSO with Small-World Topology and Comprehensive Learning

Yanmin Liu, Ben Niu
Copyright: © 2014 | Pages: 16
DOI: 10.4018/ijsir.2014040102

Abstract

Particle swarm optimization (PSO) is a heuristic global optimization method based on swarm intelligence and has proven to be a powerful competitor to other intelligent algorithms. However, PSO can easily become trapped in a local optimum when solving complex multimodal problems. To improve PSO's performance, the authors propose an improved PSO based on a small-world network and a comprehensive learning strategy (SCPSO for short), in which the learning exemplar of each particle includes three parts: the global best particle (gbest), the particle's personal best (pbest), and the pbest of its neighborhood. Additionally, a random position around a particle is used to increase its probability of jumping to a promising region. These strategies maintain swarm diversity and discourage premature convergence. Tests on five benchmark functions show that SCPSO performs better than PSO and its variants. SCPSO is then used to determine the optimal parameters of the Van Genuchten model, and the experimental results demonstrate its good performance compared with other methods.
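
The three-part learning exemplar described above can be illustrated with a minimal velocity-update sketch. The coefficient values, the jump probability, and the Gaussian perturbation radius below are illustrative assumptions, not the settings used in the paper, and the small-world neighborhood that would supply the neighborhood pbest (nbest) is abstracted away.

    import numpy as np

    def scpso_like_update(x, v, pbest, nbest, gbest,
                          w=0.7, c1=1.2, c2=1.2, c3=1.2,
                          jump_prob=0.1, jump_radius=0.1, rng=None):
        """One velocity/position update with a three-part exemplar: pbest, neighborhood pbest, gbest.

        All parameter values are illustrative assumptions, not the paper's settings.
        """
        rng = np.random.default_rng() if rng is None else rng
        d = x.shape[0]
        v_new = (w * v
                 + c1 * rng.random(d) * (pbest - x)    # cognitive term: particle's own pbest
                 + c2 * rng.random(d) * (nbest - x)    # neighborhood term: best pbest among its neighbors
                 + c3 * rng.random(d) * (gbest - x))   # global term: gbest of the whole swarm
        x_new = x + v_new
        # With a small probability, perturb the position to a random point around the
        # particle, raising its chance of jumping into a promising region.
        if rng.random() < jump_prob:
            x_new = x_new + jump_radius * rng.standard_normal(d)
        return x_new, v_new
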

1. Introduction

Particle swarm optimization (PSO) is an evolutionary algorithm (EA) first proposed by Kennedy and Eberhart, based on the metaphor of the social behavior of bird flocking and fish schooling (Kennedy & Eberhart, 1995). PSO is easy to implement for solving optimization problems, but when solving multimodal problems it may easily become trapped in a local minimum. Furthermore, most real-world optimization problems are multimodal.

In a PSO population, each particle searches for a better position according to its own previous best success and the success of some other particles, as determined by the population topology, which strongly influences PSO's performance (Kennedy, 2002). Therefore, considerable research has been devoted to population topology. For example, Clerc indicated that a constriction factor may help ensure convergence (Clerc, 1999). Mendes and Kennedy introduced the fully informed PSO, in which all of a particle's neighbors are used to update its velocity (Mendes & Kennedy, 2004). Peram proposed the fitness-distance-ratio-based PSO (FDR-PSO) with near-neighbor interactions (Peram, 2003); when updating each dimension of a particle's velocity, FDR-PSO selects a particle that has a higher fitness value and is nearer to the particle being updated (a sketch of this selection rule follows this paragraph). Liang proposed an improved PSO called CLPSO, which uses a novel comprehensive learning strategy (Liang, 2006). Liu and Zhao proposed an improved PSO based on a dynamic neighborhood to improve particles' ability to escape from local optima (Liu & Zhao, 2013). Altogether, these improved PSOs have achieved satisfactory results, but they still fall short in convergence and accuracy, so there remains room for improvement.

Additionally, Jiang proposed a novel age-based particle swarm optimization with an age-group topology, in which the swarm is separated into age groups and an age-group-based parameter setting method is devised (Jiang, 2013). Lim proposed a new variant of particle swarm optimization with increasing topology connectivity, which increases a particle's topology connectivity over time and performs a shuffling mechanism (Lim, 2013). Maruta developed a particle swarm optimizer that reduces the probability of premature convergence to local optima by exploiting the particle's local social learning, based on the idea of a cyclic-network topology (Maruta, 2013). Zhang proposed an improved PSO to solve bilevel multi-objective programming problems, in which the algorithm directly simulates the decision process of bilevel programming through a global topology (Zhang, 2012). A modified hybrid of Nelder-Mead simplex search and PSO was proposed for solving parameter estimation problems, in which PSO adopts a special topology to improve the efficiency of the hybrid algorithm on engineering optimization problems (Zhang, 2012; Liu, 2012). A variant of the particle swarm optimizer based on simulating the topology of human social communication behavior was presented, in which each particle initially joins a default number of social circles and its learning exemplars include three parts to improve the algorithm's performance (Liu, 2012). Ghosh proposed a novel optimization technique hybridizing genetic algorithms and Lbest particle swarm optimization, in which a new topology, namely a 'dynamically varying sub-swarm', is incorporated into the search process and selected crossover and mutation techniques are used for generation updating (Ghosh, 2012).
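
To make the fitness-distance-ratio idea mentioned for FDR-PSO concrete, the following is a minimal sketch assuming a minimization problem; the function name, argument layout, and the small epsilon guard are illustrative assumptions rather than the exact formulation in Peram (2003).

    import numpy as np

    def fdr_exemplar(x_i, f_i, pbests, pbest_fits, eps=1e-12):
        """For each dimension, pick the pbest coordinate with the best fitness-distance ratio.

        x_i        : position of the particle being updated, shape (d,)
        f_i        : fitness of x_i (minimization assumed)
        pbests     : personal best positions of all particles, shape (n, d)
        pbest_fits : fitness values of those personal bests, shape (n,)
        """
        n_particles, n_dims = pbests.shape
        exemplar = np.empty(n_dims)
        for dim in range(n_dims):
            dist = np.abs(pbests[:, dim] - x_i[dim]) + eps   # distance along this dimension
            ratio = (f_i - pbest_fits) / dist                # fitness improvement per unit distance
            exemplar[dim] = pbests[np.argmax(ratio), dim]    # neighbor offering the best ratio
        return exemplar

In use, the returned per-dimension exemplar would replace (or supplement) the usual pbest/gbest attractors in the velocity update, pulling each dimension toward a nearby, fitter personal best.
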
