Cooperative Asynchronous Parallel Particle Swarm Optimization for Large Dimensional Problems

Farid Bourennani (University of Jeddah, Jeddah, Saudi Arabia; University of Ontario Institute of Technology Oshawa, Canada)
Copyright: © 2019 |Pages: 20
DOI: 10.4018/IJAMC.2019070102


Metaheuristics have been very successful at solving NP-hard optimization problems. However, some problems, such as big optimization problems, are too expensive to solve using classical computing. Naturally, the increasing availability of high performance computing (HPC) makes it an appropriate alternative for solving such complex problems. In addition, the use of HPC can lead to more accurate metaheuristics if their internal mechanisms are enhanced. Particle swarm optimization (PSO) is one of the best-known metaheuristics, and yet few parallel versions of PSO take advantage of HPC via algorithmic modifications. Therefore, in this article, the authors propose a cooperative asynchronous parallel PSO algorithm (CAPPSO) with a new velocity calculation that utilizes a cooperative model of sub-swarms. The asynchronous communication among the sub-swarms makes CAPPSO faster and more accurate than the master-slave PSO (MS-PSO) on the tested big problems.
1. Introduction

Many real-life optimization problems in science, engineering, economics, and business, among other fields, are complex and difficult to solve by exact optimization methods because of the partial or complete absence of ideal mathematical properties such as continuity, differentiability, convexity, and unimodality (Boussaïd et al., 2013). Therefore, bio-inspired methods such as metaheuristics are the main alternatives for solving such complex problems (Osman & Kelly, 2012; Zhou et al., 2013). Metaheuristics are nature-inspired computational methods that optimize a problem by iteratively trying to improve one or more candidate solutions; they do not guarantee optimality, but they provide optimal or near-optimal solutions (Dreo et al., 2006). Metaheuristics have gained great popularity in the past 20 years because of their strong ability to solve complex problems characterized by non-convexity, large dimensionality, multi-modality, and hard constraints, among others (Alba, 2005). They have been used in a large number of applications ranging from software engineering, energy systems design, bioinformatics, and telecommunications to finance (Gabriel & Alba, 2011). A description of the best-known metaheuristics can be found in (Talbi, 2009).

Metaheuristics are of two kinds: trajectory-based and population-based (Blum & Roli, 2003). They differ in the number of tentative solutions generated at each iteration. A trajectory-based method begins with a single initial solution that is iteratively replaced by a better solution until the metaheuristic stops. Usually, such methods have strong exploitation properties, as they quickly find a locally optimal solution. A population-based metaheuristic begins with an entire population of randomly generated solutions that are enhanced throughout the iterative process. At every iteration, the entire population, or a portion of it, is replaced by newly generated solutions of better quality. Population-based metaheuristics are called exploration-oriented methods because they possess strong diversification properties in the search space, but they require more computation, since a larger population implies more fitness evaluations per iteration.
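The contrast between the two kinds can be illustrated with a minimal sketch. The two search loops below are generic stand-ins (simple stochastic hill climbing and a naive perturbation-based population search), not any specific metaheuristic from the literature; the `sphere` test function and all parameter values are illustrative assumptions.

```python
import random

def hill_climb(f, x0, step=0.1, iters=1000):
    """Trajectory-based: a single solution is iteratively replaced
    by a better neighbour (simple stochastic hill climbing)."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:  # keep the move only if it improves the solution
            x, fx = cand, fc
    return x, fx

def population_search(f, dim, pop_size=20, iters=200, low=-5.0, high=5.0):
    """Population-based: a whole set of random solutions is refined;
    each iteration replaces individuals with better perturbed copies."""
    pop = [[random.uniform(low, high) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        for i, ind in enumerate(pop):
            cand = [xi + random.uniform(-0.1, 0.1) for xi in ind]
            if f(cand) < f(ind):
                pop[i] = cand  # only part of the population may change
    best = min(pop, key=f)
    return best, f(best)

# Illustrative unimodal test function (minimum at the origin)
def sphere(x):
    return sum(xi * xi for xi in x)
```

Note how the population loop performs `pop_size` fitness evaluations per iteration where the trajectory loop performs one, which is exactly the extra computational cost mentioned above.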

Fast hardware development has encouraged various research communities to develop simulation problems that are computationally expensive, either because of costly models (e.g., in mechanics) and/or large dimensionality (Xiaodong et al., 2013). For such optimization problems, the fitness evaluation is usually the most computationally expensive part of an optimization algorithm (Alba et al., 2013), and this cost further increases with the dimensionality (number of variables). Therefore, high performance computing (HPC) arises naturally when dealing with population-based metaheuristics, because every individual is independent and can be processed on a separate processor. For example, the big (large-scale) benchmark optimization problems proposed in the CEC2013 competition (Xiaodong et al., 2013), which are used for the experiments in this paper, can take several hundred hours to solve on a personal computer. Also, the landscape of a problem has an important influence on its complexity and might require additional iterations, i.e., additional computation, to find the optimal solution (Alba et al., 2013). Population-based metaheuristics are ideal for parallelization for two reasons: 1) the candidate solutions (individuals) are independent from one another, which makes their distribution/parallelization easy; and 2) the cooperation among individuals of a population generates solutions of better quality (Talbi, 2013; Monemi et al., 2017).
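Because individuals are independent, their fitness evaluations can be farmed out in parallel, as in the master-slave model of Figure 1. The sketch below is a minimal illustration of that idea using Python's standard `multiprocessing` module; the `sphere` fitness function and the population sizes are illustrative assumptions, not the paper's actual benchmark functions.

```python
import random
from multiprocessing import Pool

def sphere(x):
    # Stand-in for an expensive fitness function; in big optimization
    # problems this call dominates the total runtime.
    return sum(xi * xi for xi in x)

def evaluate_population(population, workers=4):
    """Master-slave evaluation: the master distributes the independent
    individuals to worker (slave) processes and collects their fitnesses."""
    with Pool(processes=workers) as pool:
        return pool.map(sphere, population)

if __name__ == "__main__":
    # Each individual is a candidate solution vector; all can be
    # evaluated concurrently because they do not depend on each other.
    pop = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(8)]
    fitnesses = evaluate_population(pop)
```

In this synchronous master-slave scheme the master waits for all workers before the next iteration, which is the bottleneck that asynchronous schemes such as CAPPSO aim to avoid.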

Figure 1. Master-slave model of PPSO

