Many-Objective Particle Swarm Optimization Algorithm Based on New Fitness Allocation and Multiple Cooperative Strategies

Weiwei Yu, Li Zhang, Chengwang Xie
DOI: 10.4018/IJCINI.20211001.oa29

Abstract

Many-objective optimization problems (MaOPs) are multi-objective optimization problems (MOPs) with more than three objectives. To solve MaOPs, a many-objective particle swarm optimization algorithm based on a new fitness assignment and multiple cooperative strategies (FAMSHMPSO) is proposed. First, this paper proposes a new fitness allocation method based on fuzzy information theory to enhance the convergence of the algorithm. Then, a new multi-criteria mutation strategy is introduced to perturb the population and improve the diversity of the algorithm. Finally, the external archive is maintained by a three-point shortest-path method, which improves the quality of the solutions. The performance of FAMSHMPSO is evaluated by comparing the mean, standard deviation, and IGD+ indicator of the objective values on the DTLZ test suite with different numbers of objectives against five representative many-objective evolutionary algorithms. The experimental results show that FAMSHMPSO has clear performance advantages in convergence, diversity, and robustness.

1. Introduction

In today's scientific research and engineering practice, the problems faced by decision makers are becoming increasingly complex and often involve multiple objectives at the same time. A problem of this type, in which multiple objectives must be optimized simultaneously, is called a multi-objective optimization problem (MOP) (Wang, Zhang, Li, Zhao et al., 2018; Wu et al., 2017). Generally, the objectives of a MOP conflict with one another, so it is difficult to obtain a single optimal solution, and this difficulty becomes more pronounced as the number of objectives increases. Therefore, a MOP has no unique optimal solution at which every objective attains its optimum simultaneously, but rather a set of compromise solutions, the Pareto solution set. Because MOP models are highly complex, general analytical methods are rarely effective, so many researchers have studied these problems (Nayyar, Garg, Gupta et al., 2018; Nayyar, Le, & Nguyen, 2018; Nayyar & Nguyen, 2018a), most of them using multi-objective optimization algorithms (Nayyar & Nguyen, 2018b; Nayyar & Singh, 2016) to approximate the Pareto solution set. Particle swarm optimization (PSO) (Kennedy & Eberhart, 1995) is a heuristic swarm-intelligence stochastic algorithm that simulates the evolution of natural biological groups. It has the advantages of fast convergence, simple parameter settings, and easy implementation. Because in single-objective optimization it is independent of the problem model, adapts during the optimization process, is implicitly parallel, and is robust on complex nonlinear problems, some scholars have begun to apply it to multi-objective optimization problems, where it has attracted widespread attention in the field.
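The canonical PSO update described above can be sketched in a few lines. The inertia weight and acceleration coefficients below (w, c1, c2) are illustrative defaults, not values taken from this paper:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO velocity/position update.

    positions, velocities, pbest: lists of D-dimensional lists (one per particle);
    gbest: the swarm's best known D-dimensional position.
    """
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            # Velocity blends inertia, attraction to the particle's own best,
            # and attraction to the swarm's global best.
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

When a particle already sits at both its personal best and the global best with zero velocity, the update leaves it in place, which is the expected fixed-point behavior of the scheme.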
So far, researchers have proposed a variety of multi-objective particle swarm optimization algorithms from different research backgrounds and perspectives. Among them, the decomposition-based D2MOPSO (Moubayed et al., 2014) and OMOPSO (Kaur & Kadam, 2018) are among the more classical algorithms. However, most work on particle swarms addresses low-dimensional multi-objective problems, in which the number of objectives is generally two or three. In practice, problems that require more objectives to be optimized simultaneously keep emerging. Researchers generally call a problem with four or more objectives a many-objective optimization problem (MaOP) (Li, Liang, Yang et al., 2019). Compared with a MOP, a MaOP is harder to solve, for the following reasons. For many-objective optimization, the search ability of these traditional algorithms is greatly reduced: as the dimension of the objective space increases, the number of non-dominated solutions in the population grows exponentially, even approaching the entire Pareto front. For example, for a MaOP with m objectives and k solutions distributed along each objective, about mk^(m-1) solutions are needed to represent the Pareto front. Pareto dominance is a strict ordering relation; when the dimension of the optimization problem is high, an evolutionary algorithm generates a large number of non-dominated solutions within a finite-size population. These non-dominated solutions lack a comparison criterion, making it difficult to choose among them, so the search ability of the algorithm is greatly weakened. Visualizing non-dominated solution sets for many-objective optimization is also a problem.
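The Pareto dominance relation underlying this discussion is straightforward to state in code. The following minimal sketch (assuming minimization of all objectives, as in the DTLZ suite) checks dominance and filters a set of objective vectors down to its non-dominated members; as the text notes, in high dimensions this filter tends to keep nearly every point:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the points not dominated by any other point in the set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For instance, among the vectors (1, 2), (2, 1), and (3, 3), the first two are mutually incomparable and both dominate the third, so only the first two survive the filter.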
The commonly used Cartesian coordinate system can represent at most three dimensions and is unsuitable for high-dimensional many-objective problems, which creates obstacles for the decision maker's final choice. Because the Pareto dominance relation degrades in high-dimensional objective spaces, the distribution-maintenance mechanism becomes the dominant factor in the algorithm's selection of individuals; however, a selection mechanism dominated by individual density information may fail to drive the approximate Pareto front toward the true Pareto front, and may even have a negative impact on the optimization process. Many-objective optimization also needs to compute more Pareto-optimal solutions to approximate the Pareto front, so the complexity of the algorithm is higher. Recent studies have also shown that traditional Pareto-dominance-based PSO algorithms perform even worse than random search when the number of objectives increases to ten or more.
