A Novel Multi-Objective Competitive Swarm Optimization Algorithm

Prabhujit Mohapatra, Kedar Nath Das, Santanu Roy, Ram Kumar, Nilanjan Dey
Copyright: © 2020 |Pages: 16
DOI: 10.4018/IJAMC.2020100106

Abstract

In this article, a new algorithm, the multi-objective competitive swarm optimizer (MOCSO), is introduced to handle multi-objective problems. The algorithm is principally motivated by the competitive swarm optimizer (CSO) and the NSGA-II algorithm. In MOCSO, a pairwise competition scenario is used to establish the dominance relationship between two particles in the population. In each pairwise competition, the particle that dominates the other is considered the winner and the other is designated the loser. The loser particle then learns from the winner of its competition. Since the underlying CSO algorithm does not use memory to track global-best or personal-best particles, MOCSO needs no external archive to store elite particles. The experimental results and statistical tests confirm the superiority of MOCSO over several state-of-the-art multi-objective algorithms on benchmark problems.
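The pairwise competition described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dominance test is standard Pareto dominance for minimization, the loser update follows the original CSO learning rule, and the coefficient `phi` and all function names are assumptions made here for the sketch.

```python
import random

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def compete(x_w, x_l, v_l, mean_x, phi=0.1):
    """CSO-style learning step: the loser x_l (with velocity v_l)
    moves toward the winner x_w and the swarm mean mean_x.
    r1, r2, r3 are fresh uniform random coefficients per dimension."""
    new_v, new_x = [], []
    for d in range(len(x_l)):
        r1, r2, r3 = random.random(), random.random(), random.random()
        v = r1 * v_l[d] + r2 * (x_w[d] - x_l[d]) + phi * r3 * (mean_x[d] - x_l[d])
        new_v.append(v)
        new_x.append(x_l[d] + v)
    return new_x, new_v
```

In a full MOCSO loop, the population would be shuffled into pairs each generation, `dominates` would decide each pair's winner, and `compete` would update only the losers; how non-dominated (tied) pairs are resolved is not specified here.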

Introduction

The general form of a multi-objective problem can be represented as follows:

\[
\min_{x} \; F(x) = \big(f_1(x), f_2(x), \dots, f_m(x)\big) \quad \text{subject to } x \in \Omega \tag{1}
\]

where \(x = (x_1, x_2, \dots, x_n)\) is the decision variable in the search space \(\Omega\). The objective vector \(F \colon \Omega \to \Lambda\) maps the decision variable \(x\) in \(\Omega\) to the \(m\) objective functions in the objective space \(\Lambda \subseteq \mathbb{R}^m\).
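As a concrete instance of (1), Schaffer's classic one-variable test problem maps a decision variable \(x\) to \(m = 2\) conflicting objectives; the choice of this particular problem is ours, for illustration only.

```python
def schaffer(x):
    """Schaffer's bi-objective test problem: F(x) = (f1, f2) with
    f1 = x^2 and f2 = (x - 2)^2. The objectives conflict, since
    f1 is minimized at x = 0 while f2 is minimized at x = 2."""
    return (x ** 2, (x - 2) ** 2)
```

Every \(x\) between 0 and 2 is a trade-off here; for example `schaffer(1)` returns `(1, 1)`, which neither endpoint dominates.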

Many real-life optimization problems of the form (1) require optimizing several conflicting objectives simultaneously. Because of the trade-offs between them, improving one objective typically degrades another, so no single solution can optimize all objectives at once. Such problems are called multi-objective optimization problems, and their optimal trade-off solutions are called Pareto-optimal solutions. In practice it is not possible to collect every Pareto-optimal solution, so an approximation of this set, called the Pareto front, is formed instead. These problems are usually treated as a distinct class of optimization problems and call for dedicated techniques. Classical optimization techniques convert the multi-objective problem into a number of single-objective problems and solve them separately, collecting one trade-off solution from each to build the Pareto front. The drawback of this methodology is that it must be executed several times, yielding one Pareto-optimal solution per run. It is therefore desirable to obtain all the Pareto-optimal solutions in a single run, and evolutionary algorithms, by virtue of their population-based nature, offer exactly this possibility.
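The classical conversion described above can be sketched as weighted-sum scalarization: each weight vector defines one single-objective problem, and each solve contributes a single point to the approximated front. A brute-force search over candidates stands in for a proper single-objective solver here, and the problem and weights are illustrative choices, not from the article.

```python
def weighted_sum_front(objectives, weights_list, candidates):
    """Approximate a Pareto front by solving one scalarized problem
    per weight vector: minimize sum_i w_i * f_i(x) over candidates.
    Each scalarized 'run' yields exactly one trade-off solution."""
    front = []
    for w in weights_list:
        best = min(candidates,
                   key=lambda x: sum(wi * fi for wi, fi in zip(w, objectives(x))))
        front.append(objectives(best))
    return front

# Illustrative bi-objective problem: f1 = x^2, f2 = (x - 2)^2.
f = lambda x: (x ** 2, (x - 2) ** 2)
candidates = [i / 100 for i in range(-100, 300)]
weights = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
front = weighted_sum_front(f, weights, candidates)
```

Three weight vectors yield three front points; covering the front densely would require many more runs, which is precisely the cost a population-based algorithm avoids.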
