Multi-Objective Optimization Based on Brain Storm Optimization Algorithm

Yuhui Shi, Jingqian Xue, Yali Wu
Copyright: © 2013 |Pages: 21
DOI: 10.4018/ijsir.2013070101

Abstract

In recent years, many evolutionary and population-based algorithms have been developed for solving multi-objective optimization problems. In this paper, the authors propose a new multi-objective brain storm optimization algorithm in which the clustering strategy is applied in the objective space, rather than in the solution space as in the original brain storm optimization algorithm for single-objective optimization. Two versions of the algorithm, each with a different diverging operation, were tested to validate the usefulness and effectiveness of the proposed approach. Experimental results show that the proposed multi-objective brain storm optimization algorithm is very promising, at least on the tested multi-objective optimization problems.
Article Preview

1. Introduction

Generally speaking, a real-world problem usually has many objectives that often conflict with each other. For example, in a demand-side management system for a smart grid, one objective is to increase the sustainability of the grid, while another is to reduce overall operational cost and carbon emissions (Logenthiran, Srinivasan, & Shun, 2012). Such real-world problems are therefore better represented as multi-objective optimization problems (MOPs) than as single-objective optimization problems, and how to solve them effectively and efficiently has become a popular research topic. A straightforward and natural approach is to weight all objectives and sum them into a single objective, since single-objective optimization has been relatively well and extensively studied; the weights can be fixed or dynamically changed (Parsopoulos & Vrahatis, 2002; Jin, Okabe, & Sendhoff, 2001). A second approach is to treat each objective as a single-objective optimization problem, solving the objectives in turn according to their order of importance (Hu & Eberhart, 2002) or sharing information among the single-objective solvers from iteration to iteration (Parsopoulos, Tasoulis, & Vrahatis, 2004), in the hope that they eventually reach a good enough solution to the multi-objective problem. Both of these approaches produce a single solution to the multi-objective optimization problem being solved.
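The weighted-sum approach can be sketched as follows. This is a minimal illustration, not code from the paper: the two quadratic objectives, the equal weights, and the crude random search standing in for a single-objective optimizer are all hypothetical choices made here for demonstration.

```python
import numpy as np

# Two hypothetical conflicting objectives: f1 is minimized at x = 0,
# f2 is minimized at x = (2, 2), so no single x minimizes both.
def f1(x):
    return np.sum(x ** 2)

def f2(x):
    return np.sum((x - 2.0) ** 2)

def weighted_sum(x, w1, w2):
    """Scalarize the two objectives into one; the weights encode a trade-off."""
    return w1 * f1(x) + w2 * f2(x)

# A crude random search stands in for any single-objective optimizer.
rng = np.random.default_rng(0)
best_x, best_val = None, float("inf")
for _ in range(2000):
    x = rng.uniform(-5.0, 5.0, size=2)
    val = weighted_sum(x, w1=0.5, w2=0.5)
    if val < best_val:
        best_x, best_val = x, val
```

With equal weights, the scalarized optimum lies at x = (1, 1), midway between the two objectives' individual optima; changing the weights shifts the returned compromise, which is why a single weighted-sum run yields only one point of the trade-off surface.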

In reality, due to the characteristics of a multi-objective optimization problem, its optimum is usually not unique but consists of a set of candidate solutions, among which no single solution is better than the others with regard to all objectives. This set of candidate solutions is called the Pareto-optimal set of the multi-objective optimization problem. As a consequence, Pareto-based optimization algorithms are sought, studied, and preferred for solving multi-objective optimization problems (Coello, 2006). One goal of Pareto-based optimization algorithms is to find an evenly distributed set of Pareto-optimal solutions.
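The dominance relation underlying the Pareto-optimal set can be sketched as follows. This is a minimal illustration assuming minimization of all objectives; the function names are ours, not the paper's.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Return the nondominated subset of a set of objective vectors."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        dominated = any(dominates(q, p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(p)
    return np.array(keep)
```

For example, among the bi-objective points (1,5), (2,2), (5,1), (3,3), (4,4), the first three are mutually nondominated while (3,3) and (4,4) are dominated by (2,2), so the nondominated set has three members, none better than another in both objectives at once.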

Over the last decades, a number of population-based methods, especially the so-called evolutionary algorithms and swarm intelligence algorithms, have been successfully used to solve multi-objective optimization problems. Examples include the multiple objective genetic algorithm (MOGA) (Fonseca & Fleming, 1993), the niched Pareto genetic algorithm (NPGA) (Horn & Nafpliotis, 1993), the nondominated sorting genetic algorithm (NSGA, NSGA-II) (Srinivas & Deb, 1994; Deb, Pratap, Agarwal, & Meyarivan, 2002), the strength Pareto evolutionary algorithm (SPEA, SPEA-II) (Zitzler & Thiele, 1999; Zitzler, Laumanns, & Thiele, 2001), and multi-objective particle swarm optimization (MOPSO) (Ashwin, Kadkol, & Yen, 2012), to name just a few. Most of these algorithms improve the convergence and distribution of the Pareto front to some extent.

In any swarm intelligence algorithm, individuals represent only simple objects. These individuals, such as birds in particle swarm optimization (PSO) (Kennedy, Eberhart, & Shi, 2001), ants in ant colony optimization (ACO) (Dorigo, Maniezzo, & Colorni, 1996), and bacteria in bacterial foraging optimization (BFO) (Passino, 2010), cooperatively and collectively move toward better and better areas in the solution space.
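This cooperative movement can be illustrated with the canonical PSO velocity and position update. This is a minimal sketch on a hypothetical sphere objective; the parameter values for inertia and the acceleration coefficients are common textbook choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Simple test objective: minimized at the origin."""
    return np.sum(x ** 2, axis=-1)

n, dim = 20, 2
pos = rng.uniform(-5.0, 5.0, (n, dim))   # particle positions
vel = np.zeros((n, dim))                  # particle velocities
pbest = pos.copy()                        # each particle's best position so far
pbest_val = sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()  # best position found by the swarm

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (illustrative)
for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Each particle is pulled toward its own best and the swarm's best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

The personal-best and global-best attraction terms are what make the movement cooperative: information about good regions spreads through the swarm rather than each individual searching alone.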
