Multi-Objective Fish School Search


Carmelo J. A. Bastos-Filho, Augusto C. S. Guimarães
Copyright: © 2015 |Pages: 18
DOI: 10.4018/ijsir.2015010102

Abstract

The authors propose in this paper the first version of the Fish School Search (FSS) algorithm for multi-objective optimization. The proposal allows the optimization of problems with two or more conflicting objectives. The authors incorporated the dominance concept into the traditional FSS operators, creating a new approach called Multi-Objective Fish School Search (MOFSS). They also adapted the barycenter concept of the original FSS, replacing it with the set of solutions stored in an external archive that holds the non-dominated solutions found during the search process. From their results on the DTLZ set of benchmark functions, the authors observed that MOFSS achieved performance similar to that of well-known and well-established multi-objective swarm-based optimization algorithms. They detected some convergence problems in functions with a high number of local Pareto fronts; however, adaptive schemes can be used in future work to overcome this weakness.

1. Introduction

The primary goal of an optimization algorithm is to find the values of the input variables that achieve the optimal output values of the target optimization functions. These functions are commonly referred to as fitness functions or objectives (Kennedy & Eberhart, 2001). In fact, one can identify three distinct components in optimization problems: objective functions (outputs), decision variables (inputs) and constraints (Engelbrecht, 2007; Coello et al, 2007).

Real-world problems usually present multiple conflicting objectives to be optimized simultaneously. In these cases, there may exist more than one optimal solution. For example, a solution s1 can present a better value than another solution s2 for one objective function, while presenting a worse value for another objective. Such solutions are said to be incomparable.

Thus, the definition of optimal solution used in mono-objective optimization is replaced by the notion of dominance. According to Coello Coello et al (2007), the dominance concept can be defined as follows: one solution weakly dominates another if it presents a better fitness in at least one objective function and is not worse in any of the others; one solution strongly dominates another if it presents a better fitness in all the considered objective functions. The set of the best possible non-dominated solutions is called the Pareto Optimal set in the decision space, i.e. the space composed of the input variables. When this set is considered in the objective space, it is known as the Optimum Pareto Front (Branke et al, 2008; Collette and Siarry, 2003). The m-dimensional objective space is formed by the m objectives of the problem (Coello Coello et al, 2007).
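The dominance relations above can be expressed as simple predicates over objective vectors. The sketch below is illustrative (function names are ours, not from the paper) and assumes all objectives are to be minimized:

```python
def weakly_dominates(a, b):
    """a weakly dominates b: a is no worse than b in every objective
    and strictly better in at least one (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def strongly_dominates(a, b):
    """a strongly dominates b: a is strictly better than b in every objective."""
    return all(x < y for x, y in zip(a, b))

# Two incomparable solutions: each is better in one of the two objectives,
# so neither dominates the other.
s1, s2 = (1.0, 4.0), (3.0, 2.0)
assert not weakly_dominates(s1, s2) and not weakly_dominates(s2, s1)
```

Note that strong dominance implies weak dominance, but not the reverse: a solution that ties on one objective and improves on another weakly dominates without strongly dominating.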

In multi-objective problems (MOPs), two or more conflicting objectives must be tackled at the same time, aiming to find feasible non-dominated solutions (Branke et al, 2008). With conflicting objectives, an improvement in one objective may imply a degradation in at least one of the others. Because of this, the dominance concept must be deployed to allow the convergence of the algorithm. Algorithms designed to solve MOPs must pursue three goals: (i) find non-dominated solutions as close as possible to the Optimum Pareto Front of the problem to compose the Pareto Front; (ii) obtain extreme solutions for the Pareto Front with values as close as possible to those obtained when each objective is optimized independently; and (iii) achieve equally spaced solutions within the Pareto Front in the objective space in order to present the entire curve to the decision maker.
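An external archive of non-dominated solutions, such as the one the abstract describes for MOFSS, can be maintained by filtering dominated candidates as they arrive. This is a minimal illustrative sketch, not the paper's implementation; minimization is assumed and the names are ours:

```python
def dominates(a, b):
    """Pareto dominance for minimization: no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Try to insert a candidate into the archive of non-dominated solutions.
    Rejects the candidate if any member dominates it; otherwise drops the
    members the candidate dominates and appends it."""
    if any(dominates(m, candidate) for m in archive):
        return archive  # candidate is dominated: archive unchanged
    return [m for m in archive if not dominates(candidate, m)] + [candidate]

archive = []
for sol in [(3.0, 3.0), (1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]:
    archive = update_archive(archive, sol)
# (3.0, 3.0) is dominated by (2.0, 2.0) and removed;
# the remaining three solutions are mutually incomparable.
```

Practical archive implementations usually also bound the archive size and prune it for spread, which addresses goal (iii) above; that logic is omitted here.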

In most real-world problems, the number of feasible solutions can be large, and it is sometimes impossible to assess all the possibilities in order to find the best set of non-dominated solutions for a specific problem. In these cases, meta-heuristics can be deployed to tackle this tough task. Some of these meta-heuristics mimic natural processes and the relationships among simple individuals. Swarm intelligence algorithms are inspired by the behavior of swarms of insects and simple animals, such as ant colonies – Ant Colony Optimization (Dorigo et al, 2006), flocks of birds – Particle Swarm Optimization (Clerc, 2010), and bee colonies – Artificial Bee Colony Optimization (Karaboga and Basturk, 2007), among others. Swarm intelligence algorithms have been successfully deployed to solve optimization problems, and some of them have obtained interesting and promising results, mainly due to their exploitation abilities for fine-tuning solutions.
