Comparison of Two Random Weight Generators for Multi-Objective Optimization

Victor M. Carrillo, German Almanza
DOI: 10.4018/978-1-4666-9779-9.ch011

Abstract

There exist two general approaches to solving multiple-objective problems. The first comprises the classical mathematical methods; the weighted sum method, goal programming, and utility function methods belong to this approach, and their output is a single optimal solution. The second comprises heuristic methods, such as multi-objective evolutionary algorithms, which offer the decision maker a set of optimal solutions usually called non-dominated or Pareto-optimal solutions. This set is usually very large, and the decision maker faces the problem of reducing it to a manageable number of solutions to analyze. In this chapter the second approach is used to reduce the Pareto front by means of two weight generators for the non-numerical ranking preferences method, and their performance is compared.
Chapter Preview

1. Introduction

Most optimization problems in the real world are multi-objective, and their objective functions are in conflict with each other. Multi-objective optimization problems appear in economics, biology, engineering, the military, and many other fields. For example, a high-performance electronic product is a high-cost item, yet every customer looks for a device with high performance at the lowest price. The development of methods for finding a solution that satisfies several optimization goals has therefore been the subject of numerous research and guidance documents (Branke, Kalyanmoy, Miettinen & Slowinsky, 2008). Currently, two main approaches are known for solving multi-objective optimization problems. The first is that of the classical mathematical methods; some examples are the weighted sum, weighted metric, rotated weighted metric, constraint method, goal programming, the lexicographic method, and utility functions, among others. The second approach involves populating a number of feasible solutions along the Pareto frontier, and the final result is the set of Pareto-optimal solutions, that is, the set of all optimal solutions of the multi-objective optimization problem. Some heuristic methods of this kind are Tabu search, neural networks, particle swarm, ant colony, and genetic algorithms, among others.

The optimization procedure terminates once the Pareto front is obtained, but if the Pareto-optimal set is very large, the decision maker will not be able to comfortably select a manageable subset of the optimal solutions. The problem of obtaining a small subset of the Pareto-optimal front led to the development of methods for reducing the size of Pareto fronts. Most evolutionary methods produce as a solution a set of Pareto-optimal solutions based on the Pareto dominance concept. The idea of the Pareto front is to compare all solutions against each other: the better-fitted solutions dominate the weaker ones, which in turn are said to be dominated (Zitzler & Thiele, 1999). The set of non-dominated solutions is known as the Pareto front, as sketched in the example below.
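As an illustration of the dominance relation just described (a minimal Python sketch, not the chapter's implementation; the function and variable names are ours), the following code filters a set of objective vectors down to its non-dominated subset for a minimization problem.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """Return True if objective vector `a` Pareto-dominates `b` (minimization):
    `a` is no worse than `b` in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the points that are not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: three candidate solutions evaluated on two objectives.
candidates = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.5)]
print(pareto_front(candidates))   # (3.0, 3.5) is dominated by (2.0, 3.0)
```

The pairwise comparison is quadratic in the number of solutions, which is one reason a very large Pareto front becomes unwieldy for the decision maker.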

Without loss of generality any multi-objective optimization problem can be presented as a minimization problem as shown in Equation 1

$$\min_{x}\; F(x) = \big(f_1(x), f_2(x), \ldots, f_n(x)\big) \quad \text{subject to} \quad g_j(x) \le 0, \quad j = 1, \ldots, m \tag{1}$$

where $n$ is the number of objective functions $f_i : \mathbb{R}^k \to \mathbb{R}$, $m$ is the number of inequality constraints $g_j(x) \le 0$, $x = (x_1, \ldots, x_k)$ is a vector of design variables, and $F(x) = \big(f_1(x), \ldots, f_n(x)\big)$ is a vector of $n$ objective functions.

The feasible or decision space is defined as $X = \{x \in \mathbb{R}^k : g_j(x) \le 0,\; j = 1, \ldots, m\}$. The objective space is defined as the image of the feasible decision space $X$ under $F$, i.e. $Z = F(X) = \{F(x) : x \in X\}$. Obtaining a single solution that optimizes all objective functions simultaneously is difficult, and such a solution usually does not exist.
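The chapter's contribution concerns two random weight generators for the non-numerical ranking preferences method; those specific generators are not described in this preview. As a hedged illustration only, the sketch below scalarizes a problem of the form of Equation 1 with the classical weighted sum mentioned in the introduction, drawing random weights by normalizing independent uniform samples; the function names and the two example objectives are assumptions of ours.

```python
import random
from typing import Callable, List, Sequence

def random_weights(n: int) -> List[float]:
    """Draw one weight vector w with w_i >= 0 and sum(w) == 1 by
    normalizing independent uniform samples (an illustrative generator)."""
    raw = [random.random() for _ in range(n)]
    total = sum(raw)
    return [r / total for r in raw]

def weighted_sum(objectives: Sequence[Callable[[Sequence[float]], float]],
                 weights: Sequence[float],
                 x: Sequence[float]) -> float:
    """Scalarize F(x) = (f_1(x), ..., f_n(x)) into sum_i w_i * f_i(x)."""
    return sum(w * f(x) for w, f in zip(weights, objectives))

# Hypothetical bi-objective problem (not the chapter's example).
f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

w = random_weights(2)
print(w, weighted_sum((f1, f2), w, (0.5, 0.5)))
```

Each weight vector drawn this way defines one scalar subproblem; under convexity assumptions its minimizer is one point of the Pareto front, so repeating the draw traces out an approximation of the front whose coverage depends on how the weights are generated.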

For example, consider the bi-objective optimization problem shown in Equation 2:

[Equation 2 appears as an image in the original chapter and is not reproduced here.] (2)

The image of the decision space under the objective functions is shown in Figure 1. From this perspective it is hard, if not impossible, to decide which value or values of the problem are optimal. The yellow points in the purple region represent the optimal solutions, or non-dominated points, in the objective space, and the yellow points in the blue region are the corresponding arguments in the decision space.
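Since the specific objective functions of Equation 2 are not available in this extracted text, the sketch below uses a hypothetical pair, f1(x) = x^2 and f2(x) = (x - 2)^2, only to mimic what Figure 1 illustrates: sampling the decision space, mapping the samples into objective space, and marking which sampled points are non-dominated.

```python
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a Pareto-dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical objectives, used only to mimic Figure 1 (not Equation 2).
def f1(x: float) -> float:
    return x ** 2

def f2(x: float) -> float:
    return (x - 2.0) ** 2

xs = [i * 0.1 for i in range(-20, 41)]           # sampled decision space
points = [(f1(x), f2(x)) for x in xs]            # its image in objective space
front_args = [x for x, p in zip(xs, points)
              if not any(dominates(q, p) for q in points if q != p)]
print(f"non-dominated arguments span roughly [{min(front_args):.1f}, {max(front_args):.1f}]")
```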
