Incorporation of Preferences in an Evolutionary Algorithm Using an Outranking Relation: The EvABOR Approach

Eunice Oliveira (Polytechnic Institute of Leiria, Portugal & R&D Unit INESC Coimbra, Portugal), Carlos Henggeler Antunes (University of Coimbra, Portugal & R&D Unit INESC Coimbra, Portugal) and Álvaro Gomes (University of Coimbra, Portugal & R&D Unit INESC Coimbra, Portugal)
Copyright: © 2014 |Pages: 24
DOI: 10.4018/978-1-4666-4253-9.ch004

Abstract

The incorporation of preferences into Evolutionary Algorithms (EA) presents some relevant advantages, namely in dealing with complex real-world problems. It enables focusing the search, thus avoiding the computation of solutions that are irrelevant from the point of view of the practical exploitation of results (minimizing the computational effort), and it facilitates the integration of the DM's expertise into the solution search process (minimizing the cognitive effort). These issues are particularly important whenever the number of conflicting objective functions and/or the number of non-dominated solutions in the population is large. In EvABOR (Evolutionary Algorithm Based on an Outranking Relation) approaches, preferences are elicited from a decision maker (DM) with the aim of guiding the evolutionary process to the regions of the search space more in accordance with the DM's preferences. The preferences are captured and made operational by using the technical parameters of the ELECTRE TRI method. This approach is presented and analyzed using some illustrative results from a case study of electrical networks.

2. Incorporation of Preference Information in EAs

As in MOO mathematical programming algorithms, the incorporation of preferences into an EA can be done using one of the three main approaches classified in Horn (1997): a priori, a posteriori, and progressive (interactive) articulation of preferences.

In the a priori approach (Fonseca & Fleming, 1993; Deb, 1999), the preferences are elicited from the DM before the EA starts. A value (or utility) function is usually employed to transform the MOO problem into a scalar optimization problem, in which the single objective function embodies the preference expression parameters. A disadvantage usually pointed out regarding this approach is that all the preference information must be elicited from the DM without knowledge of the possible alternatives, which is particularly difficult in complex MOO mathematical models. Other drawbacks of this "scalarization" process are the non-commensurability of objectives (which often cannot be adequately taken into account) and the existence of good compromise solutions in non-convex regions (which may not be effectively searched).
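The a priori scalarization described above can be sketched as follows. This is a minimal illustration, not the chapter's method: the bi-objective problem, the weight vector standing in for the elicited value function, and the simple elitist EA loop are all illustrative assumptions.

```python
import random

# Hypothetical bi-objective problem: minimize f1(x) = x^2 and f2(x) = (x - 2)^2.
def f1(x):
    return x ** 2

def f2(x):
    return (x - 2) ** 2

# A priori articulation: the weights are fixed before the EA starts,
# standing in for a value function elicited from the DM.
# The values (0.7, 0.3) are an arbitrary illustrative choice.
WEIGHTS = (0.7, 0.3)

def scalar_fitness(x, weights=WEIGHTS):
    """Weighted-sum value function collapsing both objectives into one scalar."""
    return weights[0] * f1(x) + weights[1] * f2(x)

def evolve(pop_size=20, generations=50, seed=0):
    """Minimal elitist (mu + lambda)-style EA minimizing the scalarized objective."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Gaussian mutation produces one offspring per parent.
        offspring = [x + rng.gauss(0.0, 0.3) for x in pop]
        # Elitist survival: keep the best pop_size individuals.
        pop = sorted(pop + offspring, key=scalar_fitness)[:pop_size]
    return pop[0]

best = evolve()
```

For this convex problem the weighted sum has a unique minimizer (x = 0.6 for these weights), so the EA converges to a single preferred solution rather than a Pareto front. This is exactly the trade-off noted above: the search is cheap and focused, but on non-convex Pareto fronts a weighted sum cannot reach some compromise solutions no matter how the weights are chosen.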
