Introduction
Optimization is the process of finding and comparing feasible solutions, improving them iteratively until no better solution can be found. This procedure minimizes or maximizes objective functions, or simply objectives, subject to a set of constraints. When the process involves a single objective, it seeks one solution of the mono-objective optimization problem. When an optimization problem involves more than one conflicting objective, the task of finding the solutions that represent a compromise among the objectives is known as multiobjective optimization. Many research efforts and real-world applications naturally involve multiple objectives (Fonseca & Fleming, 1995).
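The compromise relation among conflicting objectives is formalized by Pareto dominance. The following is a minimal sketch for minimization problems; the function name and the list-of-floats solution representation are illustrative choices, not notation from this paper.

```python
def dominates(a, b):
    """Return True if solution a Pareto-dominates solution b (minimization).

    a dominates b when a is no worse than b in every objective and
    strictly better in at least one.
    """
    no_worse = all(ai <= bi for ai, bi in zip(a, b))
    strictly_better = any(ai < bi for ai, bi in zip(a, b))
    return no_worse and strictly_better

# Two conflicting objectives: neither point dominates the other here,
# so the pair is mutually nondominated (a "compromise" pair).
print(dominates([1.0, 2.0], [2.0, 3.0]))  # True: better in both objectives
print(dominates([1.0, 3.0], [2.0, 2.0]))  # False: each is better in one
```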
Evolutionary multiobjective optimization algorithms are important computational tools for solving problems with multiple conflicting objective functions. This is partly because evolutionary algorithms determine, in a single run, a set of solutions that estimates the Pareto optimal set of the problem. Moreover, evolutionary multiobjective optimization techniques can also handle objective functions with which deterministic methods have difficulty, such as those presenting discontinuity, non-convexity, multimodality, noise, and so forth. Most deterministic algorithms for multiobjective optimization handle only well-behaved functions and find only one point per execution, which may still not belong to the Pareto optimal set. For these reasons, multiobjective evolutionary methods are more appropriate for the solution of this kind of problem. Some references on evolutionary multiobjective optimization algorithms are the works of Zitzler (1999), Deb (2001), Zitzler, Laumanns, and Bleuler (2004), and Coello, Lamont, and Van Veldhuizen (2007).
In a typical evolutionary multiobjective algorithm, such as NSGA-II (Deb, Pratap, Agarwal, & Meyarivan, 2002), a set of parent solutions is created at random and classified (sorted) by dominance relation, followed by a niche computation that is used as a selection criterion when necessary. In the evolutionary loop, selection, crossover, and mutation are applied to the parent solutions to generate a new set of child solutions. The union of the parent and child sets is classified according to dominance level in the Pareto front and niche values, and the best solutions are chosen to form the parent population of the next generation. The evolutionary loop repeats until a stop condition is attained. In the end, the last set of parent solutions is expected to have converged as near as possible to the Pareto optimal set (Deb, 2001).
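The generational loop above (random parents, variation, combined classification, truncation) can be sketched compactly. The toy bi-objective problem, variation operators, and parameter values below are illustrative choices, not from the paper, and the niche-based tie-break is reduced to a random choice to keep the sketch short.

```python
import random

def evaluate(x):
    # Toy bi-objective minimization problem: f1 = x^2, f2 = (x - 2)^2.
    return (x * x, (x - 2.0) ** 2)

def dominates(fa, fb):
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def sort_by_fronts(pop):
    """Split a population into dominance levels by repeated front extraction."""
    fronts, remaining = [], list(pop)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(evaluate(q), evaluate(p)) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def evolve(pop_size=20, generations=30):
    random.seed(1)
    parents = [random.uniform(-5.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Variation: blend crossover plus Gaussian mutation (illustrative).
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append(0.5 * (a + b) + random.gauss(0.0, 0.1))
        # Classify the parent+child union and keep the best dominance levels.
        survivors = []
        for front in sort_by_fronts(parents + children):
            if len(survivors) + len(front) <= pop_size:
                survivors.extend(front)
            else:
                random.shuffle(front)  # stand-in for a niche-based choice
                survivors.extend(front[:pop_size - len(survivors)])
                break
        parents = survivors
    return parents

final = evolve()
# For this toy problem the Pareto optimal set is the interval [0, 2].
print(sorted(round(x, 2) for x in final))
```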
The classification of solutions according to their dominance level, known as the nondominated sorting procedure, is an important tool in an elitist multiobjective evolutionary algorithm. The classification value can be used as a fitness measure. For example, solutions that belong to the first level of the Pareto front are assigned the value one, those in the second level receive the value two, and so on. In a selection process, solutions with smaller levels are preferred because they better drive the evolution toward the Pareto optimal set. If two competing solutions are in the same dominance level, the niche value can be used to break the tie. If the tie persists, a random selection can be made, in which both solutions have the same probability of being chosen (Deb, 2001).
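The selection rule just described can be sketched as a binary tournament: prefer the lower dominance level, break ties by niche value, and fall back to a coin flip. The dictionary fields and the convention that a larger niche value (e.g. crowding distance) is better are illustrative assumptions.

```python
import random

def binary_tournament(a, b, rng=random):
    """Pick the winner between two solutions.

    Each solution is a dict with 'rank' (dominance level, 1 is best) and
    'niche' (e.g. crowding distance, larger is better) -- illustrative fields.
    """
    if a["rank"] != b["rank"]:
        return a if a["rank"] < b["rank"] else b
    if a["niche"] != b["niche"]:
        return a if a["niche"] > b["niche"] else b
    return rng.choice([a, b])  # equal probability when the tie persists

s1 = {"rank": 1, "niche": 0.4}
s2 = {"rank": 2, "niche": 0.9}
print(binary_tournament(s1, s2)["rank"])  # 1: the lower dominance level wins
```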
There are many nondominated sorting algorithms for classifying solutions according to their dominance level. The basic algorithm for nondominated classification presented by Deb et al. (2002) is very simple, but it always compares each pair of solutions twice. In a basic implementation, its computational complexity is O(MN^2) in any case, where M is the number of objectives and N is the number of solutions. In this paper, it is concluded that the original version is not recommended in any situation. However, an improved version of this algorithm is presented that compares each pair of solutions only once. Although the time complexity remains O(MN^2) in the worst case, the best case is O(N^2), which happens when the domination relation test always requires only two objective comparisons.
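The pairwise bookkeeping behind such a sorting procedure can be sketched as follows. Each pair is examined once, and the scan over objectives stops as soon as the dominance relation is decided, which in the best case takes only two objective comparisons per pair. The code is an illustrative reading of that improvement, not the paper's own implementation.

```python
def compare(fa, fb):
    """Return 1 if fa dominates fb, -1 if fb dominates fa, 0 otherwise.

    The scan exits early: once each solution is strictly better in some
    objective, neither can dominate, so the remaining objectives are skipped.
    """
    a_better = b_better = False
    for x, y in zip(fa, fb):
        if x < y:
            a_better = True
        elif y < x:
            b_better = True
        if a_better and b_better:
            return 0  # incomparable: decided after as few as two comparisons
    if a_better:
        return 1
    if b_better:
        return -1
    return 0

def nondominated_sort(objs):
    """Assign a dominance level to each solution, comparing each pair once."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    count = [0] * n                        # how many solutions dominate i
    for i in range(n):
        for j in range(i + 1, n):          # each pair (i, j) visited once
            r = compare(objs[i], objs[j])
            if r == 1:
                dominated_by[i].append(j)
                count[j] += 1
            elif r == -1:
                dominated_by[j].append(i)
                count[i] += 1
    # Peel off fronts: level 1 has no dominators, level 2 is freed next, etc.
    level, rank = 1, [0] * n
    front = [i for i in range(n) if count[i] == 0]
    while front:
        next_front = []
        for i in front:
            rank[i] = level
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    next_front.append(j)
        level += 1
        front = next_front
    return rank

points = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(nondominated_sort(points))  # [1, 1, 1, 2, 3]
```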