Nature Inspired Methods for Multi-Objective Optimization


Sanjoy Das (Kansas State University, USA), Bijaya K. Panigrahi (Indian Institute of Technology, India) and Shyam S. Pattnaik (National Institute of Technical Teachers’ Training & Research, India)
DOI: 10.4018/978-1-60566-766-9.ch004


This chapter introduces the concepts of dominance and Pareto-optimality. It then addresses key issues in applying three basic classes of nature inspired algorithms – evolutionary algorithms, particle swarm optimization, and artificial immune systems – to multi-objective optimization problems. As case studies, the most significant multi-objective algorithm from each class is described in detail. Two of these, NSGA-II and MOPSO, are widely used in engineering optimization, while the third also shows excellent performance. As hybrid algorithms are becoming increasingly popular in optimization, the chapter concludes with a brief discussion of hybridization within a multi-objective framework.
Chapter Preview

Basic Concepts Of Multi-Objective Optimization

Let the search space of the multi-objective problem (i.e. the space of all solutions) be denoted as Σ ⊆ ℝ^N, where N is the dimension of each solution vector. Without loss of generality, it shall be assumed that the multi-objective optimization problem involves the simultaneous minimization of M objectives, f1, f2, …, fM. The objective function space Ω ⊆ ℝ^M is defined as the set Ω = {(f1(x), f2(x), …, fM(x)) : x ∈ Σ}. It should be noted that the dimensionality of the objective function space, M, is typically much lower than that of the search space, N. The relationship between the search space and the objective function space is illustrated in Figure 1 below, for a simple case where N = 3 and M = 2.

Figure 1.

Relationship between the search space (Σ) and the objective function space (Ω) showing a finite set of dominated and non-dominated solutions.
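The dominance relation that underlies the figure can be made concrete in a few lines of code. The sketch below, assuming all M objectives are to be minimized, checks whether one objective vector dominates another and filters a finite set of points down to its non-dominated subset; the function names `dominates` and `non_dominated` are illustrative, not taken from the chapter.

```python
def dominates(u, v):
    """True if objective vector u dominates v (all objectives minimized):
    u is no worse than v in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(points):
    """Return the non-dominated subset of a finite set of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example with M = 2 objectives:
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(non_dominated(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0) and is dropped
```

Note that no single point need dominate all others: here (1.0, 4.0), (2.0, 2.0), and (3.0, 1.0) are mutually non-dominated, which is exactly why multi-objective algorithms maintain a whole set of trade-off solutions rather than a single optimum.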


Key Terms in this Chapter

Population Based Algorithm: An algorithm that maintains an entire set of candidate solutions, each solution corresponding to a unique point in the search space of the problem.

Exploitation: A greedy strategy that seeks to improve one or more solutions to an optimization problem, moving them toward their nearest optima.

Elitism: A strategy in evolutionary algorithms where the best one or more solutions, called the elites, in each generation, are inserted into the next, without undergoing any change. This strategy usually speeds up the convergence of the algorithm. In a multi-objective framework, any non-dominated solution can be considered to be an elite.

Objective Function: The function that is to be optimized. In multi-objective optimization, several such functions need to be optimized simultaneously.

Local Search: A search algorithm that carries out exploitation. Pure local search algorithms are prone to getting trapped in local optima instead of converging to good solutions.

Stochastic Algorithm: A non-deterministic algorithm that relies on probabilistic operations. Algorithms that borrow from natural metaphors are almost always stochastic.

Fitness Landscape: A representation of the search space of an optimization problem that brings out the differences in the fitness of the solutions, such that those with good fitness are “higher”. Optimal solutions are the maxima of the fitness landscape.

Fitness: A measure that is used to determine the goodness of a solution for an optimization problem. In minimization problems, it is inversely related to the objective function.

Exploration: A strategy that samples the fitness landscape extensively to locate promising regions.

Generation: A term used in nature inspired algorithms that roughly corresponds to each iteration of the outermost loop, where the existing population is replaced with a new one.
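Several of the terms above (population based algorithm, elitism, generation, exploitation) describe parts of one iterative loop. The sketch below is an illustrative toy, not one of the chapter's algorithms: a single-objective minimization with a population, where `random_neighbor` stands in for a variation operator and `f` for the objective function; both names are placeholders. Because the elites pass into the next generation unchanged, the best fitness in the population can never worsen from one generation to the next.

```python
import random

random.seed(0)

def f(x):                      # a simple convex objective to minimize
    return (x - 1.0) ** 2

def random_neighbor(x):        # variation operator: small random perturbation
    return x + random.uniform(-0.5, 0.5)

def next_generation(pop, n_elites=2):
    """One generation: keep the n_elites best solutions unchanged (elitism),
    and fill the rest of the population with perturbed copies of good solutions."""
    ranked = sorted(pop, key=f)
    elites = ranked[:n_elites]                  # elites pass through unchanged
    offspring = [random_neighbor(x) for x in ranked[:len(pop) - n_elites]]
    return elites + offspring

pop = [random.uniform(-5.0, 5.0) for _ in range(10)]
for _ in range(50):            # each pass of this loop is one generation
    pop = next_generation(pop)
print(min(f(x) for x in pop))  # best fitness never worsens, thanks to the elites
```

In the multi-objective setting, as noted under Elitism above, there is no single “best” solution, so any non-dominated member of the population can serve as an elite.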
