Differential Evolution with Self-Adaptation

Janez Brest (University of Maribor, Slovenia)
Copyright: © 2009 | Pages: 6
DOI: 10.4018/978-1-59904-849-9.ch074

Many practical engineering applications can be formulated as global optimization problems in which the objective function has many local minima and its derivatives are unavailable. Differential Evolution (DE) is a floating-point encoding evolutionary algorithm for global optimization over continuous spaces (Storn & Price, 1997; Liu & Lampinen, 2005; Price, Storn & Lampinen, 2005; Feoktistov, 2006). Nowadays it is used as a powerful global optimization method within a wide range of research areas. Recent research indicates that self-adaptive DE algorithms perform considerably better than the original DE algorithm. The necessity of changing control parameters during the optimization process is also confirmed by the experiments in (Brest, Greiner, Bošković, Mernik & Žumer, 2006a). DE with self-adaptive control parameters has already been presented in (Brest et al., 2006a). This chapter presents self-adaptive approaches that were recently proposed for control parameters in the DE algorithm.
Chapter Preview


Differential Evolution

DE creates new candidate solutions by combining the parent individual with several other individuals of the same population. A candidate replaces the parent only if it has a better fitness value.

The population of the original DE algorithm (Storn & Price, 1995; Storn & Price, 1997) contains NP D-dimensional vectors: xi,G, i = 1, 2, …, NP, where G denotes the generation. The initial population is usually chosen uniformly at random between the lower and upper bounds, which are specified by the user according to the nature of the problem. After initialization, DE performs several vector operations: mutation, crossover, and selection.
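The initialization step described above can be sketched in Python as follows (function and parameter names are illustrative, not from the chapter):

```python
import numpy as np

def initialize_population(pop_size, dim, lower, upper, seed=0):
    """Draw pop_size D-dimensional vectors uniformly at random within
    the user-specified per-dimension bounds [lower, upper]."""
    rng = np.random.default_rng(seed)
    return lower + rng.random((pop_size, dim)) * (upper - lower)

# Example: NP = 10 vectors in a 3-dimensional box [-5, 5]^3.
lower = np.full(3, -5.0)
upper = np.full(3, 5.0)
pop = initialize_population(10, 3, lower, upper)
```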

The mutant vector vi,G can be created by using one of the mutation strategies (Price et al., 2005). The most widely used strategy is ‘rand/1’: vi,G = xr1,G + F × (xr2,G − xr3,G), where F is the mutation scale factor within the range [0, 2], usually less than 1. The indexes r1, r2, r3 are random, mutually distinct integers within the range [1, NP], all different from the index i.
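A minimal sketch of the ‘rand/1’ mutation in Python (names are illustrative):

```python
import numpy as np

def mutate_rand_1(pop, i, F, rng):
    """'rand/1' mutation: v_i = x_r1 + F * (x_r2 - x_r3), where r1, r2, r3
    are distinct population indices, all different from i."""
    candidates = [r for r in range(len(pop)) if r != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (8, 4))   # NP = 8 vectors, D = 4
v = mutate_rand_1(pop, 0, F=0.5, rng=rng)
```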

After mutation, a ‘binomial’ crossover operation forms the trial vector ui,G from the ith population vector xi,G and its corresponding mutant vector vi,G: ui,j,G = vi,j,G if (rand ≤ CR or j = jrand), otherwise ui,j,G = xi,j,G, where i = 1, 2, …, NP and j = 1, 2, …, D. CR is the crossover parameter within the range [0, 1] and represents the probability of taking a parameter of the trial vector from the mutant vector. The uniform random value rand is drawn from [0, 1] anew for each j. The index jrand ∈ [1, D] is chosen at random and is responsible for the trial vector containing at least one parameter from the mutant vector.
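The crossover rule above can be sketched in Python as follows (names are illustrative):

```python
import numpy as np

def binomial_crossover(x, v, CR, rng):
    """Build the trial vector u: u[j] = v[j] when rand <= CR or j == j_rand,
    otherwise u[j] = x[j]. The forced index j_rand guarantees that at least
    one component comes from the mutant vector."""
    D = len(x)
    j_rand = rng.integers(D)
    take_mutant = rng.random(D) <= CR
    take_mutant[j_rand] = True
    return np.where(take_mutant, v, x)

rng = np.random.default_rng(0)
x = np.zeros(6)   # parent vector
v = np.ones(6)    # mutant vector
u = binomial_crossover(x, v, CR=0.5, rng=rng)
```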

The selection operation compares the objective (fitness) value of the population vector xi,G with that of its corresponding trial vector ui,G and decides which of the two will survive as a member of the next generation.
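Putting the three operations together, one DE generation can be sketched as follows (a minimal illustration of the scheme described above; function names and parameter defaults are assumptions, not from the chapter):

```python
import numpy as np

def de_generation(pop, f, F=0.5, CR=0.9, rng=None):
    """One DE generation: 'rand/1' mutation, binomial crossover, and greedy
    selection for minimization (the trial replaces the parent if not worse)."""
    if rng is None:
        rng = np.random.default_rng(0)
    pop_size, D = pop.shape
    new_pop = pop.copy()
    for i in range(pop_size):
        idx = [r for r in range(pop_size) if r != i]
        r1, r2, r3 = rng.choice(idx, size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])   # mutation ('rand/1')
        j_rand = rng.integers(D)
        cross = rng.random(D) <= CR
        cross[j_rand] = True
        u = np.where(cross, v, pop[i])          # binomial crossover
        if f(u) <= f(pop[i]):                   # greedy selection
            new_pop[i] = u
    return new_pop

sphere = lambda x: float(np.sum(x ** 2))        # simple test objective
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, (20, 4))
best_before = min(sphere(x) for x in pop)
for _ in range(30):
    pop = de_generation(pop, sphere, rng=rng)
best_after = min(sphere(x) for x in pop)
```

Because selection is greedy per individual, the best fitness in the population can never get worse from one generation to the next.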

The original DE offers several strategies, and Feoktistov (2006) proposed some general extensions to them. The open question is which strategy is the most suitable for solving a particular problem. Recently, some researchers have used various combinations of two, three, or even more strategies during the evolutionary process.

Key Terms in this Chapter

Area of the Search Space: A set of specific ranges or values of the input variables that constitutes a subset of the search space.

Control Parameter: A parameter that determines the behaviour of an evolutionary program (e.g. the population size).

Search Space: The set of all candidate solutions of the optimization problem that we want to solve.

Individual: An individual represents a candidate solution. During the optimization process an evolutionary algorithm usually uses a population of individuals to solve a particular problem.

Differential Evolution: An evolutionary algorithm for global optimization that realizes the evolution of a population of individuals in a manner that exploits the differences between individuals.

Evolutionary Computation: Solution approach guided by biological evolution, which begins with potential solution models, then iteratively applies algorithms to find the fittest models from the set to serve as inputs to the next iteration, ultimately leading to a model that best represents the data.

Self-Adaptation: The ability that allows an evolutionary algorithm to adapt itself to any general class of problems by reconfiguring itself accordingly, and to do so without any user interaction.
