
Arturo Berrones (Universidad Autónoma de Nuevo León, Mexico), Dexmont Peña (Universidad Autónoma de Nuevo León, Mexico) and Ricardo Sánchez (Universidad Autónoma de Nuevo León, Mexico)

DOI: 10.4018/978-1-59904-849-9.ch214

Chapter Preview

Stochastic strategies for optimization are essential to most of the heuristic techniques used to deal with complex, unstructured global optimization problems (Pardalos, 2004). The roots of such methods can be traced back to the Metropolis algorithm (Metropolis, Rosenbluth, Rosenbluth, Teller & Teller, 1953), introduced in the early days of scientific computing to simulate the evolution of a physical system toward thermal equilibrium. This process is the basis of the simulated annealing technique (Kirkpatrick, Gelatt & Vecchi, 1983), which exploits the convergence to a global minimum in configurational energy observed in physical systems at thermal equilibrium as the temperature goes to zero.
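The Metropolis acceptance rule combined with a decreasing temperature can be sketched in a few lines. The following is a minimal illustration only, not the method of this chapter; the objective function, step size, and geometric cooling schedule are all hypothetical choices.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=2000, step_size=0.1):
    """Minimize f by Metropolis moves under a decreasing temperature."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        # Propose a random neighbor of the current point.
        y = x + random.uniform(-step_size, step_size)
        fy = f(y)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-(fy - fx) / t).
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # geometric cooling schedule (illustrative choice)
    return best_x, best_f
```

As the temperature t shrinks, uphill moves become ever less likely and the chain settles into a low-energy configuration, mirroring the physical annealing process described above.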

The method presented in this contribution is rooted in physical principles similar to those on which simulated annealing type algorithms are based. However, in contrast with other approaches (Suykens, Verrelst & Vandewalle, 1998; Gidas, 1995; Parpas, Rustem & Pistikopoulos, 2006), the proposed method considers a density of points instead of Markov transitions of individual points. The technique is based on the interplay between the Langevin and Fokker-Planck frameworks for stochastic processes, which is well known in the study of out-of-equilibrium physical systems (Risken, 1984; Van Kampen, 1992). The Fokker-Planck equation has already been proposed for use in search algorithms in several contexts. For instance, it has been used to directly study the convergence of populations of points to global minima (Suykens, Verrelst & Vandewalle, 1998), as a tool to demonstrate the convergence of simulated annealing type algorithms (Parpas, Rustem & Pistikopoulos, 2006; Geman & Hwang, 1986), and as a theoretical framework for Boltzmann-type learning machines (Movellan & McClelland, 1993; Kosmatopoulos & Christodoulou, 1994). In the context of global optimization by populations of points, it has been proposed that the populations evolve under the time-dependent version of the Fokker-Planck equation, following a schedule for the reduction of the diffusion constant D (Suykens, Verrelst & Vandewalle, 1998).
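To make the population picture concrete, here is a minimal sketch, not the chapter's actual algorithm, of a population of points evolving under overdamped Langevin dynamics while the diffusion constant D is reduced on a schedule; the gradient function, time step, and decay rate are assumed for illustration.

```python
import math
import random

def langevin_population_search(grad, pop, d0=1.0, decay=0.99, steps=500, dt=0.01):
    """Evolve a population of points by the overdamped Langevin update
    x <- x - grad(x)*dt + sqrt(2*D*dt)*noise, shrinking the diffusion
    constant D so the density concentrates on low-energy regions."""
    d = d0
    for _ in range(steps):
        pop = [x - grad(x) * dt + math.sqrt(2 * d * dt) * random.gauss(0.0, 1.0)
               for x in pop]
        d *= decay  # schedule for reducing the diffusion constant
    return pop
```

For fixed D, the stationary density of this process is the Boltzmann-type distribution associated with the potential; lowering D plays the role that lowering the temperature plays in simulated annealing, but acting on the whole density of points at once.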

Gibbs Sampler: A procedure to sample the marginal densities from a high-dimensional distribution using one-dimensional conditional probabilities.
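As a small illustration of this definition, the sketch below, assuming a standard bivariate normal target with correlation rho, alternately draws each coordinate from its one-dimensional conditional.

```python
import random

def gibbs_bivariate_normal(rho, n_samples=5000, burn_in=500):
    """Gibbs sampler for a standard bivariate normal with correlation rho:
    the conditional of x given y is N(rho*y, 1 - rho**2), and symmetrically
    for y given x."""
    x, y = 0.0, 0.0
    s = (1.0 - rho ** 2) ** 0.5  # conditional standard deviation
    samples = []
    for i in range(n_samples + burn_in):
        x = random.gauss(rho * y, s)  # draw from p(x | y)
        y = random.gauss(rho * x, s)  # draw from p(y | x)
        if i >= burn_in:
            samples.append((x, y))
    return samples
```

The retained pairs have marginals close to the target marginals even though only one-dimensional draws were ever performed.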

Random Deviate Generation Process: A process which generates random numbers according to a specific probability distribution.

Configurational Energy: The potential energy associated with the interactions among the elements of a physical system.

Search Space: The set of all feasible solutions of an optimization problem.

Learning Machine: This term refers to the development of techniques for the automatic extraction of patterns from massive data sets and for the construction of deductive rules. In the context of this article, the concept deals with the automatic learning of densities in global optimization problems.

Stochastic Search: An optimization algorithm that incorporates randomness in its exploration of the search space.

Thermal Equilibrium: State in which a physical system is described by probability measures that are time independent.

Diffusion Process: Random displacement of particles in a physical system due to thermal agitation.

Heuristic: Any algorithm that finds a good-quality solution to a problem in a reasonable run time.

Diffusion Constant: Measures the degree of randomness in a diffusion process. The diffusion constant is proportional to the mean square distance moved by particles under diffusion in a given time interval.
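The proportionality in this definition can be checked numerically. The following sketch, with an assumed time step and particle count, simulates free one-dimensional diffusion, where theory gives a mean square displacement of 2*D*t.

```python
import random

def mean_square_displacement(d, t, dt=0.01, n_particles=2000):
    """Estimate the mean square displacement of freely diffusing particles
    after time t; in one dimension theory gives <x**2> = 2*D*t."""
    sigma = (2.0 * d * dt) ** 0.5  # per-step displacement scale
    msd = 0.0
    steps = int(t / dt)
    for _ in range(n_particles):
        x = 0.0
        for _ in range(steps):
            x += random.gauss(0.0, sigma)  # independent diffusive kicks
        msd += x * x
    return msd / n_particles
```

Doubling D (or the elapsed time t) doubles the estimated mean square displacement, which is exactly the proportionality stated in the definition.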
