A New Multiagent Algorithm for Dynamic Continuous Optimization

Julien Lepagnot, Amir Nakib, Hamouche Oulhadj and Patrick Siarry (Université de Paris 12, France)
Copyright: © 2010 | Pages: 23
DOI: 10.4018/jamc.2010102602

Abstract

Many real-world problems are dynamic and require an optimization algorithm that is able to continuously track a changing optimum over time. In this paper, a new multiagent algorithm is proposed to solve dynamic problems. This algorithm is based on multiple trajectory searches and on saving the optima found, so that they can be reused when a change is detected in the environment. The proposed algorithm is analyzed using the Moving Peaks Benchmark, and its performance is compared to that of competing dynamic optimization algorithms on several instances of this benchmark. The obtained results show the efficiency of the proposed algorithm, even in multimodal environments.
Article Preview

1. Introduction

Recently, optimization in dynamic environments has attracted a growing interest, due to its practical relevance. Many real-world problems are dynamic optimization problems (DOPs), i.e. their objective function changes over time: typical examples are in vehicle routing (Larsen, 2000), inventory management (Minner, 2003) and scheduling (Branke & Mattfeld, 2005). For dynamic environments, the goal is not only to locate the optimum, but to follow it as closely as possible. A dynamic optimization problem can be expressed by:

(1)   min f(x, t)   subject to   h(x, t) = 0,   g(x, t) ≤ 0

where f(x, t) is the objective function of the problem, h(x, t) denotes the equality constraints and g(x, t) denotes the inequality constraints. Both of them may change over time, denoted by t. In this paper, we focus on dynamic optimization problems with time-constant constraints.
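The abstract evaluates the algorithm on the Moving Peaks Benchmark, in which the optimum moves because the landscape's peak parameters drift over time. The following is a minimal sketch of such a time-varying objective; the peak values used here are illustrative, not the benchmark's actual parameters:

```python
import math

def moving_peaks(x, peaks):
    """Moving Peaks-style landscape: fitness is the maximum over a set
    of cone-shaped peaks, each given as (height, width, centre)."""
    return max(h - w * math.dist(x, c) for (h, w, c) in peaks)

# Illustrative peak sets at two time steps: parameters drift, so the
# position and value of the global optimum change.
peaks_t0 = [(50.0, 1.0, (3.0, 3.0)), (40.0, 0.5, (-2.0, 1.0))]
peaks_t1 = [(48.0, 1.0, (3.5, 2.5)), (43.0, 0.5, (-1.5, 1.2))]

print(moving_peaks((3.0, 3.0), peaks_t0))  # 50.0: on top of the best peak
print(moving_peaks((3.0, 3.0), peaks_t1))  # lower: the peak has moved away
```

An algorithm for this setting must not only locate the maximum of `peaks_t0` but also re-acquire it after the switch to `peaks_t1`.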

The main approaches to deal with DOPs can be classified into the following five groups (Jin & Branke, 2005):

  1. Generated diversity after a change: When a change in the environment is detected, explicit actions are taken to increase diversity and to facilitate the shift to the new optimum.

  2. Maintained diversity throughout the run: Convergence is avoided at all times, in the hope that a spread-out population can adapt to changes more efficiently.

  3. Memory-based approaches: These methods are supplied with a memory that allows them to recall useful information from the past. In practice, they store good solutions in order to reuse them when a change is detected.

  4. Multipopulation approaches: Dividing the population into several subpopulations, distributed over different optima, allows multiple optima to be tracked simultaneously and increases the probability of finding new ones.

  5. Future prediction: Recently, another kind of method, which tries to predict future changes, has attracted much attention (Rossi, Abderrahim, & Diaz, 2008; Rossi, Barrientos, & Cerro, 2007; Simoes & Costa, 2008). This approach is based on the fact that, in a real problem, changes can follow some pattern that could be learned.
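The memory-based idea (group 3) and the change detection it relies on can be sketched as follows; `detect_change`, `OptimaMemory`, and all parameter names are illustrative, not the actual components of the paper's algorithm:

```python
def detect_change(f, sentinels, cached_values, eps=1e-9):
    """Re-evaluate a few sentinel solutions; if any objective value
    differs from its cached value, the environment has changed."""
    return any(abs(f(s) - v) > eps for s, v in zip(sentinels, cached_values))

class OptimaMemory:
    """Bounded archive of the best solutions found so far (maximization)."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.archive = []  # list of (value, solution) pairs, best first

    def store(self, x, value):
        self.archive.append((value, x))
        self.archive.sort(reverse=True)
        del self.archive[self.capacity:]  # keep only the best entries

    def reseed(self, f):
        """After a change, re-evaluate stored optima in the new landscape
        and return the best of them as a restart point."""
        if not self.archive:
            return None
        return max(self.archive, key=lambda entry: f(entry[1]))[1]
```

A search loop would call `detect_change` each iteration and, on a change, restart its local searches from `memory.reseed(f)` instead of from scratch.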

We focus on population-based metaheuristics, which are global search algorithms, generally bio-inspired. They can be roughly classified into four categories: evolutionary algorithms (EAs), particle swarm optimization (PSO), ant colony optimization (ACO) and hybrid methods. We now describe the dynamic metaheuristics that have been proposed in the literature in each of these categories.
