Load Balancing for the Dynamic Distributed Double Guided Genetic Algorithm for MAX-CSPs

Sadok Bouamama, Khaled Ghedira, Nisrine Zaier
Copyright: © 2010 | Pages: 19
DOI: 10.4018/jalr.2010100105

Abstract

The Dynamic Distributed Double Guided Genetic Algorithm (D3G2A) deals with Maximal Constraint Satisfaction Problems (Max-CSPs). The approach consists in creating agents that cooperate to solve problems. This paper aims to improve the D3G2A by balancing agent loads in this distributed approach. The first proposed approach redistributes the load of Species agents more equally in order to improve CPU time. This improvement not only reduces the number of Species agents but also decreases the communication cost among agents. In this regard, a sub-population is composed of chromosomes whose numbers of violated constraints fall in the same interval. Secondly, another proposed approach redistributes the workload. This improvement reduces the number of inactive Species agents and results in balanced workloads. In fact, by analogy with social animal flocks, Species agents cooperate to complete all tasks in a reduced CPU time. Several comparisons are made between the new approaches and the previous version of the D3G2A, and the results are promising.

Introduction

The constraint satisfaction problem (CSP) formalism allows formulating and solving many problems in artificial intelligence and operations research, such as scheduling, resource allocation, frequency assignment, and vehicle routing.

A CSP is a triple (X, D, C) where:

  • X is a finite set of variables {x1, x2, …, xn}.

  • D = {Dx1, Dx2, …, Dxn} is the set of domains of possible values of the variables in X.

  • C is a finite set of constraints over the variables in X.
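As a concrete illustration, the triple (X, D, C) can be encoded directly in a few lines. The tiny graph-coloring instance below is purely illustrative, not one of the paper's benchmarks:

```python
# A minimal sketch of the CSP triple (X, D, C): a 3-variable
# graph-coloring instance with binary domains (illustrative only).
X = ["x1", "x2", "x3"]                                  # variables
D = {"x1": {0, 1}, "x2": {0, 1}, "x3": {0, 1}}          # domains
# Constraints as predicates over pairs of variables: here, inequality.
C = [("x1", "x2", lambda a, b: a != b),
     ("x2", "x3", lambda a, b: a != b),
     ("x1", "x3", lambda a, b: a != b)]

def satisfied(assignment, constraints):
    """Count how many constraints a full instantiation satisfies."""
    return sum(check(assignment[u], assignment[v])
               for u, v, check in constraints)

print(satisfied({"x1": 0, "x2": 1, "x3": 0}, C))  # 2 of the 3 constraints hold
```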

To solve a CSP, we need an instantiation of all variables with values from their respective domains. To be a solution, this instantiation must satisfy all the constraints. Finding a CSP solution is often a hard task: for some real-life problems, a solution is either too expensive to obtain or even nonexistent. In addition, in some cases we need to express preferences and the evolution of constraints over time. For such problems, we look for an instantiation that maximizes the number of satisfied constraints. This CSP extension, called Max-CSP, is the topic of this paper.
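On an over-constrained toy instance (a 3-clique of "not equal" constraints with only two values, an assumed example), the Max-CSP objective amounts to minimizing the number of violated constraints; no assignment satisfies all three:

```python
from itertools import product

# Exhaustive Max-CSP solving on a tiny over-constrained instance:
# three mutually "not equal" variables sharing a binary domain.
X = ["x1", "x2", "x3"]
D = {x: [0, 1] for x in X}
C = [("x1", "x2"), ("x2", "x3"), ("x1", "x3")]   # all pairwise "not equal"

def violations(assignment):
    """Number of constraints the instantiation violates."""
    return sum(assignment[u] == assignment[v] for u, v in C)

# Enumerate every full instantiation and keep the best one.
best = min((dict(zip(X, vals)) for vals in product(*(D[x] for x in X))),
           key=violations)
print(best, "violates", violations(best), "constraint(s)")  # at best 1 violation
```

Enumeration works only for toy sizes; the combinatorial explosion discussed next is exactly why incomplete methods such as genetic algorithms are used instead.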

Max-CSPs have been dealt with by complete or incomplete methods. Complete methods such as the forward checking algorithm (Freuder & Wallace, 1992) and branch and bound algorithms (Meseguer, 1997; Wallace, 1996) are able to reach an optimal solution. Nevertheless, combinatorial explosion is an obstacle faced by these systematic search methods: to guarantee that the solution found is optimal, systematic search algorithms may need to exhaust the entire problem space (Lau & Tsang, 2001).

Incomplete methods, in contrast, can neither guarantee that a solution will be found nor prove that one does not exist; they forgo completeness for efficiency. Several works, such as (Langley, 1992), demonstrated on large problems that what complete search algorithms fail to solve, incomplete ones efficiently conquer.

In fact, incomplete methods such as simulated annealing, genetic algorithms, and tabu search allow escaping from local optima. Moreover, another incomplete but distributed method, Distributed Simulated Annealing (DSA), has been efficiently applied to static Max-CSPs, dynamic Max-CSPs (Ghédira, 1994), and the resource allocation problem (Bouamama & Ghedira, 2006); distributing the centralized SA improved solution quality. The Genetic Algorithm (GA) is another well-known incomplete method, frequently used for hard optimization problems (Tsang, 1993) such as multiprocessor scheduling (Bouamama & Ghedira, 2006). Similarly to DSA, the Centralized Genetic Algorithm (CGA), known to be expensive, was also distributed to obtain better performance. The result was the Distributed Guided Genetic Algorithm (DGGA) (Jlifi & Ghédira, 2003). The DGGA was then enhanced with a new parameter called the guidance operator, which allowed not only diversification but also escaping from local optima; the result is referred to as the Distributed Double Guided Genetic Algorithm (D2G2A) (Bouamama & Ghedira, 2008). Finally, a further improvement added a dynamic aspect, yielding the D3G2A (Bouamama & Ghedira, 2006; Zaier, Bouamama, & Ghédira, 2007).
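To give a rough feel for the kind of evolutionary search these methods build on, the sketch below is a plain generational GA minimizing constraint violations on a small random instance. It is not the D3G2A itself; the population size, mutation rate, and tournament selection are all assumptions made for illustration:

```python
import random

random.seed(0)  # deterministic run for reproducibility

# A random binary "not equal" CSP over 12 three-valued variables
# (hypothetical instance, not one of the paper's benchmarks).
N_VARS, DOMAIN = 12, [0, 1, 2]
C = [(i, j) for i in range(N_VARS) for j in range(i + 1, N_VARS)
     if random.random() < 0.4]

def violations(chrom):
    """Fitness to minimize: number of violated constraints."""
    return sum(chrom[u] == chrom[v] for u, v in C)

def tournament(pop):
    """Pick the best of 3 random chromosomes (assumed selection scheme)."""
    return min(random.sample(pop, 3), key=violations)

pop = [[random.choice(DOMAIN) for _ in range(N_VARS)] for _ in range(40)]
for _ in range(100):                               # generations
    nxt = []
    for _ in range(len(pop)):
        a, b = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_VARS)          # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                  # mutation
            child[random.randrange(N_VARS)] = random.choice(DOMAIN)
        nxt.append(child)
    pop = nxt

best = min(pop, key=violations)
print("best chromosome violates", violations(best), "of", len(C), "constraints")
```

The distributed variants surveyed above partition such a population among cooperating agents and add guidance to the genetic operators, rather than running a single centralized loop like this one.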
