Introduction
Combinatorial optimization holds a substantial place in computer science, lying at the intersection of operations research, theoretical computer science and mathematical programming. It aims at finding the best solution among a finite number of choices by minimizing or maximizing an objective function, with or without constraints. Combinatorial optimization problems are often complex and NP-hard: they are characterized by an exponential number of combinations to explore, which requires considerable time to find a good solution.
The need to quickly find reasonable solutions to many of these problems has led to the development of several approximation algorithms, including metaheuristics. Metaheuristics are high-level strategies that use different approaches to explore search spaces and find near-optimal solutions in reasonable time. Some metaheuristics are trajectory-based methods: they work with a single search point (one solution at a time) and adapt it stochastically to obtain a new, hopefully better, solution. Other metaheuristics are population-based methods: they operate on a population of solutions in parallel, and can therefore simultaneously explore different areas of the search space using operators (such as the crossover and mutation operators in genetic algorithms) or roles (as in ant colony optimization, bee colony optimization, particle swarm optimization, etc.). Population-based algorithms, also called evolutionary algorithms (EAs), have provided more efficient search for complex engineering problems.
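The trajectory-based scheme described above can be sketched as a minimal stochastic hill climber. The objective function (a toy "count the 1-bits" problem) and all parameter values here are illustrative assumptions, not taken from the article:

```python
import random

def onemax(bits):
    """Toy objective to maximize: the number of 1-bits in the solution."""
    return sum(bits)

def hill_climb(n_bits=20, iters=500, seed=0):
    """Trajectory-based search: adapt a single solution stochastically,
    keeping a neighbour move only when it improves the objective."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    best = onemax(current)
    for _ in range(iters):
        i = rng.randrange(n_bits)
        current[i] ^= 1            # stochastic neighbour move: flip one bit
        score = onemax(current)
        if score >= best:
            best = score           # accept the improving move
        else:
            current[i] ^= 1        # reject: undo the flip
    return current, best
```

A population-based method would instead maintain many such solutions at once and recombine them, which is what lets it probe several regions of the search space simultaneously.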
The two main components of any metaheuristic are diversification (exploration) and intensification (exploitation) (Blum & Roli, 2003). Diversification means exploring diverse regions of the search space with the aim of finding promising regions. Intensification means exploiting the region of the current solution intensively with the aim of finding an improved solution. The first component helps the algorithm avoid premature convergence, while the second helps it reach optimized solutions. Striking a good tradeoff between these two components is a major challenge in the design of useful metaheuristics.
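One classic way to tune this tradeoff, shown here as an illustrative sketch (the Metropolis acceptance rule from simulated annealing, not a method from the article), is a temperature parameter: a high temperature accepts many worsening moves (diversification), while a low temperature accepts almost none (intensification):

```python
import math
import random

def accept(delta, temperature, rng):
    """Metropolis rule for a maximization problem: improving moves
    (delta >= 0) are always accepted; worsening moves are accepted
    with probability exp(delta / temperature)."""
    if delta >= 0:
        return True
    return rng.random() < math.exp(delta / temperature)

rng = random.Random(1)
# A worsening move (delta = -1) offered 10,000 times at two temperatures:
hot = sum(accept(-1.0, 10.0, rng) for _ in range(10_000))   # exploratory phase
cold = sum(accept(-1.0, 0.1, rng) for _ in range(10_000))   # exploitative phase
```

Cooling the temperature over time moves the search smoothly from exploration toward exploitation.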
The representation of decision variables in an optimization algorithm depends on the search space defined by the problem. Optimization problems can broadly be divided into two main categories: a problem is continuous if its decision variables take real values, and discrete if they take discrete (typically integer) values. Various real-world problems have binary-valued solutions or can be formulated as such, e.g., design debugging, scheduling, bioinformatics, capital budgeting, etc. Moreover, researchers often cast continuous and discrete problems in binary terms so that they can be solved more easily. Binary optimization problems can be unconstrained or constrained.
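Capital budgeting is a convenient example of a constrained binary formulation: each bit decides whether a project is funded. The instance data (profits, costs, budget) and the death-penalty constraint handling below are hypothetical choices made for illustration:

```python
def portfolio_value(bits, profits, costs, budget):
    """Binary decision vector: bits[i] == 1 selects project i.
    A selection whose total cost exceeds the budget is treated as
    infeasible and assigned value 0 (a simple death penalty)."""
    cost = sum(c for b, c in zip(bits, costs) if b)
    if cost > budget:
        return 0
    return sum(p for b, p in zip(bits, profits) if b)

# Hypothetical instance: 4 candidate projects, budget of 10.
profits = [6, 5, 8, 3]
costs = [5, 4, 6, 2]
infeasible = portfolio_value([1, 0, 1, 0], profits, costs, budget=10)  # cost 11 > 10
feasible = portfolio_value([1, 1, 0, 0], profits, costs, budget=10)    # cost 9, profit 11
```

Any binary metaheuristic can then search over such bit vectors, with the constraint folded into the objective.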
Many efficient population-based algorithms have been developed to solve binary optimization problems, among them genetic algorithms (GAs) and evolutionary programming (EP). A large set of efficient optimization algorithms was originally created for solving continuous optimization problems; these approaches have then been adapted to solve binary problems. For example, Kennedy and Eberhart modified the original particle swarm optimization (PSO) algorithm (Kennedy & Eberhart, 1995) to solve binary problems (Kennedy & Eberhart, 1997). Similarly, the artificial bee colony (ABC) algorithm (Yang, 2005), the harmony search (HS) algorithm (Yang, 2009) and the cuckoo search (CS) algorithm (Yang & Deb, 2009) also have binary versions (Gherboudj, Layeb & Chikhi, 2012; Pamparà & Engelbrecht, 2011). Unfortunately, such adaptations can change the behavior of the algorithm and potentially degrade the performance of the original algorithm (Pamparà, 2012).
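A common device behind such binary adaptations, used in binary PSO (Kennedy & Eberhart, 1997), is a sigmoid transfer function: the real-valued velocity of a particle is mapped to the probability that the corresponding bit becomes 1. The sketch below shows only this transfer step, with illustrative velocity values, not a full PSO implementation:

```python
import math
import random

def binarize_velocity(velocity, rng):
    """Sigmoid transfer: map a real-valued PSO velocity to a bit.
    The sigmoid gives the probability that the bit is set to 1."""
    prob = 1.0 / (1.0 + math.exp(-velocity))
    return 1 if rng.random() < prob else 0

rng = random.Random(42)
# Strongly positive velocities almost always yield 1-bits,
# strongly negative ones almost always yield 0-bits.
ones = sum(binarize_velocity(6.0, rng) for _ in range(1000))
zeros_as_ones = sum(binarize_velocity(-6.0, rng) for _ in range(1000))
```

Because position updates become probabilistic bit flips rather than additive moves, the search dynamics differ from continuous PSO, which is one concrete way such an adaptation can change the algorithm's behavior.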