A Novel Evolutionary Algorithm for Multidimensional Knapsack Problem

Sara Sabba, Salim Chikhi
DOI: 10.4018/ijoris.2015040101

Abstract

Binary optimization problems are in most cases NP-hard problems that require satisfying an objective function with or without constraints. Many optimization problems can be formulated in binary form, where they can be solved more easily. The optimization literature offers a large number of approaches for finding solutions to hard binary problems. However, most population-based algorithms have a strong tendency to become trapped in local optima, particularly when solving complex optimization problems. In this paper, the authors introduce a new, efficient population-based technique for binary optimization problems, called EABOP. The proposed algorithm provides an effective search through a newly proposed binary mutation operator. The performance of the approach was tested on hard instances of the multidimensional knapsack problem. The results show that the new algorithm is able to quickly obtain high-quality solutions for most hard instances of the problem.

Introduction

Combinatorial optimization holds a substantial place in computer science. It lies at the intersection of operations research, theoretical computing and mathematical programming. Combinatorial optimization aims at finding the best solution among a finite number of choices by minimizing or maximizing an objective function with or without constraints. Combinatorial optimization problems are often complex and NP-hard; they are characterized by an exponential number of combinations to be explored, which requires considerable time to find a good solution.

The need to quickly find reasonable solutions to many of these problems has led to the development of several approximation algorithms, which include metaheuristics. Metaheuristics are high-level strategies that use different approaches for exploring search spaces and finding near-optimal solutions in a reasonable time. Some metaheuristics are trajectory-based methods: they work with a single search point (one solution at a time) and adapt it stochastically to find a new, hopefully better solution. Other metaheuristics are population-based methods: they operate on a population of solutions in parallel, and can therefore simultaneously explore different areas of the search space using operators (such as the crossover and mutation operators of genetic algorithms) or roles (as in ant colony optimization, bee colony optimization, particle swarm optimization, etc.). Population-based algorithms, also called evolutionary algorithms (EAs), have provided more efficient search for complex engineering problems.
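To make the population-based idea concrete, the following Python sketch shows a minimal genetic algorithm on bitstrings with tournament selection, one-point crossover and bit-flip mutation. It is a generic illustration of how such methods evolve a population, not the EABOP operator proposed in this paper; the fitness function, population size and rates are placeholder assumptions.

import random

def evolve(fitness, n_bits, pop_size=50, generations=200,
           crossover_rate=0.9, mutation_rate=0.01):
    # Generic population-based search on bitstrings.
    # `fitness` maps a list of 0/1 values to a number to be maximized.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Binary tournament selection favours fitter parents.
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # One-point crossover recombines the two parents.
            if random.random() < crossover_rate:
                cut = random.randint(1, n_bits - 1)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Bit-flip mutation injects diversity into the offspring.
            child = [b ^ 1 if random.random() < mutation_rate else b for b in child]
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best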

The main components of any metaheuristic are diversification (exploration) and intensification (exploitation) (Blum & Roli, 2003). Diversification means exploring diverse regions of the search space with the aim of finding promising regions. Intensification means intensively exploiting the region around the current solution with the aim of finding a new, improved solution. The first component helps the algorithm avoid premature convergence, while the second helps it obtain improved solutions. Striking a good tradeoff between these two components is a major challenge in the design of useful metaheuristics.
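As a rough illustration of the two components, the hypothetical Python helpers below contrast a diversification move (random bit flips that jump to new regions) with an intensification move (a greedy single-bit improvement around the current solution). The names and the flip rate are illustrative assumptions, not operators taken from the paper.

import random

def diversify(solution, rate=0.1):
    # Exploration: random bit flips move the search toward new regions.
    return [b ^ 1 if random.random() < rate else b for b in solution]

def intensify(solution, fitness):
    # Exploitation: greedily accept any single-bit flip that improves fitness.
    best = solution[:]
    for i in range(len(solution)):
        neighbor = best[:]
        neighbor[i] ^= 1
        if fitness(neighbor) > fitness(best):
            best = neighbor
    return best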

The representation of decision variables in optimization algorithms depends on the search space defined by the problem. Optimization problems can broadly be divided into two main categories: a problem is continuous if its decision variables take real values, and discrete if its decision variables take discrete (typically integer) values. In fact, many real-world problems have binary-valued solutions or can be formulated that way, e.g., design debugging, scheduling, bioinformatics, capital budgeting, etc. Moreover, researchers often cast continuous and discrete problems in binary terms, where they can be solved more easily. Binary optimization problems can be unconstrained or constrained.
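A standard example of a constrained binary problem, and the benchmark used later in this paper, is the 0-1 multidimensional knapsack problem (MKP). A common formulation is:

\begin{align*}
\text{maximize}\quad   & \sum_{j=1}^{n} p_j x_j \\
\text{subject to}\quad & \sum_{j=1}^{n} w_{ij} x_j \le c_i, \qquad i = 1,\dots,m, \\
                       & x_j \in \{0,1\}, \qquad j = 1,\dots,n,
\end{align*}

where p_j is the profit of item j, w_{ij} its consumption of resource i, and c_i the capacity of resource i. The notation here follows the usual MKP convention rather than symbols defined in the article.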

Many efficient population-based algorithms have been developed to solve binary optimization problems, among them genetic algorithms (GAs) and evolutionary programming (EP). In addition, a large set of efficient optimization algorithms was created for solving continuous optimization problems; these approaches have then been adapted to solve binary problems. For example, Kennedy and Eberhart modified the original particle swarm optimization (PSO) algorithm (Kennedy & Eberhart, 1995) to solve binary problems (Kennedy & Eberhart, 1997). Similarly, the artificial bee colony (ABC) algorithm (Yang, 2005), the harmony search (HS) algorithm (Yang, 2009) and the cuckoo search (CS) algorithm (Yang & Deb, 2009) also have binary versions (Gherboudj, Layeb & Chikhi, 2012; Pamparà & Engelbrecht, 2011). Unfortunately, such adaptations can change the behavior of the algorithm, potentially degrading the performance of the original algorithm (Pamparà, 2012).
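For instance, the binary PSO of Kennedy and Eberhart (1997) keeps real-valued velocities but passes each component through a sigmoid to obtain the probability that the corresponding bit equals 1. The Python sketch below shows one particle update in that spirit; the parameter values and the clamping bound v_max are illustrative assumptions.

import math
import random

def binary_pso_step(position, velocity, personal_best, global_best,
                    w=0.7, c1=1.5, c2=1.5, v_max=4.0):
    # One particle update in a binary PSO: velocities stay real-valued,
    # and a sigmoid turns each velocity component into the probability
    # that the corresponding bit is set to 1.
    new_velocity, new_position = [], []
    for x, v, pb, gb in zip(position, velocity, personal_best, global_best):
        nv = w * v + c1 * random.random() * (pb - x) + c2 * random.random() * (gb - x)
        nv = max(-v_max, min(v_max, nv))          # clamp to limit sigmoid saturation
        prob_one = 1.0 / (1.0 + math.exp(-nv))    # sigmoid transfer function
        new_velocity.append(nv)
        new_position.append(1 if random.random() < prob_one else 0)
    return new_position, new_velocity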
