Mathematical Optimization by Using Particle Swarm Optimization, Genetic Algorithm, and Differential Evolution and Its Similarities

Shailendra Aote, Mukesh M. Raghuwanshi
DOI: 10.4018/978-1-7998-8048-6.ch001

Abstract

Various methods in different domains have been proposed to solve optimization problems, and evolutionary computing (EC) is one such family of methods. Widely used EC techniques include Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Differential Evolution (DE). These techniques differ in their outward structure, but their inner working is essentially the same: different names and formulae are attached to the individual steps, yet ultimately they all do the same thing. Here we try to identify the similarities among these techniques and describe the working structure of each step. Every step is illustrated with a proper example and MATLAB code for better understanding. We begin the discussion with an introduction to optimization and the solution of optimization problems by PSO, GA, and DE, and finally give a brief comparison of the three.

1. Introduction

Problem solving is one of the most complicated intellectual activities of the human brain. It deals with finding a solution in the presence of constraints. For some problems, computing an exact solution may simply be infeasible, especially when the dimensionality is large; for such problems, a solution close to the exact value may be deemed good enough. The knapsack problem and linear programming are examples of optimization problems. Another example is arranging the transistors on a computer chip so that they occupy the smallest area and use as few components as possible. Optimization arises in many settings, such as scheduling, resource allocation, decision making, and industrial planning, and optimization techniques cover large application areas in business, industry, engineering, and computer science. Due to their simplicity, small number of parameters, and fast convergence, the algorithms discussed here have been used to solve many real-world problems, such as economic dispatch (Selvakumar & Thanushkodi, 2007), scheduling problems (Pongchairerks, 2009), textual entailment recognition (Mehdad & Magnini, 2009), term extraction (Mehdad & Magnini, 2009), intelligent web caching (Syafrullah & Salim, 2010), and text feature selection (Sulaiman, Shamsuddin, Forkan, & Abraham, 2008).

1.1. Basic Concepts

Optimization means solving problems in which one tries to find the minimum or maximum value of a given function by systematically choosing the values of real or integer variables from a given set. An optimization problem can be stated as follows:

  • Given: a function f that maps a set X to the real numbers.

  • Sought: an element xi in X such that f(xi) ≤ f(x) for all x in X (minimization), or such that f(xi) ≥ f(x) for all x in X (maximization).

Typically, X is a subset of a Euclidean space R, and every member of X has to satisfy a set of constraints, given as equalities or inequalities. The domain X of f is called the search space, and the elements of X are called feasible solutions. The function f is called the objective function. An optimal solution is a feasible solution that minimizes (or maximizes) the value of the objective function.
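The definition above can be made concrete with a minimal sketch (a Python illustration of our own; the chapter's examples use MATLAB, and the objective function and constraint below are assumed examples, not taken from the text): a feasible set X is built from a domain and a constraint, and an optimal solution is the feasible point minimizing the objective function f.

```python
# Minimal sketch (assumed example): an objective function f, a feasible set X
# defined by a domain plus a constraint, and a brute-force search for the
# optimal solution x*.

def f(x):
    return (x - 2.0) ** 2 + 1.0  # objective function, minimized at x = 2

# Feasible set X: a grid over [-5, 5] restricted by the constraint x >= 0
X = [i / 1000 for i in range(-5000, 5001) if i / 1000 >= 0.0]

# An optimal solution x* is a feasible solution that minimizes f
x_star = min(X, key=f)
print(x_star, f(x_star))  # 2.0 1.0
```

Grid search is used here only because it makes the definition transparent; the point of PSO, GA, and DE is precisely to avoid such exhaustive enumeration when the search space is large.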

In the case of a single objective function f, the optimum is either its maximum or its minimum, depending on the nature of the problem, and it must satisfy the constraints. Consider a manufacturing plant: we assign incoming orders to machines so as to minimize the time needed to complete them; on the other hand, we arrange the employment of staff, the placement of commercials, and the purchase of raw materials so as to maximize profit. In global optimization, problems are usually stated as minimization problems. A global optimum is the best value over the whole domain X, while a local optimum is an optimum over only a subset of X. A minimization method therefore tries to reach a global minimum of the search space, but along the way it may become trapped in a local minimum, which diverts the search from a better solution. This difficulty is referred to as the local-minima problem in Artificial Intelligence (AI); AI also identifies related obstacles in optimization, such as plateaus and ridges.
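The difference between a local and a global optimum can be shown with a small sketch (our own Python illustration; the two-basin function is an assumed example, not from the chapter):

```python
# Assumed example: a function with two basins, a global minimum near x = -1
# and a merely local minimum near x = +1.

def f(x):
    return (x * x - 1) ** 2 + 0.3 * x

grid = [i / 1000 for i in range(-2000, 2001)]  # search space X = [-2, 2]

x_global = min(grid, key=f)                       # best over the whole of X
x_local = min((x for x in grid if x > 0), key=f)  # best over a subset of X only

# The local optimum is worse than the global one, yet a search that starts
# near x = +1 and only ever moves downhill would stop there.
print(x_global < 0 < x_local, f(x_global) < f(x_local))  # True True
```

This is exactly the trap described above: a purely greedy search initialized in the right-hand basin reports the local minimum and never reaches the global one.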

Optimization problems are mainly classified into single-objective optimization (SOO) and multiobjective optimization (MOO); here our aim is to deal with single-objective problems. SOO problems are further classified as unimodal or multimodal and, independently, as single-dimensional or multidimensional. The difficulty of solving a problem increases from unimodal to multimodal and from single-dimensional to multidimensional; if a problem is both multimodal and multidimensional, the solution found is often far from the true optimum.
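The unimodal/multimodal distinction can be illustrated with two standard benchmark functions (a hedged Python sketch of our own; the sphere and Rastrigin functions are common test problems in the EC literature, not examples taken from this chapter):

```python
import math

# Assumed illustration: the sphere function has a single minimum (unimodal),
# while the Rastrigin function has many local minima (multimodal), which is
# what makes it harder to optimize.

def sphere(x):
    return x * x

def rastrigin(x):
    return 10 + x * x - 10 * math.cos(2 * math.pi * x)

def count_local_minima(f, grid):
    """Count interior grid points that lie lower than both neighbours."""
    vals = [f(x) for x in grid]
    return sum(1 for i in range(1, len(vals) - 1)
               if vals[i] < vals[i - 1] and vals[i] < vals[i + 1])

grid = [i / 100 for i in range(-512, 513)]  # typical domain [-5.12, 5.12]
print(count_local_minima(sphere, grid))     # 1
print(count_local_minima(rastrigin, grid))  # many (roughly one per integer basin)
```

Counting basins this way in one dimension is cheap; in many dimensions the number of basins grows combinatorially, which is why multimodal multidimensional problems are the hardest class noted above.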
