A Hybrid Optimization Algorithm for Single and Multi-Objective Optimization Problems


Rizk M. Rizk-Allah, Aboul Ella Hassanien
Copyright: © 2017 | Pages: 31
DOI: 10.4018/978-1-5225-2229-4.ch021

Abstract

This chapter presents a hybrid optimization algorithm, namely FOA-FA, for solving single- and multi-objective optimization problems. The proposed algorithm integrates the benefits of the fruit fly optimization algorithm (FOA) and the firefly algorithm (FA) to avoid entrapment in local optima and premature convergence of the population. FOA drives the search toward the optimum solution, while FA is used to accelerate the optimum-seeking process and speed up convergence to the global solution. Further, the multi-objective optimization problem is scalarized to a single-objective problem by the weighting method, and the proposed algorithm is then applied to derive the set of non-inferior (Pareto-optimal) solutions rather than a single optimal solution. Finally, the proposed FOA-FA algorithm is tested on several benchmark problems, both single- and multi-objective, and on two engineering applications. The numerical comparisons reveal the robustness and effectiveness of the proposed algorithm.
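The weighting method mentioned in the abstract can be illustrated with a short sketch: sweeping the weight vector over a scalarized objective traces out an approximation of the non-inferior set. The function names (`weighted_sum`, `pareto_sweep`) and the toy bi-objective problem below are illustrative assumptions, not the chapter's benchmarks.

```python
def weighted_sum(objectives, weights):
    """Scalarize a multi-objective vector into a single objective
    via the weighting method (weights assumed to sum to 1)."""
    return sum(w * f for w, f in zip(weights, objectives))

def pareto_sweep(evaluate, candidates, weight_grid):
    """For each weight vector, keep the candidate minimizing the
    scalarized objective; the collected minimizers approximate
    the non-inferior (Pareto) set."""
    front = []
    for weights in weight_grid:
        best = min(candidates, key=lambda x: weighted_sum(evaluate(x), weights))
        front.append(best)
    return front

# Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2,
# whose Pareto-optimal set is the interval [0, 2].
evaluate = lambda x: (x**2, (x - 2) ** 2)
candidates = [i / 50 for i in range(101)]              # grid on [0, 2]
weight_grid = [(w / 10, 1 - w / 10) for w in range(11)]
front = pareto_sweep(evaluate, candidates, weight_grid)
```

Each weight vector yields one non-inferior point; varying the weights is what produces a set of trade-off solutions instead of a single optimum.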
Chapter Preview

Introduction

Nonlinear programming problems form an important part of engineering and frequently appear in real-world applications. Traditionally, optimization techniques have been classified into two classes: direct search and indirect search techniques. Direct search methods use only objective function and constraint values to guide the search, while indirect search methods use the first- and/or second-order derivatives of the objective function and/or constraints to guide the search process. Because derivative information is not used, direct search methods converge slowly to an optimal solution, whereas indirect search methods converge faster. However, for efficient implementation of these traditional techniques, the variables and objective function need to be continuous. Furthermore, the success of these methods depends on the quality of the starting point. Many optimization problems involve discontinuous, highly multimodal, and noisy search spaces. As a result, recent years have witnessed very rapid growth in metaheuristic algorithms for handling complex nonlinear optimization problems.
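As a minimal illustration of a direct search method that guides itself by objective values alone, with no derivatives, the following compass-search sketch probes each coordinate direction and halves its step when no probe improves. The function name and the quadratic test function are illustrative assumptions, not taken from the chapter.

```python
def compass_search(f, x0, step=1.0, tol=1e-6):
    """Minimal direct search: probe each coordinate direction and
    move whenever the objective improves; halve the step when no
    probe improves. Uses only objective values -- no derivatives."""
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    improved = True
        if not improved:
            step *= 0.5
    return x

# Illustrative smooth test function with minimum at (1, -2).
f = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2
xmin = compass_search(f, [0.0, 0.0])
```

The slow step-halving loop is exactly the convergence cost paid for never evaluating a derivative, which is the trade-off the paragraph above describes.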

As an alternative to conventional optimization techniques, metaheuristic optimization techniques have been used to obtain global or near-global optimum solutions. They have many advantages over traditional nonlinear programming techniques, of which the following three are the most important: (i) they can be used to solve a wide range of optimization problems, including those with discontinuous, non-differentiable, and non-convex objective functions and/or constraints; (ii) they do not use gradient information about the cost function or the constraints; and (iii) they have the ability to escape from local optima. Many metaheuristic algorithms, such as the genetic algorithm (GA) (Deep, 2008), particle swarm optimization (PSO) (He, 2007; Xu, 2015; Lu, 2015; Jagatheesan, 2015), differential evolution (Becerra, 2006; Draa, 2015), the joint operations algorithm (Sun, 2016), ant colony optimization (Rizk-Allah, 2014), quantum particle swarm optimization (Soliman, 2016), bee colony swarm optimization (Hassanien, 2015), the flower pollination search algorithm (Emary, 2016), grey wolf optimization (Emary, 2016), ant lion optimization (Yamany et al., 2015), and the bat algorithm (Fouad et al., 2016), have shown their efficacy in solving computationally intensive problems.
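The firefly attraction step that the proposed hybrid borrows from FA can be sketched with the standard FA update rule: attractiveness decays exponentially with squared distance to a brighter firefly, plus a small random perturbation. The helper name `firefly_move` and the parameter defaults are illustrative assumptions rather than the chapter's exact settings.

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=random):
    """One firefly-style move: the firefly at xi is attracted toward
    a brighter firefly at xj with attractiveness beta0*exp(-gamma*r^2),
    plus an alpha-scaled random perturbation per dimension."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(xi, xj)]
```

Note that the move is gradient-free (advantage (ii) above), and the random term lets a firefly step away from its current basin, which is what gives the population its chance to escape local optima.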
