Enhanced Chaotic Grey Wolf Optimizer for Real-World Optimization Problems: A Comparative Study

Ali Asghar Heidari, Rahim Ali Abbaspour
DOI: 10.4018/978-1-5225-2990-3.ch030

Abstract

The grey wolf optimizer (GWO) is a recent population-based optimizer inspired by the hunting procedure and leadership hierarchy of grey wolves. In this chapter, a new enhanced grey wolf optimizer (EGWO) is proposed for tackling several real-world optimization problems. In the EGWO algorithm, a new chaotic operator is embedded in GWO that helps search agents move chaotically toward a randomly selected wolf. With this operator, EGWO can switch between chaotic and random exploration. To substantiate the efficiency of EGWO, 22 real-world test cases from the IEEE CEC 2011 competition are chosen. The performance of EGWO is compared with that of six standard optimizers. The Wilcoxon rank-sum statistical test is also conducted to verify the significance of the results. Moreover, the obtained results are compared with those of six advanced algorithms from CEC 2011. The evaluations reveal that the proposed EGWO obtains superior results compared to the well-known algorithms and outperforms several advanced variants of optimizers.
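To make the mechanism described in the abstract concrete, the following is a minimal sketch of a chaotic movement operator. It assumes a logistic map as the chaotic generator and a fixed 50/50 rule for switching between chaotic and random exploration; the names (chaotic_move, switch_prob) and both assumptions are illustrative and not taken from the chapter itself.

```python
import numpy as np

def logistic_map(x, mu=4.0):
    """One iteration of the logistic map, a common chaotic sequence generator."""
    return mu * x * (1.0 - x)

def chaotic_move(agent, wolves, chaos_state, switch_prob=0.5, rng=None):
    """Sketch of a chaotic exploration step toward a randomly selected wolf.

    With probability `switch_prob` the agent moves toward a randomly chosen
    wolf using a chaotic step size; otherwise it takes a random step around
    that wolf. The logistic map and the switching rule are assumptions for
    illustration, not the chapter's exact operator.
    """
    rng = rng or np.random.default_rng()
    chaos_state = logistic_map(chaos_state)       # advance the chaotic sequence
    target = wolves[rng.integers(len(wolves))]    # randomly selected wolf
    if rng.random() < switch_prob:
        # chaotic exploration: step toward the chosen wolf, scaled chaotically
        new_agent = agent + chaos_state * (target - agent)
    else:
        # random exploration: uniform perturbation around the chosen wolf
        new_agent = target + rng.uniform(-1.0, 1.0, size=agent.shape)
    return new_agent, chaos_state
```

The intent of such a switching scheme is that chaotic steps diversify the search trajectories while random steps preserve the stochastic exploration of the baseline algorithm.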
Chapter Preview

Introduction

Over the past ten years, several nature-inspired metaheuristic algorithms (MAs) have been developed based on different natural phenomena and philosophies (Nanda & Panda, 2014; Heidari, Ali Abbaspour, & Rezaee Jordehi, 2017b). These stochastic optimizers have been widely applied in different fields of science due to their exploration/exploitation capabilities, ease of use, convergence characteristics, and satisfactory optimality of the results (Mirjalili, 2016; Mirjalili, Mirjalili, & Hatamlou, 2016; Saremi, Mirjalili, & Mirjalili, 2015; L. Wang, Yang, & Orchard, 2016; Zhong, Li, & Zhong, 2016; Yalan Zhou et al., 2016). However, premature convergence and stagnation in local optima are problems seen in the majority of these metaheuristic algorithms (Guo, Shi, Chen, & Liang, 2017; Heidari, Mirvahabi, & Homayouni, 2015; S. Wang, Tian, Yu, & Lin, 2017; Yang & Deb, 2014). The search agents can cooperate locally, and they demonstrate emergent, self-organized motions that can lead the algorithm toward global convergence. These mobile agents can naturally exploit the solution space locally, supported by randomized operators. Such stochastic mechanisms can enrich the diversity of the search agents on a global scale, and thus a proper transition between locally concentrated exploitation and global exploration has a constructive effect on the quality of the found solutions. An optimizer with a better balance between exploration and exploitation tendencies is capable of outperforming other MAs. Any population-based optimizer has to strike a balance between these two mechanisms; otherwise, its efficacy can be restricted. Hence, researchers often work to alleviate the performance problems of MAs by modifying the operators of existing optimizers or devising new algorithms (Cai & Wang, 2015; Heidari & Delavar, 2016; Heidari, Kazemizade, & Abbaspour, 2015; Hu, Su, Yang, & Xiong, 2016; Mirjalili, 2016; Pickard, Carretero, & Bhavsar, 2016; Vafashoar & Meybodi, 2016).

The GWO technique is an efficient population-based MA proposed by Mirjalili, Mirjalili, and Lewis (2014). GWO is now a well-known optimizer with several valuable characteristics over prior swarm-based approaches, such as simplicity, flexibility, and sufficient capacity to avoid local optima. Moreover, its operations are very simple, and it requires no additional user-defined parameters. In addition, it shows satisfactory convergence behavior over complex search spaces. This method is effective at tackling various optimization problems, each of which can be treated as a black box.
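For reference, the following is a minimal sketch of the standard GWO position update of Mirjalili et al. (2014), in which each wolf moves toward candidate positions guided by the three best wolves (alpha, beta, delta) while the coefficient a decays from 2 to 0 over the iterations. It is a simplified illustration of the baseline algorithm, not the chapter's EGWO variant.

```python
import numpy as np

def gwo_step(positions, alpha, beta, delta, a, rng=None):
    """One standard GWO position update.

    Each wolf moves to the mean of three candidate positions computed from
    the alpha, beta, and delta leaders; `a` is decreased linearly from 2 to 0
    across iterations to shift the search from exploration to exploitation.
    """
    rng = rng or np.random.default_rng()
    new_positions = np.empty_like(positions)
    for i, x in enumerate(positions):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2.0 * a * r1 - a          # |A| > 1 favors exploration, |A| < 1 exploitation
            C = 2.0 * r2                  # random weight on the leader's influence
            D = np.abs(C * leader - x)    # distance to the leader
            candidates.append(leader - A * D)
        new_positions[i] = np.mean(candidates, axis=0)
    return new_positions
```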

The GWO algorithm can deliver efficient performance in comparison with other well-established optimizers. However, a concern is that GWO may still get trapped in local optima and suffer from premature convergence. The main reason behind these problems, not only for GWO but also for other MAs, is that they often fail to strike a fine balance between exploration and exploitation during the global and local search processes when dealing with different classes of optimization tasks. The question, then, is how this tradeoff can be improved.
