Cooperative Co-Evolution and MapReduce: A Review and New Insights for Large-Scale Optimisation

A. N. M. Bazlur Rashid, Tonmoy Choudhury
Copyright © 2021 | Pages: 34
DOI: 10.4018/IJITPM.2021010102

Abstract

Real-world large-scale optimisation problems often trap search algorithms in local optima because of their vast search spaces and complex objective functions; hence, traditional evolutionary algorithms (EAs) are not well suited to these problems. A distributed EA, such as a cooperative co-evolutionary algorithm (CCEA), can solve them efficiently: it decomposes a large-scale problem into smaller sub-problems and evolves them independently, and the population diversity it maintains helps to avoid local optima. In addition, MapReduce, a distributed programming model with open-source implementations, provides a ready-to-use, scalable, and fault-tolerant infrastructure for parallelising an algorithm through its map and reduce functions. A CCEA can therefore be distributed and executed in parallel using the MapReduce model to solve large-scale optimisation problems in less computing time, and the effectiveness of CCEA combined with MapReduce has been demonstrated in the literature. This article presents cooperative co-evolution, the MapReduce model, and associated techniques suitable for large-scale optimisation problems.
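To make the map-and-reduce parallelisation concrete, the following is a minimal illustrative sketch, not code from the article: a population's fitness evaluations are distributed in a map phase and aggregated in a reduce phase. Python's multiprocessing.Pool stands in for a MapReduce cluster, and the sphere objective, population size, and dimensionality are assumptions chosen only for illustration.

# Illustrative sketch only: emulates the MapReduce idea -- a map phase that
# evaluates candidate solutions in parallel and a reduce phase that aggregates
# the results -- using Python's multiprocessing module as a stand-in for a
# MapReduce cluster. The objective and all parameter values are assumptions.
import random
from multiprocessing import Pool


def sphere(x):
    """Assumed benchmark objective: sum of squares (minimisation)."""
    return sum(v * v for v in x)


def map_evaluate(individual):
    """Map task: emit (fitness, individual) for one candidate solution."""
    return sphere(individual), individual


def reduce_best(evaluated):
    """Reduce task: aggregate mapper outputs, keeping the best individual."""
    return min(evaluated, key=lambda pair: pair[0])


if __name__ == "__main__":
    random.seed(0)
    dimension, population_size = 50, 200
    population = [[random.uniform(-5, 5) for _ in range(dimension)]
                  for _ in range(population_size)]

    with Pool() as pool:  # map phase: parallel fitness evaluation
        evaluated = pool.map(map_evaluate, population)

    best_fitness, best_individual = reduce_best(evaluated)  # reduce phase
    print(f"best fitness in initial population: {best_fitness:.3f}")

On an actual MapReduce platform, the evaluation and aggregation steps would be written as mapper and reducer tasks, with the framework supplying distribution, scalability, and fault tolerance.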

Introduction

A vast number of real-world optimisation problems are classified as complex and large-scale because of characteristics such as high dimensionality, massive data volume and variety, non-linearity, or multi-modality (Shi et al., 2016). Examples of such optimisation problems include bio-medicine (e.g., genomic and post-genomic studies), health sciences (Malik et al., 2018; Saliha, 2018; Haried et al., 2019; Strang and Sun, 2019), scheduling and planning (e.g., production and manufacturing systems), engineering design (e.g., lamination and casing optimisation for electric motors), smart factories (e.g., cyber-physical systems), logistics and transportation (e.g., routing, packing, and cutting problems), energy-aware systems for smart grids, cities, and homes (e.g., reduction of emissions from coal-fired power plants), and large-scale coal supply chains (Liu et al., 2019; Ahsan and Rahman, 2018). The availability of large-scale problems and their data offers the research community new opportunities to discover new insights. Hence, knowledge management, knowledge discovery, and decision-making from these large-scale data require appropriate modelling and techniques (Wang and Meng, 2018; Farsäter and Olander, 2019; Hadad et al., 2013; Ahrari and Haghani, 2019).

Evolutionary algorithms (EAs) are a fundamental choice for tackling these optimisation problems. Within the literature, EA approaches fall into two categories: non-decomposition and decomposition. The performance of EAs deteriorates when solving large-scale optimisation problems because of the problems' associated features; hence, non-decomposition methods are not suitable for them (Sun et al., 2019). The main challenge of large-scale optimisation is triggered either by the complex behaviour of the objective function as the number of decision variables grows, or by unacceptable computational time requirements, for example when the objective is evaluated by a simulation model (Chung and Paredes, 2015; Rasoolimanesh et al., 2018). Further, the search space of a large-scale optimisation problem grows exponentially with the number of decision variables (for instance, a problem with 100 binary decision variables already has 2^100 candidate solutions), so solutions tend to become trapped in local optima and fitness evaluation becomes computationally expensive (Omidvar et al., 2010).

Standard EAs and traditional meta-heuristic algorithms cannot generate relevant results in a reasonable time over such vast search spaces. Distributed evolutionary algorithms (DEAs), together with a co-evolutionary meta-heuristic such as the cooperative co-evolutionary algorithm (CCEA), provide opportunities to solve these problems in less computational time. A DEA addresses a high-dimensional problem with a divide-and-conquer approach: it can decompose a large-scale optimisation problem into smaller sub-problems through distributed co-evolution. The distributed environment also allows a DEA to maintain population diversity, preventing premature convergence to local optima, and supports multi-objective search (Gong et al., 2015). The CCEA decomposes a large-scale problem into smaller sub-problems, evolves each sub-problem separately, and collaborates individuals from different sub-populations to build a complete solution to the problem. Accordingly, the fitness of an individual is evaluated on a subjective fitness landscape: the individual being evaluated is combined with individuals from the remaining sub-populations, and the performance of this collaboration is assigned to the individual as its fitness (Ebrahimpour et al., 2018). When a problem is non-separable, in particular, the decomposition technique strongly affects the performance of the overall process. To deal with non-separable problems in a distributed environment, co-evolution and multi-agent DEA models are the most suitable (Han and Trimi, 2018; Shou et al., 2019). These models can, however, be interfaced with other DEA models, such as master-slave (Dubreuil et al., 2006), island (Pierreval and Paris, 2000), cellular (Alba and Dorronsoro, 2005), hierarchical (Folino et al., 2008), or pool (Roy et al., 2009) models. Among DEA models, island models have been shown to improve the global search ability of traditional EAs: many sub-populations are deployed on isolated islands so that more than one best individual is maintained, individuals on different islands can evolve in different ways, and islands interact only when individuals migrate from one island to another at fixed intervals (Gong et al., 2015).
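The collaboration mechanism described above can be illustrated with a minimal sketch; this is an assumption for illustration, not code from the article or the cited works. The decision vector is split into equal-sized sub-problems, each sub-population evolves by simple Gaussian mutation, and an individual is scored by assembling it with the current best representatives of the other sub-populations.

# Minimal cooperative co-evolution sketch (illustrative assumption): a
# D-dimensional problem is split into equal-sized sub-problems, each evolved by
# its own sub-population; an individual's fitness is the quality of the complete
# solution formed with the best collaborators from the other sub-populations.
import random

random.seed(1)
DIMENSION, GROUPS = 20, 4          # assumed problem size and decomposition
SUB_DIM = DIMENSION // GROUPS
POP_SIZE, GENERATIONS = 30, 50


def sphere(x):
    """Assumed separable benchmark objective (minimisation)."""
    return sum(v * v for v in x)


def random_individual():
    return [random.uniform(-5, 5) for _ in range(SUB_DIM)]


def assemble(parts):
    """Concatenate one part per sub-problem into a complete solution."""
    return [v for part in parts for v in part]


def collaborate(individual, group, representatives):
    """Subjective fitness: plug the individual into the team of representatives."""
    parts = list(representatives)
    parts[group] = individual
    return sphere(assemble(parts))


# one sub-population per sub-problem, plus its current best representative
subpops = [[random_individual() for _ in range(POP_SIZE)] for _ in range(GROUPS)]
reps = [subpop[0] for subpop in subpops]

for _ in range(GENERATIONS):
    for g in range(GROUPS):
        # evolve sub-population g: mutate each individual, keep the better variant
        for i, ind in enumerate(subpops[g]):
            child = [v + random.gauss(0, 0.3) for v in ind]
            if collaborate(child, g, reps) < collaborate(ind, g, reps):
                subpops[g][i] = child
        # update the representative for sub-problem g
        reps[g] = min(subpops[g], key=lambda ind: collaborate(ind, g, reps))

print(f"fitness of assembled solution: {sphere(assemble(reps)):.4f}")

In practice, the fixed equal-sized grouping used here would be replaced by a decomposition strategy that respects variable interactions, which is precisely where non-separable problems become difficult.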
