Scalable Differential Evolutionary Clustering Algorithm for Big Data Using Map-Reduce Paradigm

Zakaria Benmounah, Souham Meshoul, Mohamed Batouche
Copyright © 2017 | Pages: 16
DOI: 10.4018/IJAMC.2017010103

Abstract

One of the remarkable results of the rapid advances in information technology is the production of tremendous amounts of data, so large or complex that available processing methods, among them cluster analysis, are no longer adequate; clustering has become more challenging and complex. In this paper, the authors describe a highly scalable Differential Evolution (DE) algorithm based on the map-reduce programming model. The traditional use of DE for clustering large data sets is so time-consuming that it is not feasible. Map-reduce, on the other hand, is a programming model that has emerged recently to enable the design of parallel and distributed approaches. In this paper, a four-stage map-reduce differential evolution algorithm, termed DE-MRC, is presented; each of the four stages is a map-reduce process dedicated to a particular DE operation. DE-MRC has been tested on a real parallel platform of 128 interconnected computers with more than 30 GB of data. Experimental results show the high scalability and robustness of DE-MRC.

Introduction

The recent advent of human sequencing technologies has enabled the retrieval of different kinds of biological data with high precision and accuracy, down to the single base pair (Kundaje et al., 2015). This enormous amount of data has transformed sciences such as biology and medicine from experimental into quantitative sciences, bringing the hope of a new era of computing that will help unravel the hidden relationships within these data. However, this opportunity is surrounded by many obstacles, e.g., how to manage the complexity of the data, how to combine data from very different sources, and what kinds of policies or criteria should be used when handling big data. These obstacles are challenging the boundaries of computer science and high-performance computing.

Big data therefore require tailored analysis methods, specially designed to accommodate their properties such as heterogeneity and size. The size of the data is an important issue for analytics methodologies such as clustering (Costa, 2014). Cluster analysis is a fundamental data analysis approach used for knowledge discovery (Costa, 2014); it aims to organize data points into clusters whose members exhibit high similarity to each other. An up-to-date review of the practical applications of data clustering can be found in (Nanda & Panda, 2014); most of these algorithms are becoming inappropriate because of their conventional assumptions about data. Indeed, the datasets that currently need to be processed are bigger, more heterogeneous, and more complex than those of past decades. Recently, these challenging issues, collectively known as big data, have motivated the design of new methods and algorithms (Kristensen et al., 2014); these methods are mainly based on parallel and distributed computing models and make use of dedicated platforms. Undeniably, a parallel and distributed computing platform is a requirement for big data analysis (Sarkar et al., 2010); it provides significant storage capacity and massive computational capability, thanks to hundreds and sometimes thousands of interconnected computers.
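This preview does not state the fitness function DE-MRC optimizes, but "high internal similarity" is commonly formalized, for centroid-based clustering, as the within-cluster sum of squares (WCSS); the LaTeX form below is given only as an illustrative example of such an objective:

    % WCSS: an illustrative clustering objective; the paper's exact fitness may differ.
    J(C) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2,
    \qquad
    \mu_k = \frac{1}{|C_k|} \sum_{x_i \in C_k} x_i

Minimizing J(C) pulls every point toward the centroid \mu_k of its cluster C_k; the DE sketch later in this section uses this objective as its fitness.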

However, designing a parallel algorithm for big data can be a tedious task if topology, communication flows, and data splitting must all be handled explicitly. Recent developments in this area have produced platforms and frameworks, such as Hadoop, specifically designed to alleviate the burden of these tasks.

The Hadoop system, an open-source implementation of the map-reduce model introduced by Google, is perceived as a breakthrough for big data analysis (Doulkeridis & Norvag, 2014). It comprises map-reduce (MR) as the computational or programming model and the Hadoop Distributed File System (HDFS) as the storage system. MR breaks the data into small pieces, or chunks, and distributes them across numerous computational nodes; it is a programming model that enables the design of parallel and distributed algorithms and requires defining appropriate Map and Reduce functions. HDFS handles storage issues, including keeping an index of which part of the data resides on which node and replicating data chunks. Hadoop has built-in fault tolerance and a linear scalability cost (Doulkeridis & Norvag, 2014).
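To make the Map and Reduce contract concrete, the following is a minimal sketch in Python of a clustering-flavored map-reduce job: the mapper assigns each point to its nearest centroid and the reducer averages each group into an updated centroid. The function names, the centroids, and the sequential run() driver are illustrative assumptions for this sketch, not the paper's DE-MRC code; in a real Hadoop job the framework performs the shuffle between mappers and reducers across nodes.

    import math
    from collections import defaultdict

    # Illustrative centroids, broadcast to every mapper; not from the paper.
    CENTROIDS = [(0.0, 0.0), (10.0, 10.0)]

    def mapper(point):
        """Map: emit (nearest-centroid-id, point); runs independently per chunk."""
        k = min(range(len(CENTROIDS)), key=lambda i: math.dist(point, CENTROIDS[i]))
        yield k, point

    def reducer(k, points):
        """Reduce: average all points assigned to centroid k into a new centroid."""
        n = len(points)
        yield k, tuple(sum(coord) / n for coord in zip(*points))

    def run(data):
        """Sequential stand-in for the framework's distributed shuffle phase."""
        groups = defaultdict(list)
        for point in data:
            for k, p in mapper(point):
                groups[k].append(p)
        return dict(kv for k, ps in groups.items() for kv in reducer(k, ps))

    print(run([(1, 1), (2, 0), (9, 11), (12, 9)]))
    # -> {0: (1.5, 0.5), 1: (10.5, 10.0)}

Because the mapper sees one chunk at a time and the reducer sees only the values shuffled to its key, neither function ever needs the whole dataset on one node, which is what makes the model scale.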

Within this context, we propose revisiting evolutionary data clustering using the map-reduce model to handle large data sets. Data clustering can easily be cast as a global optimization problem (Karaboga & Ozturk, 2011), which makes the application of evolutionary algorithms (EAs) natural and appropriate: the number of ways to group data points into clusters grows exponentially with the data size. Furthermore, EAs such as Differential Evolution have been shown to be effective alternatives for solving the data clustering problem when the data sets are of moderate size (Suresh et al., 2008; Abbass & Sarker, 2001). Differential Evolution is among the best evolutionary algorithms, exhibiting good exploration and exploitation abilities. However, its mutation, recombination, and selection strategies make it a time-consuming process. Moreover, classical DE requires all data points to be present on a single computer, which is not feasible for large-scale data and makes it unsuitable for large data processing.
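For concreteness, below is a minimal serial sketch of classical DE applied to clustering, assuming a flat-vector encoding of k centroids and the WCSS objective above as fitness; the parameter values (F, CR, population size) and the encoding are common defaults chosen for illustration, not the paper's exact DE-MRC design.

    import random, math

    def wcss(centroids, data):
        """Fitness: sum of squared distances from each point to its nearest centroid."""
        return sum(min(math.dist(x, c) ** 2 for c in centroids) for x in data)

    def de_cluster(data, k, pop_size=20, F=0.8, CR=0.9, generations=100):
        dim = len(data[0])
        # Each individual is a flat vector encoding k centroids of `dim` coordinates;
        # the (0, 15) initialization range is an assumption sized to the demo data.
        pop = [[random.uniform(0, 15) for _ in range(k * dim)] for _ in range(pop_size)]
        decode = lambda v: [tuple(v[i:i + dim]) for i in range(0, len(v), dim)]
        fit = [wcss(decode(v), data) for v in pop]
        for _ in range(generations):
            for i in range(pop_size):
                # Mutation: combine three distinct individuals other than the target.
                a, b, c = random.sample([v for j, v in enumerate(pop) if j != i], 3)
                j_rand = random.randrange(k * dim)  # guarantee one mutated gene
                trial = [a[j] + F * (b[j] - c[j])
                         if random.random() < CR or j == j_rand
                         else pop[i][j]  # binomial crossover keeps the rest
                         for j in range(k * dim)]
                f = wcss(decode(trial), data)
                if f < fit[i]:  # greedy selection: trial replaces target if better
                    pop[i], fit[i] = trial, f
        best = min(range(pop_size), key=fit.__getitem__)
        return decode(pop[best])

    print(de_cluster([(1, 1), (2, 0), (9, 11), (12, 9)], k=2))

Note that every call to wcss() scans the whole dataset and the entire population is re-evaluated each generation; this single-machine, data-in-memory requirement is exactly the bottleneck that motivates distributing each DE operation as a dedicated map-reduce phase.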
