Introduction
The recent advent of human genome sequencing technologies has enabled the retrieval of many kinds of biological data with high precision and accuracy at single base-pair resolution (Kundaje et al., 2015). This enormous amount of data has transformed sciences such as biology and medicine from experimental into quantitative disciplines, raising hopes of a new era of computing that will help unravel the hidden relationships within these data. However, this opportunity is surrounded by many obstacles, e.g., how to manage the complexity of the data, how to combine data from very different sources, and what policies or criteria should be applied when handling big data. These obstacles are pushing the boundaries of computer science and high-performance computing.
Big data therefore require tailored analysis methods, specially designed to accommodate properties such as heterogeneity and size. Data size is a critical issue for analytics methodologies such as clustering (Costa, 2014). Clustering analysis is a fundamental data analysis approach used for knowledge discovery (Costa, 2014); it aims to organize data points into clusters exhibiting high internal similarity. An up-to-date review of the practical applications of data clustering can be found in (Nanda & Panda, 2014); most existing algorithms are becoming inappropriate because of their conventional assumptions about the data. Indeed, the datasets that must now be processed are bigger, more heterogeneous, and more complex than those of past decades. Recently, these challenging issues, collectively known as big data, have motivated the design of new methods and algorithms (Kristensen et al., 2014); these methods rely mainly on parallel and distributed computing models and make use of dedicated platforms. Undeniably, a platform for parallel and distributed computing is a requirement for big data analysis (Sarkar et al., 2010): it provides significant storage capacity and massive computational capability by connecting hundreds and sometimes thousands of computers.
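The notion of high internal similarity can be made concrete as an objective function. The following sketch scores a candidate partition by its within-cluster sum of squared errors, the quantity that centroid-based clustering methods minimize; the helper name `within_cluster_sse` and the two-dimensional toy data are illustrative assumptions, not part of any cited method.

```python
def within_cluster_sse(points, labels, centroids):
    """Within-cluster sum of squared errors: a lower value means
    points lie closer to their assigned cluster's centroid, i.e.,
    the clusters exhibit higher internal similarity."""
    sse = 0.0
    for (x, y), label in zip(points, labels):
        cx, cy = centroids[label]
        sse += (x - cx) ** 2 + (y - cy) ** 2
    return sse

points = [(0, 0), (0, 1), (10, 10), (10, 11)]
# A tight grouping versus a deliberately scrambled one.
good = within_cluster_sse(points, [0, 0, 1, 1], [(0, 0.5), (10, 10.5)])
bad = within_cluster_sse(points, [0, 1, 0, 1], [(5, 5), (5, 6)])
print(good < bad)  # True: the natural grouping scores far better
```

Searching over all possible label assignments for the one minimizing this score is what turns clustering into the optimization problem discussed below.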
However, designing a parallel algorithm for big data can be a tedious task if the topology, communication flows, and data splitting must be managed explicitly. Recent developments in this area have produced platforms and frameworks, such as Hadoop, specifically designed to alleviate the burden of these tasks.
The Hadoop system, an open-source implementation of Google's MapReduce paradigm, is perceived as a breakthrough for big data analysis (Doulkeridis & Norvag, 2014). It comprises MapReduce (MR) as the computational, or programming, model and the Hadoop Distributed File System (HDFS) as the data storage layer. MR breaks the data into small pieces, or chunks, and distributes them across numerous computational nodes. It is a programming model that allows the design of parallel and distributed algorithms and requires only the definition of appropriate Map and Reduce functions. HDFS handles the storage issues, including keeping an index of which part of the data resides on which node and replicating data chunks. Hadoop provides built-in fault tolerance and linear cost scalability (Doulkeridis & Norvag, 2014).
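To make the division of labor between the Map and Reduce functions concrete, the sketch below emulates the model in plain Python on the canonical word-count task. This is a single-process illustration of the programming model only, not the Hadoop API; the names `map_phase`, `reduce_phase`, and `run_mapreduce` are our own.

```python
from collections import defaultdict

def map_phase(chunk):
    """Map: emit (key, value) pairs from one data chunk.
    Here, each word in the chunk is emitted with a count of 1."""
    for word in chunk.split():
        yield word, 1

def reduce_phase(key, values):
    """Reduce: aggregate all values that share the same key."""
    return key, sum(values)

def run_mapreduce(chunks):
    # Shuffle step: group intermediate pairs by key, as the
    # framework would do between the Map and Reduce phases.
    grouped = defaultdict(list)
    for chunk in chunks:
        for key, value in map_phase(chunk):
            grouped[key].append(value)
    return dict(reduce_phase(k, vs) for k, vs in grouped.items())

chunks = ["big data big clusters", "data clusters data"]
print(run_mapreduce(chunks))  # {'big': 2, 'data': 3, 'clusters': 2}
```

In a real Hadoop deployment the chunks would be HDFS blocks spread over many nodes, and the shuffle and grouping would happen over the network, but the contract the programmer fulfills is exactly these two functions.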
Within this context, we propose revisiting evolutionary data clustering using the map-reduce model to handle large datasets. Data clustering can be naturally cast as a global optimization problem (Karaboga & Ozturk, 2011), which makes the application of evolutionary algorithms (EAs) evident and appropriate: the number of ways to group data points into clusters grows exponentially with the data size. Furthermore, EAs such as differential evolution (DE) have been shown to be effective alternatives for solving the data clustering problem when the datasets are of moderate size (Suresh et al., 2008; Abbass & Sarker, 2001). DE is among the best evolutionary algorithms, exhibiting good exploration and exploitation abilities. However, its mutation, recombination, and selection strategies make DE a time-consuming process. Moreover, classical DE requires all data points to be present on a single computer, which is not feasible for large-scale data and makes it ill-suited to large-scale data processing.
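For illustration, the sketch below applies the basic DE operators (difference-vector mutation, binomial crossover, and one-to-one selection) to centroid-based clustering, with each candidate encoding k centroids as a flat vector. The function `de_cluster` and its parameter defaults are our own assumptions for a small example, not the configuration used in this work; note that every fitness evaluation scans all data points, which is precisely the cost that becomes prohibitive at scale.

```python
import random

def de_cluster(points, k, pop_size=20, F=0.5, CR=0.9, generations=100, seed=1):
    """Minimal differential evolution for clustering (illustrative sketch).
    A candidate is a flat list of k*dim centroid coordinates; fitness is
    the sum of squared distances from each point to its nearest centroid."""
    rng = random.Random(seed)
    dim = len(points[0])

    def fitness(candidate):
        centroids = [candidate[i * dim:(i + 1) * dim] for i in range(k)]
        return sum(min(sum((a - b) ** 2 for a, b in zip(p, c))
                       for c in centroids)
                   for p in points)

    # Initialize candidates uniformly inside the data's bounding box.
    lo = [min(p[d] for p in points) for d in range(dim)]
    hi = [max(p[d] for p in points) for d in range(dim)]
    pop = [[rng.uniform(lo[j % dim], hi[j % dim]) for j in range(k * dim)]
           for _ in range(pop_size)]

    for _ in range(generations):
        for i, target in enumerate(pop):
            a, b, c = rng.sample([x for x in pop if x is not target], 3)
            # Mutation v = a + F*(b - c), mixed gene-wise into the target
            # via binomial crossover with rate CR.
            trial = [aj + F * (bj - cj) if rng.random() < CR else tj
                     for aj, bj, cj, tj in zip(a, b, c, target)]
            # Selection: the trial replaces the target only if it is fitter.
            if fitness(trial) < fitness(target):
                pop[i] = trial
    return min(pop, key=fitness)

# Two well-separated 2-D groups; DE should place one centroid near each.
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
best = de_cluster(points, k=2)
```

The two fitness calls per candidate per generation, each touching the full dataset, show why mutation, recombination, and selection make sequential DE time-consuming, and why the fitness evaluation is the natural step to distribute with map-reduce.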