MapReduce-Based Crow Search-Adopted Partitional Clustering Algorithms for Handling Large-Scale Data

Karthikeyani Visalakshi N., Shanthi S., Lakshmi K.
DOI: 10.4018/IJCINI.20211001.oa32

Abstract

Cluster analysis is a prominent data mining technique for knowledge discovery, as it uncovers hidden patterns in data. K-Means, K-Modes and K-Prototypes are partition-based clustering algorithms that select their initial centroids randomly. Because of this random selection, these algorithms often converge to locally optimal solutions. To address this issue, the strategy of the Crow Search algorithm is combined with these algorithms to obtain globally optimal solutions. With advances in information technology, the size of data has grown drastically from terabytes to petabytes. To make the proposed algorithms suitable for such voluminous data, they are implemented in parallel using the Hadoop MapReduce framework. The proposed algorithms are evaluated on large-scale data, and the results are compared in terms of cluster evaluation measures and computation time across different numbers of nodes.

1. Introduction

Clustering is an unsupervised classification technique that extracts useful knowledge from data without knowledge of class labels. The main objective of clustering is that data objects within a group are similar to one another and dissimilar to data objects in other clusters. Clustering is applied in many domains, such as image processing (Lei, Wang, Peng, & Yang, 2011), bioinformatics (Bhattacharya & De, 2010), document clustering (Jun, Park, & Jang, 2014), information retrieval (Chan, 2008) and healthcare (Güneş, Polat, & Yosunkaya, 2010).

Clustering algorithms are broadly divided into two categories: partitional and hierarchical. Partitional clustering algorithms group the data objects into a predefined number of clusters, whereas hierarchical clustering algorithms group the data objects on the basis of a tree-like structure using either a bottom-up or a top-down approach. K-Means, K-Modes and K-Prototypes are partition-based clustering algorithms that handle numeric, categorical, and mixed numeric-and-categorical data objects, respectively. K-Means is one of the most widely used partitional clustering algorithms for numerical data. It has been extended to handle categorical data and mixed numeric-and-categorical data; these extensions are called K-Modes and K-Prototypes (Huang, 1997, 1998). While these algorithms are fast and simple, they have two drawbacks. First, their performance depends heavily on the selection of initial centroids; second, their objective functions can become trapped in local minima. To overcome these issues, various optimization algorithms have been proposed, and some of them are surveyed in the literature, including the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Ant Colony Optimization (ACO), the Firefly Algorithm (FA) and Cuckoo Search (CS).
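The two drawbacks above are easiest to see in the standard K-Means loop itself, where the initial centroids are drawn at random and every later step only refines that draw. The following sketch (an illustrative implementation, not the paper's code; the `seed` parameter is an assumption added for reproducibility) makes the random initialization explicit:

```python
import random

def k_means(points, k, iters=20, seed=0):
    """Plain K-Means on numeric points given as tuples of floats.

    The random draw of initial centroids below is exactly the step that
    can trap the algorithm in a local optimum of its objective function.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random initial centroids
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids, clusters
```

Running the loop with a different seed can yield a different final partition on less well-separated data, which is the sensitivity the optimization algorithms listed above try to remove.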

The Crow Search Algorithm (Askarzadeh, 2016) is a population-based meta-heuristic optimization algorithm that simulates the intelligent behaviour of crows. Crows hide their leftover food in secret places and retrieve it whenever needed. The algorithm is based on crows searching for the hidden food caches of other crows. Finding another crow's hidden food source is not an easy task, because if a crow notices it is being followed, it fools its pursuer by moving to a random position.
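The behaviour described above maps onto a simple position-update rule: each crow shadows a randomly chosen crow, and with an "awareness probability" the shadowed crow notices and the pursuer is sent to a random position instead. A one-dimensional sketch of one iteration, with parameter names (`fl` for flight length, `ap` for awareness probability) following Askarzadeh's paper but values and the `fitness`/`bounds` interface chosen here for illustration:

```python
import random

def crow_search_step(positions, memories, fitness, fl=2.0, ap=0.1,
                     bounds=(-10.0, 10.0), rng=random):
    """One iteration of the Crow Search update rule (1-D sketch).

    positions: current location of each crow in the search space.
    memories:  each crow's best (hidden-food) position found so far.
    fitness:   objective function to minimise.
    """
    lo, hi = bounds
    n = len(positions)
    for i in range(n):
        j = rng.randrange(n)                      # crow i shadows a random crow j
        if rng.random() >= ap:
            # Crow j is unaware: i moves toward j's remembered food cache.
            new = positions[i] + rng.random() * fl * (memories[j] - positions[i])
        else:
            # Crow j noticed the pursuit: i is fooled into a random position.
            new = rng.uniform(lo, hi)
        if lo <= new <= hi:                       # accept only feasible moves
            positions[i] = new
            if fitness(new) < fitness(memories[i]):
                memories[i] = new                 # update the hidden cache
    return positions, memories
```

Iterating this step moves the population toward good regions of the search space while the random "fooling" moves preserve exploration, which is what lets the hybrid clustering algorithms escape the local optima of plain K-Means.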

With recent advances in technology, the size of data increases every day, and finding useful information in it is a tedious task. To deal with this, recent technologies are incorporated into traditional algorithms to improve performance. Apache Hadoop is an open-source framework for processing large-scale data. It processes data objects in parallel using MapReduce.

MapReduce (Dean & Ghemawat, 2008) is a programming model for processing large-scale data. Hadoop stores the data in the Hadoop Distributed File System (HDFS), which is designed to handle very large data files on clusters of commodity hardware. MapReduce programs are inherently parallel, thus putting very large-scale data analysis into the hands of anyone with enough machines at their disposal. MapReduce works by breaking the processing into two phases: the map phase and the reduce phase. Each phase has key-value pairs as input and output, the types of which may be chosen by the programmer. The programmer needs to specify two functions: the map function and the reduce function.
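For a clustering workload, one natural split of K-Means into the two phases is: the map function emits (nearest-centroid index, point) pairs, and the reduce function averages all points sharing a key into that cluster's new centroid. The sketch below is an illustrative local simulation of one such round, not the paper's Hadoop code; the function names and the in-memory "shuffle" driver are assumptions for demonstration:

```python
from collections import defaultdict

def map_assign(point, centroids):
    """Map phase: emit a (nearest-centroid index, point) key-value pair."""
    d = [sum((a - b) ** 2 for a, b in zip(point, c)) for c in centroids]
    yield d.index(min(d)), point

def reduce_update(index, points):
    """Reduce phase: average all points that share a key (cluster index)
    into that cluster's new centroid."""
    n = len(points)
    yield index, tuple(sum(x) / n for x in zip(*points))

def one_mapreduce_round(points, centroids):
    """Driver mimicking the shuffle between the map and reduce phases."""
    grouped = defaultdict(list)
    for p in points:
        for key, value in map_assign(p, centroids):
            grouped[key].append(value)
    new = dict(kv for k, vs in grouped.items() for kv in reduce_update(k, vs))
    return [new.get(i, c) for i, c in enumerate(centroids)]
```

On a real cluster, Hadoop distributes the map calls across data blocks in HDFS and performs the shuffle over the network, so the same two functions scale to data far larger than one machine's memory.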

The research gap in the existing works is that either K-Means, K-Modes and K-Prototypes or the optimization algorithms are implemented in a Hadoop MapReduce framework or in Spark for handling very large-scale data, but not both together. Similarly, global optimization algorithms try to escape local optima in solutions, but they suffer from low-quality results, low convergence speed, complicated operators, complex structure, and parameter-setting issues.
