A Parallel Fractional Lion Algorithm for Data Clustering Based on MapReduce Cluster Framework

Satish Chander, P. Vijaya, Praveen Dhyani
Copyright: © 2022 | Pages: 25
DOI: 10.4018/IJSWIS.297034

Abstract

This work introduces a parallel clustering algorithm by modifying the existing Fractional Lion Algorithm (FLA). The proposed work replaces the conventional Euclidean distance measure with the Bhattacharyya distance measure to form the improved FLA (IMR-FLA). The proposed IMR-FLA is implemented in both the mapper and the reducer of the MapReduce framework to achieve parallel clustering. The experimentation of the proposed IMR-FLA is carried out using six standard databases from the UCI repository, namely the Pima Indian diabetes, heart disease, hepatitis, localization, breast cancer, and skin segmentation datasets. The proposed IMR-FLA achieves improved Jaccard coefficient values of 0.9357, 0.6572, 0.7462, 0.5944, 0.9418, and 0.8680 on these datasets, respectively. Similarly, the proposed IMR-FLA outperforms the comparative algorithms, with clustering accuracy values of 0.9674, 0.9471, 0.9677, 0.777, 0.9023, and 0.9585, respectively, for the experimental databases.
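The central modification described in the abstract is the replacement of the Euclidean metric with the Bhattacharyya distance. The exact formulation used by the authors is not reproduced on this page, so the sketch below is only a minimal illustration assuming the common discrete-distribution form, D_B(p, q) = -ln(sum_i sqrt(p_i * q_i)); the function name and the normalization step are assumptions made here for readability, not the authors' code.

import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two non-negative feature vectors.

    Assumption for this sketch: the vectors are normalized so that they
    behave like discrete probability distributions before comparison.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))   # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc + eps)      # distance grows as the overlap shrinks

Identical distributions give a coefficient of 1 and hence a distance near 0, while weakly overlapping ones give a large distance, which is what makes the measure usable as a clustering similarity score.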

1. Introduction

Big data has become pervasive, as various communities share data through Internet sources. The huge flow of data on the Internet has given rise to various data mining techniques, among which data clustering and classification are the most prominent. Manual processing of large data from diverse sources is error-prone, and hence automation of the data processing scheme has become an emerging topic this decade. Big data contains data from various domains, and hence clustering the information with respect to its domain is necessary for retrieving it (Gowanlock et al., 2017). Various applications, such as image segmentation, data mining, biomedical analysis, and information retrieval, require data clustering (Rahnema et al., 2020). Besides, it is widely used in Internet of Things (IoT) device applications and related services (Gupta & Quamara, 2018), which require suitable encryption techniques for authenticating the shared information (Christian et al., 2021; Anupama et al., 2021; Yu et al., 2018). Analytic processing over large data domains, such as scientific and commercial applications, poses various challenges to parallel clustering schemes due to their computational complexity and storage demands (Amintoosi et al., 2020). Clustering of the data improves knowledge discovery from large volumes of data (Zhou & Yang, 2020). Clustering is one of the important data mining schemes, helping the user retrieve data from a large volume of data more effectively, for example by considering load characteristic curves. The clustering technique groups data belonging to the same cluster by calculating a distance measure, as illustrated in the sketch below (Kaur & Kumar, 2021). The cluster groups formed by a clustering algorithm can be either homogeneous or heterogeneous (Sreedhar et al., 2017).
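As a concrete illustration of the distance-based grouping mentioned above, the minimal sketch below assigns each record to its nearest cluster centre under a pluggable metric; the Euclidean default could be swapped for the Bhattacharyya measure sketched earlier. The function and variable names are illustrative assumptions, not taken from the paper.

import numpy as np

def assign_to_clusters(points, centers, metric=None):
    """Return, for each point, the index of its closest cluster centre.

    `metric(a, b)` defaults to the Euclidean distance; any other measure
    (e.g., a Bhattacharyya-style distance) can be passed in instead.
    """
    if metric is None:
        metric = lambda a, b: float(np.linalg.norm(a - b))
    labels = []
    for x in points:
        x = np.asarray(x, dtype=float)
        distances = [metric(x, np.asarray(c, dtype=float)) for c in centers]
        labels.append(int(np.argmin(distances)))   # index of the nearest centre
    return labels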

Building clustering algorithms for a parallel stream poses significant challenges, since parallel processing requires multiprocessor hardware systems with specialized chips (Tripathi et al., 2020). Clustering is also widely used in student recommendation systems, which motivates further studies (Bui et al., 2021; Hung, 2020; Hung & Chang, 2019). Parallelization allows these algorithms to utilize high-speed computer systems effectively. The literature classifies clustering schemes into hierarchical clustering and partitional clustering (Xua et al., 2020). Methods such as squared-error methods are categorized as partitional clustering algorithms, while techniques such as the complete-link and single-link methods are hierarchical. Normally, a partitional clustering algorithm takes the pattern matrix of the data as its training input. For data of large volume, it is not easy to generate the pattern matrix for each incoming record, and hence parallel processing of the pattern matrix improves the clustering process. In parallel clustering, the distance measure between the data points and the cluster centers is calculated, and thus the similarity across large volumes of data is obtained (Sharma & Seal, 2020). Properties such as continuous streaming and large data volumes can be handled by combining the clustering algorithm with the MapReduce framework. MapReduce was developed at Google (Dean & Ghemawat, 2008) to process large data continuously. Incorporating the MapReduce framework into the clustering algorithm strengthens the clustering process and makes the algorithm suitable for automatic parallelism and distribution; a simplified mapper/reducer sketch is given below. Besides, the MapReduce concept makes the clustering algorithm fault-tolerant.
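To make the mapper/reducer division of labour concrete, the toy sketch below simulates one MapReduce-style round of centre-based clustering in a single process: the map phase emits (cluster index, point) pairs using a pluggable distance, the shuffle step groups them by key, and the reduce phase recomputes each centre from its assigned points. This is only an assumed, simplified illustration of the framework's structure; the authors' IMR-FLA additionally applies the fractional lion update inside both phases, which is not reproduced here.

import numpy as np
from collections import defaultdict

def map_phase(chunk, centers, metric):
    # Map: emit (nearest_centre_index, point) for every record in this data chunk.
    for x in chunk:
        x = np.asarray(x, dtype=float)
        cid = int(np.argmin([metric(x, np.asarray(c, dtype=float)) for c in centers]))
        yield cid, x

def reduce_phase(cid, points):
    # Reduce: recompute the centre of cluster `cid` from the points assigned to it.
    return cid, np.mean(points, axis=0)

def mapreduce_round(chunks, centers, metric=lambda a, b: float(np.linalg.norm(a - b))):
    grouped = defaultdict(list)          # shuffle: group intermediate (key, value) pairs by key
    for chunk in chunks:
        for cid, x in map_phase(chunk, centers, metric):
            grouped[cid].append(x)
    updated = dict(reduce_phase(cid, pts) for cid, pts in grouped.items())
    # Keep the old centre for any cluster that received no points in this round.
    return [updated.get(i, np.asarray(c, dtype=float)) for i, c in enumerate(centers)]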
