Hybrid Partitioning-Density Algorithm for K-Means Clustering of Distributed Data Utilizing OPTICS

Mikołaj Markiewicz, Jakub Koperwas
Copyright © 2019 | Pages: 20
DOI: 10.4018/IJDWM.2019100101

Abstract

The authors present the first clustering algorithm for distributed data that is fast, reliable, and makes no assumptions about the data distribution. The algorithm constructs a global clustering model from small local models derived from local clustering statistics. This approach outperforms classical non-distributed approaches because it does not require downloading all of the data to a central processing unit. The solution is a hybrid algorithm that combines the best aspects of the partitioning and density-based approaches, and it handles uneven data dispersion without the transfer overhead of additional data. Experiments on large datasets showed that the proposed solution introduces no loss of quality compared to non-distributed approaches and can achieve even better results, approaching the reference clustering. This is an excellent outcome, considering that the algorithm can only build a model from fragmented data and that the communication cost between nodes is negligible.

Introduction

We live in an era in which the amount of data generated globally in two days can reach 1.8 ZB (where 1 ZB is 10^21 bytes) (Chen et al., 2014). The analysis of large, multi-aspect datasets enables data-driven decision making, thus improving business models for companies, yielding important insights about our society, helping in the invention of new drugs, etc. The data to be analyzed is not only generated in large volumes but also originates from heterogeneous sources, characterized by volume, velocity, and variety (Zikopoulos & Eaton, 2011), and very often from distributed data sources (Takaishi et al., 2014). Analysis of such data may encounter several obstacles, such as hardware, software, and network bottlenecks (Chen et al., 2014). Hardware limitations are now often addressed using MapReduce frameworks; however, in the near future, collecting all of the data to be analyzed in one place may become impossible due to privacy and network-capacity issues. For example, a single building in a smart city can generate as much as 275 GB per day, meaning that even a small city can generate millions of GB per day (Cisco, 2018). Hence, data analysis methods are needed that work directly on distributed data, without requiring the data to be transferred. Distributed processing is a natural way to improve the performance and overall quality of processing for large volumes of data and is superior to local processing. Nevertheless, for distributed processing to be effective and practical, raw data cannot be exchanged; nodes can only share small models, since transferring raw data is impractical due to transfer overhead and violates data privacy on the network (Jagannathan et al., 2006; Sheikhalishahi & Martinelli, 2017; Kantarcioglu & Clifton, 2004). In this article, we address the widely known data mining problem of clustering distributed data.

Clustering large amounts of data located in distributed data centers has become a challenging problem (Chen et al., 2014; Takaishi et al., 2014; Hashem et al., 2015). Every study of clustering in large datasets [e.g., (Berta et al., 2014; Datta et al., 2006)] has confirmed that the communication cost must not outweigh the potential profit of such processing. One possible approach to communication-neutral clustering of distributed data involves constructing a global clustering model from small local models derived from local clustering statistics (Januzaj et al., 2004). This approach outperforms classical non-distributed approaches since it avoids the need to download the entire dataset to a central processing unit. The task is not trivial, however, as it may cause a significant loss of clustering quality and is highly susceptible to uneven data dispersion. In addition to keeping the communication cost low, the quality of the results must not be significantly worse than that of locally performed processing, as shown in (Berta et al., 2014; Datta et al., 2006).
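The core idea of building a global model from small local models can be illustrated with a minimal sketch. The snippet below is a generic illustration, not the authors' exact algorithm: it assumes each site transmits only (centroid, point count) pairs as its local model, and the coordinator runs a weighted k-means over those summaries, so no raw data crosses the network.

```python
# Sketch: merging per-site cluster summaries into a global model.
# Each site reports only (centroid, point_count) pairs -- a tiny
# "local model" -- so raw data never leaves the site.

def merge_local_models(summaries, k, iters=20):
    """Weighted k-means over local centroids.

    summaries: list of (centroid_tuple, point_count) gathered from all sites.
    Returns k global centroids.
    """
    # Seed global centroids with the k heaviest local centroids.
    seeds = sorted(summaries, key=lambda s: -s[1])[:k]
    centers = [list(c) for c, _ in seeds]
    for _ in range(iters):
        dim = len(centers[0])
        sums = [[0.0] * dim for _ in range(k)]
        weights = [0.0] * k
        for point, w in summaries:
            # Assign each local centroid to the nearest global center,
            # weighted by how many points it summarizes.
            j = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(point, centers[i])))
            for d in range(dim):
                sums[j][d] += w * point[d]
            weights[j] += w
        # Recompute centers as weighted means; keep empty centers in place.
        centers = [
            [s / weights[j] for s in sums[j]] if weights[j] > 0 else centers[j]
            for j in range(k)
        ]
    return centers
```

Note that the seeding and assignment steps here are deliberately naive; the quality-preserving behavior described in the article requires the more careful, density-informed merging that the authors propose.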

The very few approaches presented in the literature so far focus on performance and the quality of the results. Moreover, most focus on partitioning methods, as these are easier to apply to distributed data than density-based algorithms. However, distributed partitioning methods are easily stymied when the expected number of clusters differs from the number found on local nodes, and they do not perform well under uneven data dispersion. We therefore propose a distributed hybrid algorithm that uses the best aspects of both the partitioning and density-based approaches. This hybrid approach achieves significantly better clustering quality and is much easier to use, since it allows flexibility in specifying the final number of clusters over the entire distributed dataset, which can be adjusted by analyzing the density features of the local clustering results.
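The density-based side of such a hybrid can be sketched as follows. This is an illustrative simplification, not the authors' OPTICS-based procedure: local centroids are joined into global clusters whenever they lie within a distance threshold `eps` of each other, so the final number of clusters emerges from the density structure of the local results rather than being fixed in advance.

```python
# Sketch: density-style merging of local centroids (illustrative only;
# the article's algorithm derives its merge decisions from OPTICS
# reachability, not a single fixed threshold).

def density_merge(centroids, eps):
    """Group local centroids into connected components under distance eps.

    centroids: list of coordinate tuples received from local nodes.
    Returns a list of groups, each a list of centroid indices.
    """
    n = len(centroids)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Union every pair of centroids closer than eps.
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(centroids[i], centroids[j]))
            if d2 <= eps * eps:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Because the group count is an output rather than an input, a coordinator using such a step can reconcile situations where local nodes report different numbers of clusters, which is exactly where pure partitioning methods struggle.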
