An Improvement of K-Medoids Clustering Algorithm Based on Fixed Point Iteration

Xiaodi Huang, Minglun Ren, Zhongfeng Hu
Copyright: © 2020 |Pages: 11
DOI: 10.4018/IJDWM.2020100105

Abstract

The K-medoids algorithm first selects data objects at random as initial centers to form initial clusters. Then, following the PAM (partitioning around medoids) strategy, each center is sequentially replaced by every remaining data object in search of the replacement that yields the best clustering result. Because PAM is an exhaustive iterative strategy, its computational overhead becomes prohibitive when the data size or the number of clusters is large. The authors use fixed-point iteration to search for the optimal clustering centers and build an FPK-medoids (fixed-point-based K-medoids) algorithm. By constructing a fixed-point equation for each cluster, the search for optimal centers is converted into solving a system of equations in parallel. Experiments on six standard datasets show that the clustering efficiency of the proposed algorithm is significantly improved compared with the conventional algorithm, and that clustering quality is markedly enhanced on large-scale datasets or problems with many clusters.
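The abstract does not reproduce the authors' fixed-point equations, but the underlying idea can be sketched as an alternating update whose stopping condition is a fixed point: assign points to their nearest medoid, then (independently, hence parallelizably, per cluster) move each medoid to the cluster member minimizing total intra-cluster distance, and stop when the update leaves the medoids unchanged. The function name and all details below are our own illustration, not the paper's method:

```python
import numpy as np

def fpk_medoids_sketch(X, k, max_iter=100, seed=0):
    """Illustrative fixed-point-style K-medoids sketch (not the authors'
    exact equations): alternate (1) nearest-medoid assignment and
    (2) per-cluster medoid update, stopping when the medoids are a
    fixed point of the update. Step (2) treats each cluster
    independently, so the k updates could run in parallel."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(X), size=k, replace=False)
    # Full pairwise Euclidean distance matrix (fine for small examples).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    labels = np.argmin(D[:, medoids], axis=1)
    for _ in range(max_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # (1) assignment
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            # (2) member with the smallest summed distance to its cluster
            new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            return medoids, labels  # fixed point: update changes nothing
        medoids = new
    return medoids, labels
```

On two well-separated blobs, the update reaches its fixed point in a couple of passes and recovers the blob partition regardless of the random initialization.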

1. Introduction

As an important research method in data mining, clustering centers on unsupervised learning that applies appropriate measuring rules based on data characteristics. Hidden clusters representing implicit patterns or connotative information can be identified by partitioning data into subsets (Rostami et al., 2017). A whole host of clustering algorithms have been proposed, and most of them are interconnected and overlapping (Mazzeo, Masciari, & Zaniolo, 2017). Based on the form of the clustering results, these algorithms can be divided into two categories: hierarchical methods (Kamis, Chiclana, & Levesley, 2018) and partitioning methods (Berkhin, 2006). Hierarchical algorithms form clusters by decomposing a dataset into multiple layers; data objects gather into various clusters at different layers, so the same data object can belong to different clusters (Zhang, Zhang, Zhang, & Wang, 2018). Partitioning algorithms divide data objects into multiple mutually exclusive clusters, where each data object can belong to only a single cluster (Celebi, 2008). In this way, the characteristics of each cluster are clearly presented by different metrics (Yue, Wang, Wang, & Bao, 2016).

The K-medoids algorithm, a well-known partitioning clustering algorithm, is popular and widely used for its simplicity and robustness. Its main idea can be described as follows (Choi & Chung, 2017): first, k actual data objects are randomly selected from the dataset as initial cluster centers. Then, the remaining data objects are divided into k clusters according to the principle of minimizing the sum of dissimilarities within each cluster. Finally, to achieve the clustering standard that intra-cluster objects are highly similar while inter-cluster objects are dissimilar to the greatest extent, the PAM strategy performs relocation and iterative optimization under the given clustering rules, such as a distance dissimilarity metric.
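The PAM relocation step described above can be sketched as a single pass that scores every (medoid, non-medoid) exchange and keeps the best one; a minimal sketch, with names of our own choosing, operating on a precomputed pairwise distance matrix `D`:

```python
import numpy as np

def pam_swap_pass(D, medoids):
    """One PAM relocation pass: try every (medoid, non-medoid) swap and
    keep the single swap that most reduces the total distance of all
    points to their nearest medoid. D is an (n, n) pairwise distance
    matrix; medoids is an integer array of current medoid indices."""
    def total_cost(meds):
        # each point contributes its distance to the nearest medoid
        return D[:, meds].min(axis=1).sum()

    n = D.shape[0]
    best_cost, best_meds = total_cost(medoids), medoids
    for i in range(len(medoids)):          # each current medoid...
        for h in range(n):                 # ...against each candidate
            if h in medoids:
                continue
            trial = medoids.copy()
            trial[i] = h
            cost = total_cost(trial)
            if cost < best_cost:
                best_cost, best_meds = cost, trial
    return best_meds, best_cost
```

In full PAM this pass is repeated until no swap improves the cost; the nested loops are exactly the exhaustive traversal that makes the method expensive.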

Compared with other partitioning clustering algorithms that use mean values as cluster centroids, such as the K-means algorithm, taking an actual data object as the cluster center effectively reduces the clustering sensitivity to extreme, noisy, or missing values (Yang, Ma, Zhang, Li, & Zhang, 2017). However, the PAM strategy is inevitably burdened by higher algorithmic complexity because of its exhaustive center-replacement approach. In particular, the applicability and flexibility of the algorithm degrade significantly when it processes large-scale, multi-cluster datasets (Marques, Neto, & Ferreira, 2016). Thus, current research on the K-medoids algorithm focuses on reducing the algorithmic complexity while maintaining its strong robustness.
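To see why the traversal is expensive: one relocation pass evaluates every (medoid, non-medoid) pair, and scoring a candidate swap touches on the order of the remaining points, giving roughly k(n-k) swaps and O(k(n-k)^2) distance evaluations per pass. A back-of-envelope helper (our own, purely illustrative):

```python
def pam_pass_work(n, k):
    """Rough operation counts for one PAM relocation pass:
    candidate swaps = k * (n - k); scoring each swap touches
    about (n - k) non-medoid points, so distance evaluations
    scale as k * (n - k)**2."""
    swaps = k * (n - k)
    return swaps, swaps * (n - k)
```

For n = 1000 points and k = 10 clusters this is already nearly ten million distance evaluations per pass, which is why the exhaustive strategy scales poorly.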


Over recent years, studies on improving the K-medoids clustering algorithm can be roughly divided into the following types.

2.1. Optimizing Initial Parameter Selection

Studies of this type aim to overcome the randomness of the relocation process by selecting better initial parameters based on distance (Yu, Liu, Guo, & Liu, 2017), density (Pardeshi & Toshniwal, 2010; Rodriguez & Laio, 2014), and fuzzy metrics (Kotinis, 2014), as well as swarm intelligence algorithms (Khatami, Mirghasemi, Khosravi, Lim, & Nahavandi, 2017; X. Y. Lai, Gong, & Han, 2017). When dealing with small-scale or sparsely clustered datasets, high-quality initial parameters can effectively reduce the number of iterations and significantly enhance efficiency. However, as the data scale or the number of expected clusters increases, the benefit of this improvement declines sharply.
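As one concrete flavor of distance-based seeding (an illustrative scheme of our own; the cited works differ in their details), initial medoids can be spread across the dataset by greedily picking each new center as the point farthest from all centers chosen so far:

```python
import numpy as np

def farthest_point_init(D, k, first=0):
    """Greedy farthest-point seeding on a pairwise distance matrix D:
    start from one point, then repeatedly add the point whose distance
    to its nearest already-chosen medoid is largest, spreading the
    initial centers over the data."""
    medoids = [first]
    for _ in range(k - 1):
        # distance from each point to its nearest chosen medoid
        d_near = D[:, medoids].min(axis=1)
        medoids.append(int(np.argmax(d_near)))
    return medoids
```

Seeding of this kind tends to place one initial center per well-separated group, which is exactly the property that reduces the relocation iterations on small or sparsely clustered data.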
