1. Introduction
As an important research method in data mining, clustering centers on unsupervised learning that applies appropriate measuring rules based on data characteristics. Hidden clusters that represent implicit patterns or connotative information can be identified by partitioning data into subsets (Rostami et al., 2017). A whole host of clustering algorithms have been proposed, and most of them are interconnected and overlapping (Mazzeo, Masciari, & Zaniolo, 2017). Based on the form of the clustering results, these algorithms can be divided into two categories: hierarchical methods (Kamis, Chiclana, & Levesley, 2018) and partitioning methods (Berkhin, 2006). Algorithms based on the hierarchical method aim to form clusters by decomposing a dataset into multiple layers. Data objects gather into various clusters at different layers, so the same data object can belong to different clusters (Zhang, Zhang, Zhang, & Wang, 2018). Algorithms based on the partitioning method focus on dividing data objects into multiple mutually exclusive clusters, where each data object can belong to only a single cluster (Celebi, 2008). In this way, the characteristics of each cluster are clearly presented by different metrics (Yue, Wang, Wang, & Bao, 2016).
The K-medoids algorithm, a well-known partitioning clustering algorithm, is popular and widely used due to its simplicity and robustness. Its main idea can be described as follows (Choi & Chung, 2017): first, k actual data objects are randomly selected from the dataset as initial cluster centers. Then, the remaining data objects are divided into k clusters according to the principle of minimizing the sum of dissimilarities within each cluster. Lastly, to achieve the clustering standard that intra-cluster objects are highly similar to each other while inter-cluster objects are dissimilar to the greatest extent, the PAM strategy is used for relocation and iterative optimization based on given clustering rules such as a distance dissimilarity metric.
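The three steps above can be sketched in Python as follows. This is a minimal illustrative implementation of the classical PAM swap loop with Euclidean distance as the dissimilarity metric, not the code of any cited work; the function name `pam` and its parameters are chosen here for exposition.

```python
import math
import random

def pam(points, k, max_iter=100, seed=0):
    """Minimal PAM-style k-medoids sketch: random initial medoids,
    then greedy (medoid, non-medoid) swap refinement."""
    rnd = random.Random(seed)
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Total within-cluster dissimilarity for a candidate medoid set.
    cost = lambda meds: sum(min(dist(i, m) for m in meds) for i in range(n))

    # Step 1: k actual data objects chosen at random as initial centers.
    medoids = rnd.sample(range(n), k)
    best_cost = cost(medoids)

    # Step 3: relocation by traversing every possible swap -- the step
    # that makes classical PAM expensive on large, multi-cluster data.
    improved = True
    while improved and max_iter > 0:
        improved, max_iter = False, max_iter - 1
        for i in range(k):
            for h in range(n):
                if h in medoids:
                    continue
                cand = medoids[:i] + [h] + medoids[i + 1:]
                c = cost(cand)
                if c < best_cost:
                    best_cost, medoids, improved = c, cand, True

    # Step 2: assign each object to its nearest medoid.
    labels = [min(range(k), key=lambda j: dist(i, medoids[j]))
              for i in range(n)]
    return medoids, labels
```

Note that the inner double loop evaluates O(k(n-k)) candidate swaps per pass, each at O(nk) cost, which is exactly the complexity burden discussed below.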
Compared with other partitioning clustering algorithms that use the mean value as the cluster centroid, such as the K-means algorithm, taking an actual data object as the cluster center can effectively reduce the clustering sensitivity to extreme, noisy, or missing values (Yang, Ma, Zhang, Li, & Zhang, 2017). However, the PAM algorithm is inevitably burdened by higher algorithmic complexity due to its exhaustive traversal of candidate center points. In particular, the applicability and flexibility of the algorithm degrade significantly when it is used to process large-scale and multi-cluster datasets (Marques, Neto, & Ferreira, 2016). Thus, current research based on the K-medoids algorithm focuses on how to reduce the algorithm's complexity whilst maintaining its strong robustness.
Over the last years, studies on improving the K-medoids clustering algorithm can be roughly divided into the following types.
2.1. Optimizing Initial Parameter Selection
Studies of this type aim to overcome the randomness of the relocation process by selecting better initial parameters based on distance (Yu, Liu, Guo, & Liu, 2017), density (Pardeshi & Toshniwal, 2010; Rodriguez & Laio, 2014), or fuzzy metrics (Kotinis, 2014), as well as swarm intelligence algorithms (Khatami, Mirghasemi, Khosravi, Lim, & Nahavandi, 2017; X. Y. Lai, Gong, & Han, 2017). When dealing with small-scale or sparsely clustered datasets, high-quality initial parameters can effectively reduce the number of iterations and significantly enhance algorithm efficiency. However, as the data scale or the number of expected clusters increases, the flexibility of this improvement declines sharply.
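One common distance-based instance of this idea can be sketched as a farthest-first heuristic: spread the initial medoids across the dataset so the subsequent relocation starts from a reasonable configuration. This is a generic illustration of the distance-based category, not the specific method of any cited paper; the function name and parameters are hypothetical.

```python
import math
import random

def farthest_first_medoids(points, k, seed=0):
    """Distance-based initial medoid selection (farthest-first heuristic):
    start from one random object, then repeatedly add the object farthest
    from its nearest already-chosen medoid."""
    rnd = random.Random(seed)
    n = len(points)
    medoids = [rnd.randrange(n)]  # first center is still random
    while len(medoids) < k:
        # Greedily pick the object with the largest distance to its
        # nearest chosen medoid, spreading initial centers apart.
        nxt = max((i for i in range(n) if i not in medoids),
                  key=lambda i: min(math.dist(points[i], points[m])
                                    for m in medoids))
        medoids.append(nxt)
    return medoids
```

Because each new center is the point farthest from the current centers, well-separated clusters each tend to receive an initial medoid, which reduces the number of relocation iterations. The cost of this selection itself is O(nk) distance evaluations per added center, which is why such schemes lose their advantage as the data scale or the number of expected clusters grows.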