Survey of Clustering: Algorithms and Applications

Raymond Greenlaw, Sanpawat Kantabutra
Copyright © 2013 | Pages: 29
DOI: 10.4018/ijirr.2013040101

Abstract

This article is a survey of clustering algorithms and applications. A number of important, well-known clustering methods are discussed. The authors present a brief history of the development of the field of clustering, discuss various types of clustering, and mention some of the current research directions in the field. More specifically, top-down and bottom-up hierarchical clustering are described, and the K-Means and K-Medians clustering algorithms are presented. The concept of representative points is introduced, and a technique for discovering them is presented. Immense data sets in clustering often necessitate parallel computation, so the authors discuss issues involving parallel clustering as well. The clustering literature contains a large number of experimental results, and the authors provide references to these works throughout the article. A table comparing various clustering methods is given at the end. The authors conclude with a summary and an extensive list of references, including some of the latest works in the field.
Article Preview

1. Basics Of Clustering

1.1. Introduction

Undeniably, information has become increasingly indispensable in our lives. The widespread use of mobile phones, web-based applications, and the Internet has made such information impossible to live without. However, with the growing use of information come greater data complexity and more difficult information management. These immense data appear limitless not only in quantity but also in dimensionality. Some real-life applications generate high-dimensional data that are not easy to manage or analyze. For example, in genomics, a single gene-expression profile can yield a vector of measurements whose dimension ranges between 5,000 and 25,000 (Bühlmann, 2006). Some authors in the existing literature have dealt with such data effectively, while others have met with less success. In any case, new methods are constantly being discovered and employed to manage the ever-growing data, and, in some cases, these new methods have helped improve existing ones.

Ideally, we want to keep, extract, and maintain information, as opposed to the data itself. In addition, we want smaller quantities of useful information to simplify processing and decision making. One way to manage a data set is to group data that share certain similar characteristics; this method is called clustering (Berkhin, 2002; Dubes, 1993; Everitt, 1993; Fasulo, 1999; Ghosh, 2002; Han, Kamber, & Tung, 2001; Hartigan, 1975; Jain & Dubes, 1988; Jain, Murty, & Flynn, 1999; Kaufman & Rousseeuw, 1990; Kolatch, 2001; Mirkin, 1996; Spath, 1980). Clustering allows us to work with a small, manageable set of groups and facilitates data processing. In this article we explore the topic of clustering: we provide an introduction to the subject and describe various clustering algorithms and applications. We begin with a definition of clustering.

1.2. Clustering Definition

Intuitively, clustering is a grouping of “similar” objects, where similarity is measured by some predetermined function. More formally, given a set of n objects, clustering partitions the set into disjoint subsets such that the objects in each subset share specific common characteristics, usually specified in terms of some mathematical relation. Geometrically speaking, the objects can be viewed as points in some d-dimensional space, and clustering partitions these points into groups such that points in the same group lie near one another in that space.
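To make this definition concrete, the following minimal Python sketch (our illustration, not code from the article) partitions a small set of two-dimensional points by assigning each point to the nearest of a few given representative points under Euclidean distance. The data and representative points here are assumed purely for the example.

import math

def euclidean(p, q):
    # Euclidean distance between two 2-D points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def partition(points, representatives):
    # Partition points into clusters by nearest representative.
    # Returns one list of points per representative.
    clusters = [[] for _ in representatives]
    for p in points:
        # The index of the closest representative defines p's cluster.
        nearest = min(range(len(representatives)),
                      key=lambda i: euclidean(p, representatives[i]))
        clusters[nearest].append(p)
    return clusters

# Illustrative data: two tight groups in the plane.
points = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
reps = [(0.0, 0.0), (5.0, 5.0)]  # assumed representative points
print(partition(points, reps))
# [[(0.0, 0.1), (0.2, 0.0)], [(5.0, 5.1), (5.2, 4.9)]]

Because every point is assigned to exactly one representative, the output is a partition of the input set, as the formal definition requires.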

Figure 1 illustrates the partitioning of a set of points in two-dimensional space into four clusters, each represented by a rectangle. Points in the same cluster are closer, that is, more similar in terms of their Euclidean distance, to one another than to points in other clusters. The number of points in each cluster may or may not be perfectly balanced; here the clusters have sizes, from left to right and top to bottom, of 33, 50, 23, and 16. However, rather than talking about the 122 points separately, we can talk about just four clusters. In other words, we may view each cluster as a representative of its members. Since 122/4 ≈ 30.5 ≈ 2^4.9, this is almost a five-order-of-magnitude decrease, in base two, in the number of objects to consider. For large data sets the reduction in the number of items to discuss can be much greater still.
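As a quick check of that base-two figure, here is an illustrative computation (ours, not from the article):

import math

n_points, n_clusters = 122, 4
factor = n_points / n_clusters   # 30.5 times fewer objects to discuss
print(math.log2(factor))         # about 4.93, i.e., almost five orders
                                 # of magnitude in base two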

Figure 1. A set of points that is partitioned into four subsets
