On Clustering Techniques

Sheng Ma, Tao Li
Copyright: © 2009 | Pages: 5
DOI: 10.4018/978-1-60566-010-3.ch043

Abstract

Clustering data into sensible groupings, as a fundamental and effective tool for efficient data organization, summarization, understanding and learning, has been the subject of active research in several fields such as statistics (Jain & Dubes, 1988; Hartigan, 1975), machine learning (Dempster, Laird & Rubin, 1977), information theory (Linde, Buzo & Gray, 1980), databases (Guha, Rastogi & Shim, 1998; Zhang, Ramakrishnan & Livny, 1996) and bioinformatics (Cheng & Church, 2000), from various perspectives and with various approaches and focuses. From an application perspective, clustering techniques have been employed in a wide variety of applications such as customer segmentation, hierarchical document organization, image segmentation, microarray data analysis and psychology experiments. Intuitively, the clustering problem can be described as follows: given a set W of n entities, find a partition of W into groups such that the entities within each group are similar to each other while entities belonging to different groups are dissimilar. The entities are usually described by a set of measurements (attributes). Clustering does not use category information that labels the objects with prior identifiers. The absence of label information distinguishes cluster analysis from classification and indicates that the goal of clustering is to find a hidden structure or a compact representation of the data rather than to discriminate future data into categories.
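
As a concrete illustration of this problem statement, the following minimal sketch (an assumption for illustration, not code from the chapter) partitions a small set of entities, each described by two numerical attributes, into two groups with k-means; the data, the choice of k, and the use of scikit-learn are all illustrative.

# Minimal sketch of the clustering problem described above: partition n
# entities, each described by numerical attributes, into k groups so that
# entities within a group are similar. Data and k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# n = 6 entities, each described by 2 numerical attributes (measurements)
W = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one apparent group
    [8.0, 8.2], [7.9, 8.1], [8.3, 7.7],   # another apparent group
])

# No label information is used: k-means finds a partition of W into k groups
# by minimizing the within-cluster squared Euclidean distance.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(W)
print(kmeans.labels_)           # group assignment for each entity
print(kmeans.cluster_centers_)  # compact representation of each group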

Background

Generally, clustering problems are determined by five basic components:

  • Data representation: What is the (physical) representation of the given data set? What kinds of attributes describe the entities (e.g., numerical, categorical or ordinal)?

  • Data generation: The formal model that describes how the data set is generated. For example, the Gaussian mixture model is a common model of data generation.

  • Criterion/objective function: What are the objective functions or criteria that the clustering solutions should aim to optimize? Typical examples include entropy, maximum likelihood and within-class or between-class distance (Li, Ma & Ogihara, 2004a).

  • Optimization procedure: What is the optimization procedure for finding the solutions? The clustering problem is known to be NP-complete (Brucker, 1977), and many approximation procedures have been developed. For instance, Expectation-Maximization (EM) type algorithms are widely used to find local optima of the objective function.

  • Cluster validation and interpretation: Cluster validation evaluates the clustering results and judges the cluster structures. Interpretation is often necessary in applications. Since there is no label information, clusters are sometimes justified by ad hoc methods (such as exploratory analysis) specific to the application area. A sketch that instantiates these five components follows this list.
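
To make these components concrete, the following is a minimal sketch (an illustrative assumption, not code from this chapter) that instantiates each of them with off-the-shelf tools: numerical feature vectors as the data representation, a Gaussian mixture as the generative model, the log-likelihood as the objective, EM (as run by scikit-learn's GaussianMixture) as the optimization procedure, and the silhouette coefficient as one possible validation measure. The synthetic data and parameter choices are assumptions.

# Illustrative sketch tying the five components together (assumed example):
#  - data representation: numerical feature vectors X
#  - data generation model: Gaussian mixture
#  - objective: log-likelihood of the data
#  - optimization: the EM algorithm, as run by GaussianMixture.fit
#  - validation: silhouette coefficient on the resulting partition
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Two synthetic Gaussian components play the role of the "true" generator.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
    rng.normal(loc=5.0, scale=1.0, size=(100, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0)  # fitted via EM
labels = gmm.fit_predict(X)

print("average log-likelihood:", gmm.score(X))                 # objective value
print("silhouette coefficient:", silhouette_score(X, labels))  # validation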

For a given clustering problem, the five components are tightly coupled. The formal model is induced from the physical representation of the data; the formal model, together with the objective function, determines the clustering capability; and the optimization procedure decides how efficiently and effectively the clustering results can be obtained. The choice of the optimization procedure thus depends on the first three components. Validation of cluster structures is a way of verifying the assumptions on data generation and evaluating the optimization procedure.
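
As a further illustration of validation as a check on the data-generation assumption, the sketch below (again an assumed example, not from the chapter) compares Gaussian mixture models with different numbers of components using the Bayesian Information Criterion (BIC); the synthetic data and the candidate range of k are hypothetical choices.

# Illustrative validation sketch (assumed example): compare candidate numbers
# of mixture components with BIC, one way of checking the data-generation
# assumption behind a model-based clustering.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(5.0, 1.0, size=(100, 2))])

# Fit a Gaussian mixture for each candidate k and record its BIC.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
print(bic)
print("selected k:", min(bic, key=bic.get))  # lowest BIC wins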


Main Thrust

We review some of the current clustering techniques in this section. Figure 1 gives a summary of clustering techniques. The following further discusses traditional clustering techniques, spectral-based analysis, model-based clustering and co-clustering.

Figure 1. Summary of clustering techniques
