Exploring the Unknown Nature of Data: Cluster Analysis and Applications

Rui Xu (Missouri University of Science and Technology, USA) and Donald C. Wunsch II (Missouri University of Science and Technology, USA)
DOI: 10.4018/978-1-60566-766-9.ch001

Abstract

To classify objects based on their features and characteristics is one of the most important and primitive activities of human beings. The task becomes even more challenging when there is no ground truth available. Cluster analysis offers new opportunities for exploring the unknown nature of data through its aim to separate a finite data set, with little or no prior information, into a finite and discrete set of “natural,” hidden data structures. Here, the authors introduce and discuss clustering algorithms related to machine learning and computational intelligence, particularly those based on neural networks. Neural networks are well known for their good learning capabilities, adaptation, ease of implementation, parallelization, speed, and flexibility, and they have demonstrated many successful applications in cluster analysis. Applications of cluster analysis to real-world problems are also illustrated. Portions of the chapter are taken from Xu and Wunsch (2008).

Introduction

To classify objects based on their features and characteristics is one of the most important and primitive activities of human beings. Objects displaying similar features and properties based on certain pre-specified criteria are classified into the same group or category. The properties of a specific new object can then be inferred from this classification information. For example, when we see a cat, we know immediately that it can climb trees and likes eating fish, without actually seeing it do so. This task becomes even more challenging when no ground truth is available. Cluster analysis, also known as unsupervised classification or exploratory data analysis, addresses this problem and explores the unknown nature of data by separating a finite data set, with little or no prior information, into a finite and discrete set of “natural,” hidden data structures (Everitt et al., 2001; Hartigan, 1975; Jain and Dubes, 1988).

Clustering focuses on the partition of data objects (patterns, entities, instances, observations, units) into a certain number of clusters (groups, subsets, or categories). However, there is no universally agreed-upon and precise definition of the term cluster. Most researchers describe a cluster in terms of internal homogeneity and external separation (Everitt et al., 2001; Hansen and Jaumard, 1997; Jain and Dubes, 1988): data objects in the same cluster are required to be similar to each other, while data objects belonging to different clusters should display sufficient dissimilarity. Here, we provide simple mathematical descriptions of two types of clustering, known as partitional and hierarchical clustering, based on the discussion in Hansen and Jaumard (1997).

Given a set of input patterns X={x1,…,xj,…,xN}, where xj=(xj1, xj2,…,xjd)∈ℜd, with each measure xji called a feature (attribute, dimension, or variable):

  • 1. Hard partitional clustering attempts to seek a K-partition of X, C={C1,…,CK} (K≤N), such that

    • Ci≠ϕ, i=1,…,K; (1)

    • C1∪C2∪…∪CK=X; (2)

    • Ci∩Cj=ϕ, i, j=1,…,K and i≠j. (3)

  • 2. Hierarchical clustering attempts to construct a tree-like nested structure partition of X, H={H1,…,HQ} (Q≤N), such that Ci∈Hm, Cj∈Hl, and m>l imply Ci⊂Cj or Ci∩Cj=ϕ for all i, j≠i, m, l=1,…,Q.
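For concreteness, conditions (1)–(3) can be illustrated with Lloyd's k-means, one widely used hard partitional algorithm. The sketch below is a hypothetical minimal implementation (function and variable names are ours, not the chapter's), using a farthest-first initialization so the toy example is deterministic:

```python
import numpy as np

def kmeans_hard_partition(X, K, n_iter=100):
    """Lloyd's k-means: returns a hard K-partition of X in which every
    object carries exactly one label, so conditions (1)-(3) hold:
    clusters are non-empty, cover X, and are pairwise disjoint."""
    # Farthest-first initialization: start from X[0], then repeatedly
    # pick the object farthest from all centroids chosen so far.
    centroids = [X[0]]
    for _ in range(K - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        # Hard assignment: each object joins its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its members;
        # keep the old centroid if a cluster happens to empty.
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centroids[k] for k in range(K)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated groups of points in the plane.
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
labels, _ = kmeans_hard_partition(X, K=2)
assert len(np.unique(labels)) == 2   # both clusters non-empty (Eq. 1)
assert labels.shape == (20,)         # every object assigned to exactly one cluster (Eqs. 2-3)
```

Because each object receives exactly one label, the induced clusters automatically cover X and are pairwise disjoint; the only condition that requires care in practice is non-emptiness (Eq. 1), handled here by retaining a centroid whose cluster empties.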

As shown in Eq. 3, each data object is only associated with a single cluster. In some cases, it may also be possible that an object is related to several clusters. This can be achieved by allowing the object to belong to all K clusters with a degree of membership, ui,j∈[0,1], which represents the membership coefficient of the jth object in the ith cluster and satisfies the following two constraints (Zadeh, 1965):

    • u1,j+u2,j+…+uK,j=1, for all j; (4)

    • 0<ui,1+ui,2+…+ui,N<N, for all i. (5)
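A brief sketch of how such membership coefficients can be computed, assuming the fuzzy c-means membership rule as one concrete choice (the function name and fuzzifier parameter m are ours; any rule satisfying Eq. 4 would serve):

```python
import numpy as np

def fuzzy_memberships(X, centroids, m=2.0, eps=1e-12):
    """Membership matrix U with U[i, j] = u_{i,j}, the degree to which
    object j belongs to cluster i, using the fuzzy c-means rule
    (an assumed concrete choice; any rule obeying Eq. 4 would do)."""
    # d[i, j]: distance from centroid i to object j (eps avoids division by zero).
    d = np.linalg.norm(centroids[:, None, :] - X[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))
    # Normalizing each column enforces constraint (4): memberships of
    # each object over the K clusters sum to 1.
    return inv / inv.sum(axis=0, keepdims=True)

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
C = np.array([[0.0, 0.0], [5.0, 5.0]])
U = fuzzy_memberships(X, C)
assert np.allclose(U.sum(axis=0), 1.0)  # Eq. 4 holds for every object
```

An object lying exactly on a centroid receives membership near 1 in that cluster, while an object between centroids splits its membership among them, in contrast to the all-or-nothing assignment of Eq. 3.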

Key Terms in this Chapter

Adaptive Resonance Theory: Adaptive resonance theory is a learning theory hypothesizing that resonance in neural circuits can trigger fast learning. It was developed as a solution to the stability-plasticity dilemma and can learn arbitrary input patterns in a stable, fast, and self-organizing way.

Document Clustering: Document clustering is the organization of a large number of text documents into a small number of meaningful clusters, where each cluster represents a specific topic.

Partitional Clustering: Partitional clustering directly divides data objects into some pre-specified number of clusters without the hierarchical structure.

Hierarchical Clustering: Hierarchical clustering (HC) organizes data with a sequence of nested partitions, either from singleton clusters to a cluster including all individuals or vice versa.

Traveling Salesman Problem: Given a complete undirected graph G, where each edge between a pair of vertices is weighted with a non-negative integer cost, the common form of the traveling salesman problem is equivalent to finding the shortest Hamiltonian cycle: a tour over G that begins and ends at the same vertex and visits every other vertex exactly once.

Self-Organizing Feature Maps: Self-organizing feature maps belong to the category of soft competitive learning clustering algorithms and aim to represent high-dimensional input patterns with prototype vectors that can be visualized in a two-dimensional lattice structure or one-dimensional linear structure, while preserving the proximity relationships of the original data as much as possible.

Group Technology: Group technology refers to the study and implementation of information retrieval systems that can retrieve, store, and compare designs.

Neural Gas: Neural gas belongs to the class of self-organizing neural networks and adaptively determines its neighborhood update by using a neighborhood ranking of the prototype vectors within the input space.
