Cluster Analysis for Outlier Detection

Frank Klawonn, Frank Rehm
Copyright: © 2009 | Pages: 5
DOI: 10.4018/978-1-60566-010-3.ch035

Abstract

For many applications in knowledge discovery in databases, finding outliers, i.e. rare events, is of importance. Outliers are observations that deviate so significantly from the rest of the data that they seem to be generated by a different process (Hawkins, 1980). Such outlier objects often carry information about untypical behavior of the system. However, outliers bias the results of many data mining methods, such as the mean value, the standard deviation, or the positions of the prototypes in k-means clustering (Estivill-Castro, 2004; Keller, 2000). Identifying outliers is therefore a crucial step before the data are analyzed or processed further with more sophisticated data mining techniques. Usually, data objects are considered outliers when they occur in a region of extremely low data density. Many clustering techniques that can deal with noisy data and identify outliers, such as possibilistic clustering (PCM) (Krishnapuram & Keller, 1993; Krishnapuram & Keller, 1996) or noise clustering (NC) (Dave, 1991; Dave & Krishnapuram, 1997), need good initializations or lack adaptability to different cluster sizes (Rehm, Klawonn & Kruse, 2007). Distance-based approaches (Knorr, 1998; Knorr, Ng & Tucakov, 2000) take a global view of the data set and can hardly handle data sets containing regions of different data density (Breunig, Kriegel, Ng & Sander, 2000). In this work we present an approach that combines a fuzzy clustering algorithm (Höppner, Klawonn, Kruse & Runkler, 1999), or any other prototype-based clustering algorithm, with statistical distribution-based outlier detection.
Chapter Preview

Background

Prototype-based clustering algorithms approximate a feature space by means of an appropriate number of prototype vectors, where each prototype vector is located in the center of the group of data (the cluster) that belongs to the respective prototype. Clustering usually aims at partitioning a data set into groups or clusters of data such that data assigned to the same cluster are similar and data from different clusters are dissimilar. With this partitioning concept in mind, an important aspect in typical applications of cluster analysis is the identification of the number of clusters in a data set. However, when we are interested in identifying outliers, the exact number of clusters is irrelevant (Georgieva & Klawonn, 2006). If one prototype covers two or more data clusters, or if two or more prototypes compete for the same data cluster, this does not matter as long as the actual outliers are identified and not assigned to a proper cluster. The number of prototypes used for clustering depends, of course, on the number of expected clusters, but also on the distance measure and hence on the shape of the expected clusters. Since this information is usually not available, it is often recommended to use the Euclidean distance measure with a rather generous number of prototypes, as in the sketch below.
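The following minimal sketch illustrates this setting. It assumes Python with NumPy and scikit-learn, a synthetic two-dimensional data set, and a deliberately over-estimated number of prototypes; none of these choices are prescribed by the chapter.

```python
# Sketch: approximating a feature space with a generous number of prototypes
# using standard k-means and the Euclidean distance (library, data and
# parameter choices are assumptions for illustration only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic data: two dense clusters plus a handful of scattered outliers.
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.3, size=(200, 2)),
    rng.uniform(low=-3.0, high=8.0, size=(5, 2)),
])

# Over-estimating the number of clusters is harmless for outlier detection:
# several prototypes may share one data cluster, as long as the outliers
# remain far away from every prototype.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(data)
prototypes = km.cluster_centers_
print(prototypes.shape)  # (8, 2)
```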

One of the most frequently used statistical tests for outlier detection is Grubbs' test (Grubbs, 1969). This test detects outliers in a univariate data set, one at a time: the detected outlier is removed from the data set and the test is repeated until no further outlier is found.
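A minimal sketch of this iterated test is given below. It assumes Python with NumPy and SciPy and the conventional two-sided critical value at significance level alpha; the chapter itself does not fix a particular implementation.

```python
# Sketch: iterated two-sided Grubbs' test for a univariate sample
# (significance level and implementation details are assumptions).
import numpy as np
from scipy import stats


def grubbs_outliers(x, alpha=0.05):
    """Return the indices of the values flagged as outliers, one per iteration."""
    x = np.asarray(x, dtype=float)
    idx = np.arange(len(x))
    outliers = []
    while len(x) > 2:
        n = len(x)
        mean, std = x.mean(), x.std(ddof=1)
        if std == 0.0:
            break
        deviations = np.abs(x - mean)
        i = int(np.argmax(deviations))
        g = deviations[i] / std
        # Critical value of the Grubbs statistic from the t-distribution.
        t = stats.t.ppf(1.0 - alpha / (2.0 * n), n - 2)
        g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
        if g <= g_crit:
            break
        outliers.append(int(idx[i]))   # record the index in the original sample
        x = np.delete(x, i)
        idx = np.delete(idx, i)
    return outliers


print(grubbs_outliers([2.1, 2.3, 1.9, 2.0, 2.2, 8.7]))  # flags the value 8.7
```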

The outlier detection approach we propose in this work is a modified version of the method proposed in (Santos-Pereira & Pires, 2002) and combines two different techniques. In the first step, we partition the data set with the fuzzy c-means clustering algorithm so that the feature space is approximated by an adequate number of prototypes. The prototypes are placed in the centers of regions with a high density of feature vectors. Since outliers are far away from the typical data, they influence the placement of the prototypes.
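To make this first step concrete, the sketch below shows a plain-NumPy fuzzy c-means implementation with the common fuzzifier m = 2; the function name, parameters, and defaults are chosen here for illustration and are not taken from the chapter.

```python
# Sketch: standard fuzzy c-means in plain NumPy (names and defaults are
# assumptions; any prototype-based clustering algorithm could be used instead).
import numpy as np


def fuzzy_c_means(data, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Return (prototypes, memberships); memberships has shape (c, n)."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    u = rng.random((c, n))
    u /= u.sum(axis=0)                       # memberships of each point sum to 1
    for _ in range(max_iter):
        um = u ** m
        # Prototype update: membership-weighted means of the data.
        prototypes = um @ data / um.sum(axis=1, keepdims=True)
        # Squared Euclidean distances, clipped to avoid division by zero.
        d2 = ((data[None, :, :] - prototypes[:, None, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)
        # Membership update of the standard FCM objective function.
        inv = d2 ** (-1.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=0, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return prototypes, u
```

The resulting prototypes and membership degrees then serve as input for the subsequent statistical distribution-based outlier detection step.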
