Compression-Based Data Mining

Eamonn Keogh, Li Keogh, John C. Handley
Copyright: © 2009 | Pages: 8
DOI: 10.4018/978-1-60566-010-3.ch045

Abstract

Compression-based data mining is a universal approach to clustering, classification, dimensionality reduction, and anomaly detection. It is motivated by results in bioinformatics, learning, and computational theory that are not well known outside those communities. It is based on an easily computed compression dissimilarity measure (CDM) between objects obtained by compression. The basic concept is easy to understand, but its foundations are rigorously formalized in information theory. The similarity between any two objects (XML files, time series, text strings, molecules, etc.) can be obtained using a universal lossless compressor. The compression dissimilarity measure is the size of the compressed concatenation of the two objects divided by the sum of the compressed sizes of each of the objects. The intuition is that if two objects are similar, a lossless compressor will remove the redundancy between them, and the resulting size of the concatenated object should be close to the size of the larger of the two compressed constituent objects. The larger the CDM between two objects, the more dissimilar they are. Classification, clustering, and anomaly detection algorithms can then use this dissimilarity measure in a wide variety of applications. Many of these are described in (Keogh et al., 2004), (Keogh et al., 2007), and references therein. This approach works well when (1) objects are large and it is computationally expensive to compute other distances (e.g., very long strings); or (2) there are no natural distances between the objects, or none that are reasonable from first principles. CDM is “parameter-free” and thus avoids over-fitting the data or relying upon assumptions that may be incorrect (Keogh et al., 2004).

Introduction

Compression-based data mining is a universal approach to clustering, classification, dimensionality reduction, and anomaly detection. It is motivated by results in bioinformatics, learning, and computational theory that are not well known outside those communities. It is based on an easily computed compression dissimilarity measure (CDM) between objects obtained by compression. The basic concept is easy to understand, but its foundations are rigorously formalized in information theory. The similarity between any two objects (XML files, time series, text strings, molecules, etc.) can be obtained using a universal lossless compressor. The compression dissimilarity measure is the size of the compressed concatenation of the two objects divided by the sum of the compressed sizes of each of the objects. The intuition is that if two objects are similar, a lossless compressor will remove the redundancy between them, and the resulting size of the concatenated object should be close to the size of the larger of the two compressed constituent objects. The larger the CDM between two objects, the more dissimilar they are.
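
To make the measure concrete, the following is a minimal sketch of CDM computed with Python's standard zlib compressor. The chapter allows any off-the-shelf lossless compressor; the choice of zlib, the helper names, and the toy strings here are illustrative only.

```python
import zlib

def compressed_size(obj: bytes) -> int:
    # Size in bytes after lossless compression (zlib at maximum effort).
    return len(zlib.compress(obj, 9))

def cdm(x: bytes, y: bytes) -> float:
    # Compression dissimilarity measure: C(xy) / (C(x) + C(y)).
    return compressed_size(x + y) / (compressed_size(x) + compressed_size(y))

# Two similar strings compress well together, so their CDM is expected to be smaller.
a = b"abcabcabcabcabc" * 60
b = b"abcabcabcabcabd" * 60
c = b"qwertyuiopasdfg" * 60
print(cdm(a, b))  # expected to be noticeably smaller than cdm(a, c)
print(cdm(a, c))
```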

Classification, clustering, and anomaly detection algorithms can then use this dissimilarity measure in a wide variety of applications. Many of these are described in (Keogh et al., 2004), (Keogh et al., 2007), and references therein. This approach works well when (1) objects are large and it is computationally expensive to compute other distances (e.g., very long strings); or (2) there are no natural distances between the objects, or none that are reasonable from first principles. CDM is “parameter-free” and thus avoids over-fitting the data or relying upon assumptions that may be incorrect (Keogh et al., 2004).

CDM enjoys the following properties:

1. Because it makes no distributional or modeling assumptions about the data, it allows true exploratory data mining.
2. The accuracy of CDM is often greatly superior to that of parameter-laden or model-based algorithms, even if we allow these algorithms to search exhaustively over their parameter spaces.
3. CDM uses compression algorithms, which are typically space and time efficient. As a consequence, CDM can be much more efficient than other algorithms, in some cases by three or four orders of magnitude.
4. CDM makes no assumption about the format of the data, nor does it require extensive data cleaning to be effective.
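
As a rough illustration of how a mining algorithm can plug in the measure, the sketch below uses CDM as the distance in a one-nearest-neighbour classifier. The toy data, labels, and function names are invented for illustration and are not taken from the chapter.

```python
import zlib

def cdm(x: bytes, y: bytes) -> float:
    # Compression dissimilarity measure: C(xy) / (C(x) + C(y)).
    def c(s: bytes) -> int:
        return len(zlib.compress(s, 9))
    return c(x + y) / (c(x) + c(y))

def nn_classify(query: bytes, labeled_examples):
    # Assign the label of the training object with the smallest CDM to the query.
    best_obj, best_label = min(labeled_examples, key=lambda ex: cdm(query, ex[0]))
    return best_label

# Toy training set: two artificial "classes" of byte strings.
training = [
    (b"ACGTACGTACGT" * 50, "tandem-repeat"),
    (b"ACGTACGAACGT" * 50, "tandem-repeat"),
    (b"GATTACATTGCCAGTTAACG" * 30, "mixed"),
    (b"TTGCCAGATTACAGTAACGT" * 30, "mixed"),
]
print(nn_classify(b"ACGTACTTACGT" * 50, training))  # expected: "tandem-repeat"
```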

Background

The use of data compression to classify sequences is also closely related to the Minimum Description Length (MDL) and Minimum Message Length (MML) principles (Grünwald, 2007), (Wallace, 2005). See keyword definitions at the end of the article. The MDL/MML principle has generated an extensive body of literature in the data mining community. CDM is a related concept, but it requires no probabilistic concepts and can be universally applied.
