Comparative Study of Incremental Learning Algorithms in Multidimensional Outlier Detection on Data Stream

Simon Fong, Dong Han, Athanasios V. Vasilakos
DOI: 10.4018/978-1-4666-8513-0.ch004

Abstract

Multi-dimensional outlier detection (MOD) over data streams is one of the most significant data stream mining techniques. When multivariate data stream in at high speed, outliers must be detected efficiently and accurately. Conventional outlier detection methods observe the full dataset and its statistical distribution, assuming the data are stationary. This approach has an inherent limitation: it presumes the availability of the entire dataset. In modern applications, especially those operating in real-time environments, data arrive as live feeds that are dynamic and ever evolving in their statistical distribution and concepts. Outlier detection should therefore no longer be done in batches, but in an incremental manner. In this chapter, we investigate this important concept of MOD. In particular, we evaluate the effectiveness of a collection of incremental learning algorithms, which are the underlying pattern recognition mechanisms for MOD. Specifically, we combine incremental learning algorithms with three types of MOD - Global Analysis, Cumulative Analysis, and Lightweight Analysis with Sliding Window - and put different classification algorithms under test for performance comparison.
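
To fix the terminology, here is a minimal sketch of the lightweight, sliding-window mode of analysis, using a simple running Z test as a stand-in for the incremental classifiers evaluated in the chapter; the window size, threshold, and function name are illustrative assumptions, not the chapter's actual method.

```python
from collections import deque
import random

def sliding_window_outliers(stream, window=50, threshold=3.0):
    """Lightweight analysis: score each arrival against a sliding window."""
    buf = deque(maxlen=window)  # only the most recent `window` items are kept
    for x in stream:
        if len(buf) >= 2:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / (len(buf) - 1)
            std = var ** 0.5
            if std > 0 and abs(x - mean) / std > threshold:
                yield x  # flagged before the model is updated with x
        buf.append(x)  # incremental update: no access to the full dataset

# Usage: a stationary stream with an injected spike at the end.
random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + [9.0]
print(list(sliding_window_outliers(data)))  # the spike (and any chance extremes)

# By contrast, global analysis would score against statistics over the full
# dataset, and cumulative analysis would keep growing `buf` without evicting.
```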

1. Introduction: Background Of Outlier Detection Techniques

Numerous researchers have attempted to apply different techniques to detecting outliers, which are generally defined as follows. "An outlier is an observation that deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism" (Hawkins, 1980). "An outlier is an observation (or subset of observations) which appears to be inconsistent with the remainder of the dataset" (Barnett & Lewis, 1994).

Researchers generally focus on observing data irregularities: how each data instance relates to the others (the majority), and how such instances affect classification performance. Most of these techniques fall into three categories: distribution-based, distance-based, and density-based methods.

1.1 Distribution-Based Outlier Detection Methods

These methods are commonly based on statistical analysis. Detection techniques proposed in the literature range from finding extreme values beyond a certain number of standard deviations to complex normality tests. However, most data distribution models are applied directly to the incoming test data, and the data streams are often assumed to be univariate. Methods designed for univariate data may therefore be unsuitable for even moderately high-dimensional data. Grubbs proposed a method that addresses this by computing, for each attribute, a Z score: the difference between a test value and the attribute's mean, divided by the standard deviation computed from all values of that attribute. The Z value is then tested against a critical value at some significance level, typically ranging from 1% to 5%. No pre-defined parameters are needed to initiate these techniques because they are all derived directly from the data. Nevertheless, success largely depends on the availability of exemplars in the data: the more records available, the higher the chance of obtaining a statistically representative sample (Grubbs, 1969).
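
As a concrete illustration, the following is a minimal sketch of this per-attribute Z test. The critical value 2.2 is an illustrative placeholder; in practice it is taken from a table derived from the t-distribution for the chosen significance level and sample size.

```python
import math

def grubbs_z_scores(values):
    """Return the Z score of each value: |x - mean| / std over all values."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    return [abs(x - mean) / std for x in values]

def flag_outliers(values, critical=2.2):
    """Flag values whose Z score exceeds the critical value (assumed here)."""
    return [z > critical for z in grubbs_z_scores(values)]

# Usage: applied to one attribute (column) of the dataset at a time.
attribute = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.3, 9.9, 10.1, 14.7]
print(flag_outliers(attribute))  # only the last value (14.7) is flagged
```

Note that the whole column is needed to compute the mean and standard deviation, which is exactly the batch assumption the chapter argues against for streams.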

In the work of Aggarwal and Yu (2001), the authors adopted a special outlier detection approach in which the behavior projected by the dataset is examined. If a point is sparse in a low-dimensional projection, the data it represents are deemed abnormal and are removed. Brute force or, at best, some form of heuristics is used to determine the projections. A similar method, outlined by Zhang, Ramakrishnan and Livny (1996), builds a height-balanced tree containing clustering features in its non-leaf and leaf nodes; leaf nodes with a low density are then considered outliers and filtered out, as sketched below.
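
The sketch below shows the clustering-feature (CF) summary underlying that tree: a CF stores the triple (N, LS, SS) for the points it absorbs, so it can be updated incrementally without retaining raw data. The density threshold used to filter leaves is an illustrative assumption, not a value from the original paper.

```python
class ClusteringFeature:
    """CF summary of a group of points: count, linear sum, squared sum."""
    def __init__(self, dim):
        self.n = 0             # N: number of points absorbed
        self.ls = [0.0] * dim  # LS: per-dimension linear sum
        self.ss = [0.0] * dim  # SS: per-dimension sum of squares

    def add(self, point):
        """Absorb one point incrementally; no raw data is retained."""
        self.n += 1
        for i, x in enumerate(point):
            self.ls[i] += x
            self.ss[i] += x * x

    def centroid(self):
        return [s / self.n for s in self.ls]

# Usage: leaf entries whose point count falls below a density threshold
# are treated as candidate outliers and filtered out.
leaves = [ClusteringFeature(2), ClusteringFeature(2)]
for p in [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9)]:
    leaves[0].add(p)
leaves[1].add((8.0, 8.5))  # an isolated point lands in its own leaf

print(leaves[0].centroid())                        # [1.0, 1.0]
sparse = [cf for cf in leaves if cf.n < 2]
print(len(sparse))                                 # 1: the sparse leaf is dropped
```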
