Outlier Detection in Big Data

Victoria J. Hodge
Copyright © 2014 | Pages: 10
DOI: 10.4018/978-1-4666-5202-6.ch157


Outlier detection (or anomaly detection) is a fundamental task in data mining. Outliers are data that deviate from the norm, and outlier detection is often compared to “finding a needle in a haystack.” Outliers can, however, generate high value when found: cost savings, improved efficiency, compute-time savings, fraud reduction and failure prevention. Detection can identify faults before they escalate, with potentially catastrophic consequences. Big Data refers to large, dynamic collections of data. Such vast and complex data are problematic for traditional outlier detection methods to process, but Big Data also provides considerable opportunity to uncover new outliers and data relationships. This chapter highlights some of the research issues for outlier detection in Big Data, covers the solutions used and the research directions taken, and analyzes some current outlier detection approaches for Big Data applications.
Chapter Preview


This chapter will examine the issues posed by Big Data for the task of outlier detection. An outlier (Hodge, 2011) (often called an anomaly (Chandola, Banerjee, & Kumar, 2009) in the literature) is a particular data point or, in some instances, a small set of data points that is inconsistent with the rest of the data population as shown in Figure 1.

Figure 1.

The graph on the left includes three outliers (A-C) and a small cluster of outliers. The graph on the right represents time-series data with a single point outlier (A) and an outlying section (B).
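A point outlier of the kind marked A in Figure 1 can be illustrated with a simple statistical check: flag any value whose z-score against the sample mean exceeds a threshold. This is a minimal sketch for illustration only; the threshold value is an assumption, not a method prescribed by the chapter.

```python
import statistics

def zscore_outliers(data, threshold=3.0):
    """Return values whose z-score exceeds the threshold.

    Illustrative sketch only; the threshold is a rule of thumb,
    and with small samples a lower threshold may be needed.
    """
    mean = statistics.fmean(data)
    stdev = statistics.pstdev(data)
    return [x for x in data if abs(x - mean) / stdev > threshold]

# One sensor reading stands far from the rest of the population.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 25.0]
print(zscore_outliers(readings, threshold=2.5))  # -> [25.0]
```

Note that the outlier itself inflates the mean and standard deviation it is tested against, which is one reason more robust techniques (discussed below) are used in practice.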


“Big Data” refers to large, dynamic collections of data. Existing sources generate ever more data, while new decentralized sources are added every day as interconnection and data exchange become easier. Typical features of Big Data are: data comprising trillions of loosely structured records; delivery from heterogeneous data sources in heterogeneous data formats; streaming, often in real time and at high volume; and distribution, either across local computer clusters or across geographically distinct sites, driven by Big Data mechanisms such as cloud computing and online services. Such data may be problematic for traditional outlier tools and techniques to process. This chapter studies when and where outlier detection is used, examines the problems posed and the solutions produced for outlier detection on Big Data, and then analyzes future directions for outlier detection in Big Data.
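For data that is streamed in real time and at high volume, detection must typically run in a single pass, updating its model of normality incrementally rather than revisiting the full dataset. The following is a minimal sketch of one-pass flagging using Welford's online mean/variance update; the class name, warm-up period and threshold are illustrative assumptions, not methods taken from the chapter.

```python
class StreamingDetector:
    """Illustrative one-pass outlier flagging for streamed data,
    using Welford's online mean/variance update. Names and
    thresholds are assumptions for this sketch."""

    def __init__(self, threshold=3.0, warmup=30):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.threshold = threshold
        self.warmup = warmup   # observations before flagging starts

    def observe(self, x):
        # Score before updating, so an outlier does not skew the
        # statistics it is being compared against.
        is_outlier = False
        if self.n >= self.warmup:
            std = (self.m2 / self.n) ** 0.5
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                is_outlier = True
        # Welford's incremental update of mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_outlier
```

A detector like this keeps only constant state per stream, which is what makes it viable when the data is too large or too fast to store and reprocess.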



Outlier detection or anomaly detection has been used for centuries to detect and remove anomalous data points from data. The earliest methods were ad hoc but today principled, systematic techniques are used. These include (Hodge, 2011): distance-based; density-based; statistical (including regression); machine learning (including decision trees, expert systems and clustering); information theory; spectral decomposition; neural networks; support vector machines (SVMs); and natural computation derived from artificial immune systems. Outlier detection distinguishes outlier data from normal data using either: abnormality detection, which compares new data to a model of normality (or a model of abnormality); or outlier classification, which classifies new data as either normal or abnormal. Outlier detection can also use time-series or sequence analysis to detect changes in temporal patterns.
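As one concrete instance of the distance-based family above, each point can be scored by its mean distance to its k nearest neighbours, so that isolated points receive high scores. This brute-force O(n²) sketch is for illustration only; Big Data settings require indexing or approximate neighbour search, and the function and parameter names here are assumptions, not from the chapter.

```python
import math

def knn_outlier_scores(points, k=3):
    """Distance-based outlier scoring: each point's score is its
    mean Euclidean distance to its k nearest neighbours.

    Brute-force O(n^2) sketch; real Big Data pipelines would use
    spatial indexes or approximate nearest-neighbour methods.
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(points) if j != i
        )
        scores.append(sum(dists[:k]) / k)
    return scores

# Four clustered points and one distant point.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
scores = knn_outlier_scores(pts, k=2)
# (10, 10) receives by far the largest score.
```

Thresholding these scores yields abnormality detection; training a classifier on labelled scores instead would be outlier classification, matching the two modes described above.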

Key Terms in this Chapter

Fault Detection: The task of finding failures in hardware or software.

Anomaly: Datum that deviates from the norm (often used interchangeably with “outlier”).

Data Mining: The process of analyzing data from different perspectives to predict future behavior and trends.

Outlier Detection: The task of finding outliers in a business’s data. It is considered a fundamental task in data mining.

Outlier: Datum that deviates from the norm.

Big Data: Large, dynamic and unstructured collections of data often distributed and streamed.

Business Analytics: The analysis of a business’s data to gain insight into the business.

Distributed: Data storage and processing that is performed in different locations connected by transmission links.

Anomaly Detection: The task of finding anomalies in a business’s data. Some authors use “anomaly detection” to specifically refer to network intrusion detection.
