Big Data Analysis and Mining

Copyright: © 2018 | Pages: 11
DOI: 10.4018/978-1-5225-2255-3.ch030

Abstract

As a new form of information science and technology, big data analysis and mining aims to discover implicit, previously unknown, and potentially useful information and knowledge from big databases that contain high volumes of valuable, veracious data collected or generated at high velocity from a wide variety of data sources. Among the different big data mining tasks, this chapter focuses on big data analysis and mining for frequent patterns. By relying on the MapReduce programming model, researchers only need to specify the “map” and “reduce” functions to discover frequent patterns from (1) big databases of precise data, in either a breadth-first or a depth-first manner, and/or (2) big databases of uncertain data. Such a big data analysis and mining process can thus be sped up. The resulting (constrained or unconstrained) frequent patterns mined from big databases provide users with new insights and a sound understanding of their patterns. Such knowledge is useful in many real-life information science and technology applications.
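To make the programming model concrete, the following minimal Python sketch (an illustration only, not the chapter's actual implementation) mimics the “map” and “reduce” functions for support counting: the map step emits (candidate k-itemset, 1) pairs from each transaction, and the reduce step sums the counts and keeps only itemsets that meet a minimum support threshold. The toy transactions, the function names, and the choice of k = 2 are hypothetical.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical toy transactions; in a real big-data setting, these would be
# partitioned across worker nodes by the MapReduce framework.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}]

def map_phase(transaction, k):
    """Map step: emit (candidate k-itemset, 1) for every k-subset of a transaction."""
    for itemset in combinations(sorted(transaction), k):
        yield itemset, 1

def reduce_phase(pairs, minsup):
    """Reduce step: sum the counts per itemset and keep those whose
    support meets the minimum support threshold."""
    counts = defaultdict(int)
    for itemset, count in pairs:
        counts[itemset] += count
    return {itemset: c for itemset, c in counts.items() if c >= minsup}

# The shuffle step is simulated here by simply collecting all emitted pairs.
pairs = [pair for t in transactions for pair in map_phase(t, k=2)]
print(reduce_phase(pairs, minsup=2))  # e.g. {('a', 'c'): 2, ('b', 'c'): 2}
```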
Chapter Preview

Background

Since the introduction of the research problem of frequent pattern mining (Agrawal, Imieliński, & Swami, 1993), numerous algorithms have been proposed (Hipp, Güntzer, & Nakhaeizadeh, 2000; Ullman, 2000; Ceglar & Roddick, 2006). Notable ones include the classical Apriori algorithm (Agrawal & Srikant, 1994) and its variants such as the Partition algorithm (Savasere, Omiecinski, & Navathe, 1995). The Apriori algorithm uses a level-wise breadth-first bottom-up approach with a candidate generate-and-test paradigm to mine frequent patterns from transactional databases of precise data. The Partition algorithm divides the databases into several partitions and applies the Apriori algorithm to each partition to obtain patterns that are locally frequent in the partition. As being locally frequent is a necessary condition for a pattern to be globally frequent, these locally frequent patterns are tested to see if they are globally frequent in the databases. To avoid the candidate generate-and-test paradigm, the tree-based FP-growth algorithm (Han, Pei, & Yin, 2000) was proposed. It uses a depth-first pattern-growth (i.e., divide-and-conquer) approach to mine frequent patterns using a tree structure that captures the contents of the databases. Specifically, the algorithm recursively extracts appropriate tree paths to form projected databases containing relevant transactions and to discover frequent patterns from these projected databases.
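As a rough, purely illustrative sketch (not the cited papers' actual code), the candidate generate-and-test paradigm of Apriori can be expressed as follows on a small in-memory database; the function names and data layout are assumptions made only for this example.

```python
from itertools import combinations

def apriori_gen(frequent_prev):
    """Join frequent k-itemsets into candidate (k+1)-itemsets, then prune
    any candidate that has an infrequent k-subset (Apriori pruning)."""
    frequent_prev = set(frequent_prev)
    k = len(next(iter(frequent_prev)))
    candidates = set()
    for a in frequent_prev:
        for b in frequent_prev:
            union = a | b
            if len(union) == k + 1 and all(
                frozenset(subset) in frequent_prev
                for subset in combinations(union, k)
            ):
                candidates.add(union)
    return candidates

def apriori(transactions, minsup):
    """Level-wise, breadth-first mining of frequent itemsets from a small
    in-memory transactional database of precise data."""
    transactions = [set(t) for t in transactions]
    items = {frozenset([item]) for t in transactions for item in t}
    level = {c for c in items if sum(c <= t for t in transactions) >= minsup}
    frequent = set(level)
    while level:
        candidates = apriori_gen(level)
        level = {c for c in candidates
                 if sum(c <= t for t in transactions) >= minsup}
        frequent |= level
    return frequent

print(apriori([{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}], minsup=2))
```

The prune step inside apriori_gen relies on the anti-monotonicity of support: every subset of a frequent itemset must itself be frequent, so a candidate with any infrequent subset can be discarded before its support is ever counted.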

In various real-life business, engineering, and scientific applications in modern organizations and society, the available data are not precise but uncertain (Tong et al., 2012; Leung, Cuzzocrea, & Jiang, 2013; Leung, MacKinnon, & Tanbeer, 2014; Jiang & Leung, 2015; Ahmed et al., 2016). Examples include sensor data and privacy-preserving data. Over the past few years, several algorithms have been proposed to mine and analyze such uncertain data; the tree-based UF-growth algorithm (Leung, Mateo, & Brajczuk, 2008) is one example.
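As a minimal sketch of the key quantity involved (under the common assumption, used by UF-growth-style algorithms, that items' existential probabilities are independent), the expected support of an itemset in an uncertain database can be computed as follows; the sample database and the helper name are hypothetical.

```python
from math import prod

# Hypothetical uncertain transactions: each item carries an existential
# probability (e.g., a sensor reading believed present with probability p).
uncertain_db = [
    {"a": 0.9, "b": 0.6},
    {"a": 0.8, "c": 0.7},
    {"a": 0.5, "b": 0.4, "c": 0.3},
]

def expected_support(itemset, db):
    """Expected support under the independence assumption: the sum, over
    transactions containing all items of the itemset, of the product of
    those items' existential probabilities."""
    return sum(
        prod(t[item] for item in itemset)
        for t in db
        if all(item in t for item in itemset)
    )

print(expected_support({"a", "b"}, uncertain_db))  # ≈ 0.74 (= 0.9*0.6 + 0.5*0.4)
```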

Key Terms in this Chapter

MapReduce: A high-level programming model, which uses the “map” and “reduce” functions, for processing high volumes of data.

Itemset: A set of items.

Anti-Monotonic Constraint: A constraint C such that, if an itemset S satisfies C, then any subset of S also satisfies C (a minimal illustrative sketch appears after this list of key terms).

Big Data: High-velocity, high-value, and/or high-variety data with volumes beyond the ability of commonly used software to capture, manage, and process within a tolerable elapsed time. Such big data necessitate new forms of processing to deliver high veracity (and low vulnerability) and to enable enhanced decision making, insight, knowledge discovery, and process optimization.

Frequent Pattern (Or Frequent Itemset): An itemset or a pattern whose actual support (or expected support) meets or exceeds the user-specified minimum support threshold.

Succinct Constraint: A constraint C such that all itemsets satisfying C can be expressed in terms of powersets of a fixed number of succinct sets using the set union and/or set difference operators. A succinct set is an itemset in which items are selected from the domain using the usual SQL selection operator. In simple terms, a constraint C is succinct if all and only those itemsets satisfying C can be explicitly and precisely generated using some precise “formula”.

Frequent Pattern Mining: A search and analysis of high volumes of valuable data for implicit, previously unknown, and potentially useful patterns consisting of frequently co-occurring events or objects. It helps discover frequently co-located trade fairs and frequently purchased bundles of merchandise items.

Data Mining: Non-trivial extraction of implicit, previously unknown and potentially useful information from data.
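To make the anti-monotonic constraint definition above concrete, here is a minimal, purely illustrative sketch: a “total price of the itemset is at most a given budget” constraint is anti-monotonic (with non-negative prices), so any subset of a satisfying itemset also satisfies it, and once an itemset violates the constraint, all of its supersets can be pruned from the search. The price table and budget are assumptions for this example only.

```python
# Hypothetical item prices, used only to illustrate an anti-monotonic constraint.
prices = {"a": 20, "b": 50, "c": 90}

def satisfies_max_total_price(itemset, budget=100):
    """Anti-monotonic constraint C: the total price of the itemset is at most
    the budget. If an itemset satisfies C, every subset of it does too, so a
    violating itemset and all of its supersets can be pruned during mining."""
    return sum(prices[item] for item in itemset) <= budget

print(satisfies_max_total_price({"a", "b"}))       # True: 70 <= 100
print(satisfies_max_total_price({"a", "b", "c"}))  # False: 160 > 100, prune supersets
```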
