On the Effectiveness of Hybrid Canopy with Hoeffding Adaptive Naive Bayes Trees: Distributed Data Mining for Big Data Analytics

Mrutyunjaya Panda
Copyright © 2017 | Pages: 14
DOI: 10.4018/IJAEC.2017040102


Big Data, due to its complicated and diverse nature, poses many challenges for extracting meaningful observations. This calls for smart, efficient algorithms that can cope with the computational complexity and memory constraints arising from iterative processing. One way to address this issue is parallel computing, where a single machine or multiple machines work simultaneously, dividing the problem into sub-problems and assigning private memory to each sub-problem. Clustering analysis has recently proven useful for handling such huge data. Although much research on Big Data analysis is ongoing, this work applies Canopy and K-Means++ clustering to process large-scale data in a shorter amount of time without memory constraints. To assess the suitability of the approach, several datasets are considered, ranging from small to very large and spanning diverse fields of application. The experimental results indicate that the proposed approach is fast and accurate.
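The K-Means++ half of the approach named in the abstract refers to the standard seeding strategy of Arthur and Vassilvitskii (2007). As a minimal sketch (not the paper's implementation; the distance metric, data, and seed are illustrative assumptions), the seeding can be written as:

```python
import random

def kmeans_pp_seeds(points, k, rng=None):
    """K-Means++ seeding: pick the first center uniformly at random,
    then sample each subsequent center with probability proportional
    to its squared distance to the nearest already-chosen center."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility (assumption)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centers = [rng.choice(points)]
    while len(centers) < k:
        # Weight each point by its squared distance to the nearest center;
        # already-chosen centers get weight 0 and cannot be picked again.
        weights = [min(sq_dist(p, c) for c in centers) for p in points]
        centers.append(rng.choices(points, weights=weights, k=1)[0])
    return centers

data = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
seeds = kmeans_pp_seeds(data, k=2)
```

The squared-distance weighting is what makes the seeding spread initial centers across well-separated groups, which is the property that lets the subsequent K-Means iterations converge quickly on large data.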

1. Introduction

In this era of big data, the analysis performed by traditional data mining techniques has changed tremendously. According to Gartner (2015) and Laney (2001), big data is an abstract concept described in terms of three V's, namely high volume, velocity, and variety of information, along with two more recent V's, variability and value, for obtaining meaningful insights for effective, efficient, and cost-effective yet innovative decision making. Although opinions differ on what big data actually is, the general perception is that it is data that cannot be perceived, acquired, managed, or processed by traditional methods within an acceptable time limit. However, many contradict this, arguing that this notion of big data mainly benefits the giant software industries while neglecting the basic requirements of a typical user. Crotty et al. (2015) argue that big data is often exaggerated and that, for most users, the physical size of the data is rarely an issue. They also note that large companies such as Facebook, Google, and Yahoo! rarely run analytic jobs above 100 GB (Rowstron, Narayanan, Donnelly et al., 2012), and that Cloudera customers work with only a few TB of data (Chen, Alspaugh, & Katz, 2012). Even though Big Data is a hot area of research, it is not free of controversies (Boyd & Crawford, 2012). These include, but are not limited to (Fan & Bifet, 2012): (i) big data analytics is the same as traditional data analytics, since data grow continuously over time; (ii) during real-time analysis, data recency matters more than size; and (iii) big data are not always the best data for analysis.
Such data may contain noise; Twitter data, for instance, cannot be used to count the global population. Finally, and most importantly, big data management companies try to sell their products by creating hype among users, which in due course may not lead to the best programming choice for MapReduce- or Hadoop-based systems. Despite the many contradictions in the definition of big data, research in this direction must continue, dealing with the extraction, usefulness, and transformation of a bag of data into "Big Data" (Chen, Mao, & Liu, 2014).

1.1. Motivation

The main motivation arises from the limits of the hardware on which machine learning algorithms run: a large dataset may be reduced to an extent that allows an algorithm to perform efficiently in terms of both speed and accuracy. Data reduction techniques compare favorably with full iteration over the big dataset, since data are reduced with no loss of information and, most importantly, the reduced dataset fits into memory, which greatly enhances the algorithm's performance. Clustering algorithms fit this scenario well for analyzing the dataset effectively.
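The data reduction idea above is what canopy clustering provides: a cheap single pass that partitions the data into overlapping "canopies", so that an expensive algorithm need only run within each canopy. A minimal sketch of the classic procedure (McCallum, Nigam, & Ungar, 2000) follows; the Euclidean distance and the thresholds T1 > T2 are illustrative assumptions, not the paper's exact configuration:

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def canopy_clustering(points, t1, t2, rng=None):
    """Canopy clustering with loose threshold t1 and tight threshold t2
    (t1 > t2). A random candidate becomes a canopy center; points within
    t1 join its canopy (canopies may overlap), and points within t2 are
    removed from further consideration. Repeats until no candidates remain."""
    assert t1 > t2
    rng = rng or random.Random(0)  # fixed seed for reproducibility (assumption)
    candidates = list(points)
    canopies = []
    while candidates:
        center = candidates.pop(rng.randrange(len(candidates)))
        members = [center]
        remaining = []
        for p in candidates:
            d = euclidean(center, p)
            if d < t1:
                members.append(p)   # loosely close: joins this canopy
            if d >= t2:
                remaining.append(p) # not tightly close: may seed/join others
        candidates = remaining
        canopies.append((center, members))
    return canopies

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9), (10.0, 0.0)]
canopies = canopy_clustering(data, t1=2.0, t2=1.0)
```

Because the cheap distance check touches each point only a few times, the reduced per-canopy subproblems fit in memory, which is exactly the pre-clustering role the approach assigns to Canopy before K-Means++ refines the centers.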
