HDAC High-Dimensional Data Aggregation Control Algorithm for Big Data in Wireless Sensor Networks

Zeyu Sun, Xiaohui Ji
DOI: 10.4018/IJITWE.2017100105

Abstract

Processing high-dimensional data is an active research area in data mining. Because high-dimensional data are sparse, high-dimensional spaces differ markedly from low-dimensional spaces, especially with respect to data processing: many algorithms that are well established in low-dimensional spaces fail to achieve the expected effect in high dimensions, or cannot be applied there at all. This paper therefore proposes a High-dimensional Data Aggregation Control algorithm for big data (HDAC). The algorithm first uses an information criterion to eliminate dimensions that do not meet the specified requirements, and then applies principal component analysis to the remaining dimensions, keeping the cost of dimensionality reduction as low as possible. During data aggregation, a self-adaptive aggregation mechanism is used to reduce network delay. Simulations show that the algorithm improves node energy consumption, the data post-back rate, and data delay.
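The two-stage reduction described in the abstract (filter out dimensions that fail an information criterion, then apply principal component analysis to the rest) can be sketched as follows. This is a minimal illustration, not the paper's exact method: the variance threshold stands in for the unspecified information criterion, and all names and parameters are assumptions.

```python
import numpy as np

def filter_dimensions(X, min_var=1e-3):
    # Stage 1 (assumed criterion): drop near-constant dimensions that
    # carry little information. HDAC's actual test may differ.
    keep = X.var(axis=0) >= min_var
    return X[:, keep]

def pca_reduce(X, k):
    # Stage 2: center the data and project onto the top-k principal
    # components obtained from the SVD of the centered matrix.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[:, 5] = 0.0  # one dimension carries no information and is filtered out
Y = pca_reduce(filter_dimensions(X), k=3)
```

Running the cheap filter first shrinks the matrix handed to the SVD, which is the point the abstract makes about minimizing the cost of dimensionality reduction.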
Article Preview

Introduction

A wireless sensor network posts information back in a data-centric manner (Stocker et al., 2014), and the energy consumed by node communication is orders of magnitude greater than that consumed by data computation (Pantelopoulos et al., 2010). The data acquired by a single sensor node means little to the sink nodes on its own: users care not about which sensor node acquired a given datum, but about whether there is a problem inside the monitored objects (Heinzelman et al., 2002). While sensor nodes are acquiring data, it is difficult to spot abnormal events within the big data; it is equally difficult to define what counts as abnormal data, and the sparsity of high-dimensional data makes discovering abnormal data harder still (Lu et al., 2012). To address these problems, high-dimensional data aggregation is used to control data classification and aggregation: local nodes combine their own information with information sent by other nodes to process data inside the network, eliminate redundant information, and then upload the result (Yousefi et al., 2012). Data aggregation reduces energy consumption, and the timing of aggregation plays an important role in the process. Once the sink nodes for data post-back have been determined, the sensor nodes need to know how long to wait and what kind of information to process; only when the aggregation timing is right can accurate data be delivered to the sink nodes with small delay.
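The trade-off between aggregation gain and delay described above can be sketched as a node that buffers readings and forwards an aggregate when either enough readings have arrived or a deadline expires. This is an illustrative sketch of the general idea, not HDAC's self-adaptive mechanism; the class name, thresholds, and the choice of averaging as the aggregation function are all assumptions.

```python
class AdaptiveAggregator:
    """Buffer readings at a node; flush when the batch fills or a deadline passes."""

    def __init__(self, max_batch=8, max_wait=0.5):
        self.max_batch = max_batch    # flush after this many readings
        self.max_wait = max_wait      # or after this many seconds of waiting
        self.buffer = []
        self.first_arrival = None

    def receive(self, reading, now):
        # Start the delay clock when the first reading of a batch arrives.
        if not self.buffer:
            self.first_arrival = now
        self.buffer.append(reading)
        # Flush on whichever trigger fires first: batch size or deadline.
        if (len(self.buffer) >= self.max_batch
                or now - self.first_arrival >= self.max_wait):
            return self.flush()
        return None

    def flush(self):
        # Aggregate by averaging; a deployment might use min, max, or sum.
        out = sum(self.buffer) / len(self.buffer)
        self.buffer.clear()
        return out
```

The `max_wait` deadline bounds the delay a reading can suffer, which reflects the paragraph's point that aggregation timing must be chosen so that accurate data still reaches the sink within a small delay.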
