Aras: A Method with Uniform Distributed Dataset to Solve Data Warehouse Problems for Big Data


Mohammadhossein Barkhordari (Information and Communication Technology Research Center, Advance Information System Research Group, Tehran, Iran) and Mahdi Niamanesh (Information and Communication Technology Research Center, Advance Information System Research Group, Tehran, Iran)
Copyright: © 2017 |Pages: 14
DOI: 10.4018/IJDST.2017040104

Abstract

Because of the high rate of data growth and the need for data analysis, data warehouse management for big data is an important issue. Single-node solutions cannot manage such large amounts of information, so information must be distributed over multiple hardware nodes. However, data distribution means that each node may need data from other nodes to execute a query. This data exchange among nodes creates problems such as joins between data segments that reside on different nodes, network congestion, and hardware nodes waiting for data reception. In this paper, the Aras method is proposed. Aras is a MapReduce-based method that introduces a uniform distributed dataset on each mapper. With this method, each mapper node can execute its query independently, without exchanging data with other nodes. This node independence solves the aforementioned data distribution problems. The proposed method has been compared with prominent big data warehouses, and the Aras query execution time was much lower than that of the other methods.

Introduction

Storing historical data for analysis is an important requirement. In recent decades, the data warehouse has been used to achieve data analysis goals; it is the foundation for analytic reports, data mining, and business intelligence, which play a major role in decision making in organizations and companies. The volume of data from various sources, such as transaction processing systems, management information systems, sensors, and devices, is growing exponentially, which complicates data storage and retrieval in the data warehouse and leads to bottlenecks. Data characterized by high volume, variety, and velocity are known as big data. Managing big data requires a paradigm shift in data storage and hardware architecture. In legacy systems, data were stored and retrieved on a single hardware node, and all data warehouse components (dimensions, measures, and the fact table) resided on the same node. Today, because of the huge amount of data, distributing data over multiple hardware nodes is inevitable.

Different architectures are used for data storage and retrieval over hardware nodes, such as shared memory and shared nothing. In a shared-memory architecture, each hardware node has its own central processing unit (CPU) and temporary memory (RAM), but permanent storage (HDD) is shared among nodes. Some database management systems (DBMSs) use this architecture, storing data on a storage area network (SAN). This architecture has significant problems: in addition to requiring complex configuration of the hardware nodes, it has scalability limitations that make it unusable for big data. The other architecture is shared nothing, in which each node has its own CPU, RAM, and HDD. MapReduce (Dean et al., 2008) is a programming model used with shared-nothing architectures. It is appropriate for large hardware clusters, but it has its own problems, of which the most important are as follows:
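The MapReduce flow described above can be sketched minimally as follows. This is an illustrative toy, not the Aras implementation: each "node" maps its local partition, and the shuffle step is where the cross-node data exchange happens (all names here are hypothetical).

```python
from collections import defaultdict

def map_phase(partition):
    # Emit (key, value) pairs from one node's local data fragment.
    for customer_id, amount in partition:
        yield customer_id, amount

def shuffle(mapped_outputs):
    # Network exchange: group values by key across all mapper outputs.
    groups = defaultdict(list)
    for output in mapped_outputs:
        for key, value in output:
            groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate the values gathered for each key.
    return {key: sum(values) for key, values in groups.items()}

# Two hardware nodes, each holding a fragment of the transactions.
node_a = [("c1", 10), ("c2", 5)]
node_b = [("c1", 7), ("c3", 3)]

result = reduce_phase(shuffle([map_phase(node_a), map_phase(node_b)]))
```

Key `"c1"` needed records from both nodes, so its values had to cross the network during the shuffle; this traffic is the root of the shared-nothing problems listed above.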

  • Joins among data segments spread over hardware nodes;

  • Network congestion;

  • Hardware usage inefficiency;

  • Changing dimensions and measures.

When a data warehouse is fragmented over nodes, the first and most important problem is joins among data segments. For example, information about customers and their transactions may be allocated to different nodes; therefore, to produce a query result, information must be retrieved from several nodes. To join m data segments (tables) whose data are distributed over n hardware nodes, the execution time is given by Formula 1.
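The customer/transaction example can be made concrete with a reduce-side join sketch (a standard MapReduce join technique; names are hypothetical, not taken from the paper). Rows from both tables are tagged and shipped keyed by customer ID, so every matching pair crosses the network once before the join can run:

```python
from collections import defaultdict

# Node 1 holds a fragment of the customer table; node 2 holds a
# fragment of the transaction table.
customers_node = [("c1", "Alice"), ("c2", "Bob")]
transactions_node = [("c1", 100), ("c1", 40), ("c2", 9)]

def tag_and_send(rows, tag):
    # Each node tags its rows and "sends" them keyed by customer_id.
    return [(key, (tag, payload)) for key, payload in rows]

# Shuffle: gather all tagged rows for each customer_id in one place.
shuffled = defaultdict(list)
for key, tagged in tag_and_send(customers_node, "C") + tag_and_send(transactions_node, "T"):
    shuffled[key].append(tagged)

# Join: pair customer rows with transaction rows sharing the same key.
joined = []
for key, tagged_rows in shuffled.items():
    names = [p for t, p in tagged_rows if t == "C"]
    amounts = [p for t, p in tagged_rows if t == "T"]
    for name in names:
        for amount in amounts:
            joined.append((key, name, amount))
```

With m tables distributed over n nodes, this shuffle traffic repeats for every join step, which is what the cost terms in Formula 1 capture.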

  • Formula 1: Distributed data warehouse (DDW) retrieval time for a shared-nothing architecture:

$$T_{DDW} = \sum_{i=1}^{n}\left[\left(t_{rl}^{\,i} + t_{jl}^{\,i}\right) + \left(t_{sn}^{\,i} + t_{rn}^{\,i} + t_{jn}^{\,i}\right) + \left(t_{rd}^{\,i} + t_{jd}^{\,i} + t_{so}^{\,i}\right)\right]$$

In Formula 1, there are two types of execution time: the time each node spends retrieving its own data, and the time spent exchanging data with other nodes. Here $t_{rl}$ is the time required to retrieve data from the local table, $t_{jl}$ is the time required to join those results with the next local table, $t_{sn}$ is the time required to send the retrieved results to other nodes over the network, $t_{rn}$ is the time required to receive the other nodes' retrieved results over the network, $t_{jn}$ is the time required to join the received results with the local table, $t_{rd}$ is the time required to receive data from other nodes, $t_{jd}$ is the time required to join the received data with the local table, and $t_{so}$ is the time required to send the results to the owner node over the network.
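The cost structure of Formula 1 can be sketched as a simple per-node model (symbol names and values are illustrative assumptions, not the paper's measurements): local retrieve-and-join time plus the cross-node send, receive, join, and result-return terms, summed over the nodes.

```python
def ddw_retrieve_time(n_nodes, t_retrieve_local, t_join_local,
                      t_send, t_recv, t_join_remote, t_send_owner):
    # Local work: retrieve from the local table and join locally.
    local = t_retrieve_local + t_join_local
    # Cross-node work: ship results out, pull remote results in,
    # join them with the local table, and return results to the owner.
    exchange = t_send + t_recv + t_join_remote + t_send_owner
    # Assume identical per-node costs for simplicity.
    return n_nodes * (local + exchange)

# With 4 nodes and illustrative unit costs, the exchange terms (9.0)
# dominate the local terms (3.0) per node.
total = ddw_retrieve_time(4, t_retrieve_local=2.0, t_join_local=1.0,
                          t_send=3.0, t_recv=3.0, t_join_remote=2.0,
                          t_send_owner=1.0)
```

The exchange terms growing with the number of nodes is precisely the motivation for the node independence that Aras proposes: if each mapper can answer its query locally, those terms drop out.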
