Introduction
Storing historical data for analysis is an important requirement. In recent decades, the data warehouse has been used to achieve data analysis goals; it is the foundation for analytic reports, data mining and business intelligence, which play a major role in decision making in organizations and companies. The volume of data from various data sources such as transaction processing systems, management information systems, sensors and devices is growing exponentially, which complicates data storage and retrieval in the data warehouse and leads to bottlenecks. Data with high volume, variety and velocity is known as big data. Managing big data requires a paradigm shift in data storage and hardware architecture. In legacy systems, data were stored and retrieved on a single hardware node, and all data warehouse components (dimensions, measures and the fact table) resided on the same node. Today, because of the huge amount of data, distributing data over hardware nodes is inevitable.
Different architectures are used to store and retrieve data over hardware nodes, such as shared memory and shared nothing. In a shared memory architecture, each hardware node has its own CPU (central processing unit) and temporary memory (RAM), but permanent memory (HDD) is shared among the nodes. Some database management systems (DBMS) use this architecture and store data on a storage area network (SAN). This architecture has prominent problems: besides requiring complex configuration of the hardware nodes, it is limited in how many nodes it can accommodate, which makes it unusable for big data. Another architecture is shared nothing, in which each node has its own CPU, RAM and HDD. MapReduce (Dean et al., 2008) is a programming model used with shared nothing architectures; a minimal sketch of the style is given after the following list. This model is appropriate for large hardware clusters, but the architecture has its own problems, of which some important ones are as follows:
- Joins among data segments over hardware nodes;
- Network congestion;
- Hardware usage inefficiency;
- Changing dimensions and measures.
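To make the MapReduce style concrete before discussing these problems, the following minimal sketch runs the map, shuffle and reduce phases of a word count on a single machine; the function names and sample records are illustrative assumptions, and a real framework would spread these phases across the shared nothing nodes.

```python
# Minimal sketch of the MapReduce programming style (illustrative only; a real
# framework distributes these phases across shared nothing nodes).
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Map: emit (key, value) pairs, here one (word, 1) pair per word."""
    return [(word, 1) for word in record.split()]

def shuffle_phase(pairs):
    """Shuffle: group all values by key, as the framework would do over the network."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: aggregate the values of one key, here a simple sum."""
    return key, sum(values)

records = ["big data needs distribution", "big clusters need big data tools"]
mapped = chain.from_iterable(map_phase(r) for r in records)
result = dict(reduce_phase(k, v) for k, v in shuffle_phase(mapped).items())
print(result)  # e.g. {'big': 3, 'data': 2, ...}
```

In a cluster, the shuffle step is where intermediate pairs move between nodes, which is the source of the join and network congestion problems listed above.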
When a data warehouse is fragmented over nodes, the first and most important problem that appears is joining data segments across nodes. For example, if information about customers and their transactions is allocated to different nodes, the query result can only be obtained by retrieving information from those different nodes. To join m data segments (tables) whose data are distributed over n hardware nodes, the execution time is determined as in Formula 1.
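One plausible way of writing Formula 1, assuming its cost terms (defined after the formula) are simply summed over the m tables and the other n − 1 nodes, is the following sketch:

```latex
% A sketch of one possible form of Formula 1. The grouping of the terms and the
% index ranges are assumptions; the symbols are defined in the list that follows.
T_{join} \;=\; \underbrace{\sum_{i=1}^{m}\bigl(t_{rl} + t_{jl}\bigr)
          \;+\; \sum_{j=1}^{n-1}\bigl(t_{s} + t_{r} + t_{jr}\bigr)}_{\text{first type: owner node}}
          \;+\; \underbrace{\sum_{j=1}^{n-1}\bigl(t'_{r} + t'_{j} + t'_{s}\bigr)}_{\text{second type: other nodes}}
```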
In Formula 1, there are two types of execution time. The first type is the time spent retrieving data for each node, and the second type is the time spent retrieving data for the other nodes, where

- $t_{rl}$ is the required time to retrieve the data from the local table,
- $t_{jl}$ is the required time to join the results from the local table with the next local table,
- $t_{s}$ is the required time to send the retrieved results to the other nodes over the network,
- $t_{r}$ is the required time to receive the data retrieved by the other nodes over the network,
- $t_{jr}$ is the required time to join the results received from the other nodes with the local table,
- $t'_{r}$ is the required time for another node to receive the data from the other nodes,
- $t'_{j}$ is the required time for another node to join the received data with its local table, and
- $t'_{s}$ is the required time to send the results to the owner node over the network.
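As a rough illustration of where these costs come from, the sketch below simulates a naive repartition join of customer and transaction fragments held by two simulated nodes and counts the rows that would have to cross the network; the fragment contents, node count and hashing scheme are assumptions made only for this example.

```python
# Illustrative sketch: a naive repartition (hash) join over fragmented tables.
# Each "node" holds a fragment of customers and transactions; rows are re-sent
# to the node chosen by hashing the join key, then joined locally.
from collections import defaultdict

# Hypothetical fragments: node -> list of rows (assumed data, for illustration only).
customer_fragments = {
    0: [(1, "Alice"), (4, "Dina")],
    1: [(2, "Bob"), (3, "Carol")],
}
transaction_fragments = {
    0: [(3, 250.0), (1, 99.9)],
    1: [(2, 10.0), (1, 42.5)],
}
NUM_NODES = 2

def repartition(fragments):
    """Send every row to the node that owns its join key (key % NUM_NODES)."""
    shipped = defaultdict(list)
    network_rows = 0
    for origin_node, rows in fragments.items():
        for row in rows:
            target = row[0] % NUM_NODES
            if target != origin_node:
                network_rows += 1          # this row crosses the network
            shipped[target].append(row)
    return shipped, network_rows

customers, c_sent = repartition(customer_fragments)
transactions, t_sent = repartition(transaction_fragments)

# Local join on each node after repartitioning.
results = []
for node in range(NUM_NODES):
    names = dict(customers[node])
    for key, amount in transactions[node]:
        if key in names:
            results.append((names[key], amount))

print(results)                      # joined (customer, amount) pairs
print(c_sent + t_sent, "rows shipped over the network")
```

Even in this tiny example, most rows are shipped between nodes before any local joining can start, which is the behaviour that the send, receive and remote join terms above account for.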