Big Data Architecture: Storage and Computation

Siddhartha Duggirala
DOI: 10.4018/978-1-4666-5864-6.ch006

Abstract

With the unprecedented increase in data sources, several questions remain open: how to collect data efficiently, effectively, and elegantly; how to store it securely and safely; and how to leverage that stored, polished, and maintained data so that industry experts can plan ahead, make informed decisions, and execute them knowledgeably. This chapter clarifies these pertinent questions and related issues.

Introduction

We live in the age of data. Eric Schmidt famously said in 2010 that every two days we create as much data as was created in total from the beginning of written history through 2003. With the proliferation of mobile devices, sensors, search logs, online searches, and digital social lives, we generate about 2,200 petabytes of data every day (Kirkpatrick, R., 2013).

Google, Amazon, Facebook, Twitter, Foursquare, McDonalds, and many other companies have built and enriched their empires using the data we generate (Kohavi, R., 2009).

  • That being said, what is data? Data is a collection of facts, opinions, and responses.

  • Is Big Data (or “extreme data,” as some people like to call it) nothing but a hyped version of normal data? The major distinction comes from the three V’s that characterize Big Data: Volume (petabytes per day), Variety (structured data such as RDBMS tables, and unstructured data such as search logs, tweets, images, videos, et cetera), and Velocity (real-time capture). While traditional data mainly sits in an RDBMS, Big Data also encompasses domains of data storage beyond normal structured data.

  • A simple definition of Big Data would be “a massive volume of both structured and unstructured data that is so large that it is difficult to process with traditional database and software techniques” (Big Data: New frontiers of IT Management).

  • Why do we have to store and analyze this data anyway? Simply put, there is a lot of potential in data: when it is observed carefully, analysts can create world-class wonders. There has been a lot of research on this, and these are a few reports you can go through (Bryant, R.E., 2008; Manyika, Brown, 2011).

Since we have answered the questions “Why Big Data?” and “What is Big Data?”, let us answer the most relevant basic question: how do we store and leverage Big Data? Let us start by agreeing that Big Data is not just data growth, nor is it a single technology; rather, it is a set of processes and technologies that can crunch through substantial data sets quickly to make complex, often real-time decisions. In the next section, we will study the technological and technical advancements that fueled the Big Data phenomenon.

In the third section, we will move on to Hadoop, Sector-Sphere, and various other software frameworks that enable us to compute at Big Data scale.


Technical and Technological Advancements

There are many ways to store and analyze data. Let us look at the technologies that enabled us to analyze data at scale, and at the kinds of analyses that are predominantly used on Big Data.

A/B Testing

A/B testing, as the name suggests, compares two versions, A and B, to decide which one performs better. The two versions are run simultaneously in a controlled experiment, and at the end we select the version that is more successful (Brain, 2012).
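The chapter does not prescribe a particular statistical test, but a minimal sketch of how one might judge "more successful" is shown below, using a two-proportion z-test on conversion counts; the function name and the sample numbers are illustrative assumptions, not taken from the source.

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se                               # z statistic
    p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided normal tail
    return p_a, p_b, z, p_value

# Hypothetical counts: version A converts 120 of 2400 users, version B 156 of 2400.
p_a, p_b, z, p = ab_test_p_value(120, 2400, 156, 2400)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.4f}")
```

A small p-value would suggest the difference between the versions is unlikely to be due to chance alone.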

Association Rule Learning

A set of techniques for discovering interesting patterns and relationships among variables in large databases.
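As a minimal sketch of the idea, the snippet below computes the support and confidence of item-pair rules over a toy set of market-basket transactions; the transactions themselves are hypothetical, and real systems use algorithms such as Apriori over far larger data.

```python
from itertools import combinations
from collections import Counter

# Hypothetical market-basket transactions (illustrative toy data).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"beer", "milk"},
    {"bread", "butter", "milk"},
]

n = len(transactions)
item_counts = Counter(item for t in transactions for item in t)
pair_counts = Counter(frozenset(p) for t in transactions
                      for p in combinations(sorted(t), 2))

for pair, count in pair_counts.items():
    a, b = tuple(pair)
    support = count / n                       # fraction of baskets with both items
    confidence = count / item_counts[a]       # of baskets with a, fraction also with b
    print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```

Rules with high support and confidence (for example, "bread -> milk" here) are the "interesting patterns" the technique surfaces.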

Beowulf Cluster

The project started in the mid-1990s as a cluster of 16 Intel DX4 processors connected by channel-bonded Ethernet links. The cluster is organized as a hierarchical parent-children structure. A client submits jobs to the parent node, which hands the jobs and data over to the child nodes for processing; the children send their output back to the parent node, which aggregates it, does some further processing, and returns the final output to the client. Writing the programs for the parent and child nodes can get a little tricky, as the sketch after this paragraph suggests.
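The following is a minimal single-machine sketch of that scatter-gather pattern, with worker processes standing in for the child nodes; on a real Beowulf cluster the same structure would typically run over MPI or PVM across physical machines, and the partial-sum workload is purely illustrative.

```python
from multiprocessing import Pool

def child_work(chunk):
    """Work done on a 'child node': here, a hypothetical partial sum of squares."""
    return sum(x * x for x in chunk)

def parent(data, n_children=4):
    """'Parent node': scatter chunks to children, gather and aggregate results."""
    chunks = [data[i::n_children] for i in range(n_children)]
    with Pool(n_children) as pool:           # worker processes stand in for child nodes
        partials = pool.map(child_work, chunks)
    return sum(partials)                     # parent-side aggregation / further processing

if __name__ == "__main__":
    print(parent(list(range(1_000_000))))
```

The tricky part the text alludes to is exactly this split: the parent must partition the data and combine the results, while each child only ever sees its own chunk.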

Classification

A set of supervised-learning techniques in which we assign new data points to classes, based on training data points and their known classes.
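As one concrete instance of the idea, here is a minimal nearest-neighbor classifier: a new point is assigned the class of the closest training point. The training data and labels are hypothetical, and real classifiers (decision trees, SVMs, and so on) are far more elaborate.

```python
import math

def nearest_neighbor(train, new_point):
    """Assign new_point the class of its closest training point (1-NN)."""
    point, label = min(train, key=lambda pl: math.dist(pl[0], new_point))
    return label

# Hypothetical training data: (feature vector, class label) pairs.
train = [((1.0, 1.0), "small"), ((1.2, 0.9), "small"),
         ((8.0, 9.0), "large"), ((9.1, 8.5), "large")]

print(nearest_neighbor(train, (1.1, 1.3)))   # -> "small"
print(nearest_neighbor(train, (8.5, 9.2)))   # -> "large"
```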

Key Terms in this Chapter

Computation: Any kind of calculation, thereby processing some information.

Adaptability: The ability of a system to change or be changed in order to fit, or even work better in, some situation or for some purpose.

Architecture: An approach to presenting structures at a macro level. The structure masks all the detailed operational issues from the common user of the services that the structure provides.

Scalability: A characteristic of a system, model, or function that describes its capability to cope and perform under an increased or expanding workload.

Storage: A method or action of retaining data for future use; the maintenance or retention of retrievable data on a computer or any other form of electronic system or memory.

Database: A systematically organized or structured repository of indexed information that allows easy retrieval, updating, analysis, and output of data.
