Big-Data-Based Architectures and Techniques: Big Data Reference Architecture

Gopala Krishna Behara
Copyright: © 2019 | Pages: 32
DOI: 10.4018/978-1-5225-6210-8.ch002

Abstract

This chapter covers the essentials of big data analytics ecosystems primarily from the business and technology context. It delivers insight into key concepts and terminology that define the essence of big data and the promise it holds to deliver sophisticated business insights. The various characteristics that distinguish big data datasets are articulated. It also describes the conceptual and logical reference architecture to manage a huge volume of data generated by various data sources of an enterprise. It also covers drivers, opportunities, and benefits of big data analytics implementation applicable to the real world.
Chapter Preview

Introduction

In the Information Age, we are overwhelmed with data and with ways to store, process, analyze, interpret, consume, and act upon it. The term Big Data is vague and ill-defined: the word “big” is too generic, and how “big” counts as big and how “small” counts as small (Smith, 2013) is relative to time, space, and circumstance. The size of Big Data is always evolving; its volume now lies somewhere in the terabyte (TB) to zettabyte (ZB) range. Big Data grows out of the explosion of data from the Internet, the cloud, data centers, mobile devices, the Internet of Things, sensors, and other domains that possess and process huge datasets. Cisco claimed that humanity entered the ZB era in 2015 (Cisco, 2017).

According to 2018 social media statistics, Facebook claimed that over 300 million photos are uploaded to its platform every day (Nowak & Spiller, 2017). On average, 300 hours of video are uploaded to YouTube every minute (YouTube, 2017). Approximately 42 billion texts are sent and 1.6 billion photos are shared through WhatsApp daily (Stout, 2018). Since 2005, business investment in hardware, software, talent, and services has increased by as much as 50 percent, to $4 trillion (Rijmenam, 2018).

In 2005, Roger Mougalas of O’Reilly Media coined the term Big Data for the first time, referring to sets of data that are almost impossible to manage and process using traditional business intelligence tools. In the same year, Yahoo created Hadoop, built on top of Google’s MapReduce, with the goal of indexing the entire World Wide Web (Rijmenam, 2018).

In 2009, the Indian government decided to take an iris scan, fingerprints, and a photograph of every one of its 1.2 billion inhabitants, storing all of this data in the largest biometric database in the world (Chandra, 2018).

In 2010, at the Techonomy conference, Eric Schmidt stated, “There were 5 exabytes of information created by the entire world between the dawn of civilization and 2003. Now that same amount is created every two days” (Schmidt, 2010).

In 2011, McKinsey released its report Big Data: The Next Frontier for Innovation, Competition, and Productivity, which projected that by 2018 the USA alone would face a shortage of 140,000 to 190,000 data scientists as well as 1.5 million data managers (Manyika, 2011).

Another detailed review of Big Data was contributed by Visualizing.org (Hewlett Packard Enterprise, 2017). It focuses on the timeline of Big Data Analytics, a history driven mainly by the Big Data push of internet and IT companies such as Google, YouTube, Yahoo, Facebook, Twitter, and Apple, and it emphasizes the significant impact of Hadoop on the history of Big Data Analytics.

In the past few years, there has been a massive increase in Big Data startups that help organizations understand and deal with Big Data, and more and more companies are steadily adopting it.

Figure 1 shows the history of Big Data and its ecosystem.

Figure 1. History of Big Data

Key Terms in this Chapter

Cloud Computing: Cloud computing is an ICT sourcing and delivery model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Unstructured Data: The term unstructured data refers to any data that has little identifiable structure. Images, videos, email, documents, and text fall into the category of unstructured data.

ETL: Extract, transform, load.
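
As an illustration of the pattern, the following minimal Python sketch extracts records from a CSV file, transforms them, and loads them into a SQLite table. The file name, column names, and table name are illustrative assumptions, not details from the chapter.

    import csv
    import sqlite3

    # Minimal ETL sketch: extract rows from a CSV source, transform them,
    # and load them into a SQLite table. "sales.csv", its columns, and the
    # "sales" table are hypothetical examples.

    def extract(path):
        # Extract: read raw records from the CSV source.
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        # Transform: normalize fields and coerce types.
        for row in rows:
            yield {
                "name": row["name"].strip().title(),
                "amount": round(float(row["amount"]), 2),
            }

    def load(rows, db_path="warehouse.db"):
        # Load: write the cleaned records into the target store.
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
        con.executemany(
            "INSERT INTO sales (name, amount) VALUES (:name, :amount)", rows
        )
        con.commit()
        con.close()

    if __name__ == "__main__":
        load(transform(extract("sales.csv")))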

OLAP: Online analytical processing.
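
The following minimal Python sketch, using only the standard library, illustrates the kind of roll-up an OLAP engine performs over a cube: summing a measure grouped by chosen dimensions. The sample fact records and dimension names are illustrative assumptions.

    from collections import defaultdict

    # Minimal OLAP-style roll-up: aggregate a fact table along chosen
    # dimensions. The records below are hypothetical sample data.
    facts = [
        {"region": "East", "year": 2018, "product": "A", "revenue": 100.0},
        {"region": "East", "year": 2019, "product": "B", "revenue": 150.0},
        {"region": "West", "year": 2018, "product": "A", "revenue": 120.0},
        {"region": "West", "year": 2019, "product": "A", "revenue": 130.0},
    ]

    def roll_up(rows, dimensions, measure):
        # Sum the measure grouped by the requested dimension columns.
        totals = defaultdict(float)
        for row in rows:
            key = tuple(row[d] for d in dimensions)
            totals[key] += row[measure]
        return dict(totals)

    # The same facts can be viewed at different levels of aggregation.
    print(roll_up(facts, ["region"], "revenue"))
    print(roll_up(facts, ["region", "year"], "revenue"))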

Open Data: Data that meets the following criteria: accessible (ideally via the internet) at no more than the cost of reproduction, without limitations based on user identity or intent; provided in a digital, machine-readable format for interoperation with other data; and free of restrictions on use or redistribution in its licensing conditions.

Structured Data: The term structured data refers to data that is identifiable and organized in a structured way. The most common form of structured data is a database, where specific information is stored in rows and columns. Structured data is machine readable and also efficiently organized for human readers.

Data Exhaust: Data exhaust (or digital exhaust) refers to the by-products of human usage of the internet, including structured and unstructured data, especially in relation to past interactions.
