Examining Big Data Management Techniques for Cloud-Based IoT Systems


Jai Prakash Bhati, Dimpal Tomar, Satvik Vats
DOI: 10.4018/978-1-5225-3445-7.ch009

Abstract

This chapter provides an insight into big data, its technical background, and how the need for it has arisen globally. The evolution of Cloud technology has provided a favorable environment for IoT to nurture and flourish, creating an exponential increase in the amount of data generated. The Cloud makes this vast data accessible from anywhere on the globe, but that availability has also created challenges for organizations in managing big data efficiently. The chapter discusses the key concepts and the technical and architectural principles of big data technologies that help address the challenges of managing big data generated by IoT in the Cloud environment, and it identifies important research directions in this area.

1. Introduction

The world is inundated with data. Across a wide range of application areas, data is being gathered at an extraordinary scale. For instance, Walmart handles every customer transaction and imports those transactions into databases estimated to hold more than 2.5 petabytes of data. Facebook, another popular social site, handles 250 million photo uploads each day and the interactions of more than 800 million users with more than 900 million objects. More than 5 billion people are calling, tweeting, browsing, and texting on mobile phones. This explosion of data is the after-effect of a dramatic rise in devices situated at the edge of networks, including sensors, cell phones, and tablet PCs. The greater part of this data creates new prospects to find value in human genomics, healthcare, oil and gas, finance, search, surveillance, and numerous other areas. The world is thus entering the era of “big data”: digital data generated from enormous, disparate sources (tweets, images, and messages uploaded to social media; banking transactions; stock exchange transactions; and so on) is flooding the planet. “Big Data” is emerging as a new realm of technology that everyone is thinking about and adopting. What makes it so attractive from a business point of view is its distinct approach and new techniques for dealing with the vast data frequently produced from various distinct sources. The traditional approaches to data analysis, long in use, were based on statistics: an approximate measurement of a population is obtained through sampling. Big data solutions, by contrast, add new approaches and techniques for processing huge data sets in their entirety.
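The contrast between sampling-based estimation and whole-data processing can be sketched in a few lines; this is a minimal illustration with an invented dataset and sample size, not a description of any production system:

```python
import random

# Hypothetical population: one million transaction amounts (in dollars).
random.seed(42)
population = [random.uniform(1, 500) for _ in range(1_000_000)]

# Traditional statistical approach: estimate the mean from a small sample.
sample = random.sample(population, 1_000)
sample_mean = sum(sample) / len(sample)

# Big data approach: compute over the entire dataset. Done trivially here;
# at real scale this pass would be distributed across a cluster
# (e.g. via a MapReduce-style framework).
true_mean = sum(population) / len(population)

print(f"sample estimate: {sample_mean:.2f}")
print(f"full-data value: {true_mean:.2f}")
```

The sample gives a fast approximation; the full pass gives the exact answer, at the cost of touching every record — which is precisely the workload big data platforms are designed to make feasible.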

1.1 Big Data, “Big Thing”

The results of information technology are easy to see: a mobile phone in every pocket, a PC in every backpack, and huge IT infrastructure in workplaces all around. Less recognizable, however, is the data itself. The evolution of data is not new; it continues a pattern that began in the 1970s. What has changed is the velocity and diversity of data. The result is a large volume of structured and unstructured data, for which the term “big data” was coined. There is, however, no rigid definition of Big Data. Big Data is a blanket term describing a situation in which a large volume of data, originating very frequently from disparate sources, must be stored, processed, and analyzed. For a data set to be viewed as big data, it must have at least one attribute that demands a purpose-built solution design and architecture for analysis. The traits of Big Data help to determine which data, out of the massive amount available, is actually “Big”. These characteristics are generally introduced as the five “Vs of Big Data” – volume, velocity, variety, veracity, and value (Thomas Erl, et al, 2015).

  • Volume: Big data solutions process a large volume of data that is ever growing. Massive data volume forces the adoption of specific data storage and processing approaches. The ability to process the anticipated volume of data is the main reason attention is drawn toward big data analytics; the Library of Congress is one example.

  • Velocity: In big data environments, velocity refers to the increasing rate at which data flows into an environment, and it has followed a pattern comparable to that of volume. Data velocity is put into perspective by considering how much data can be created in a single minute: hundreds of hours of video uploaded to YouTube, over 300 million emails sent, millions of photos uploaded to Facebook, and millions of transactions handled by Walmart.

  • Variety: The next dimension of big data is variety. Big data solutions support multiple formats for storing structured as well as unstructured data, since unstructured forms of data cannot easily be fit into relational databases. This data variety leads to various challenges during data transformation, integration, processing, and storage.

  • Veracity: Veracity refers to the accuracy or quality of data. Because people across many domains deal with massive amounts of data generated at high velocity and in multiple forms, the accuracy level can never be 100%, owing to ambiguities, inconsistency, latency, and deception in the data.

  • Value: Value is one of the most important factors of big data, as it determines the potential worth of data. The higher the quality of data, the higher its value to an organization.
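The variety dimension above can be illustrated with a toy ingest routine that routes structured records to a tabular store and everything else to a raw store; the event shapes and store names are invented for illustration only:

```python
import json

# Hypothetical incoming events arriving in mixed formats.
events = [
    {"type": "transaction", "amount": 42.0, "user_id": 7},  # structured record
    '{"type": "tweet", "text": "big data!"}',               # semi-structured JSON string
    b"\x89PNG...raw image bytes...",                        # unstructured binary blob
]

tabular_store = []  # stands in for a relational table
raw_store = []      # stands in for a data-lake-style store

for event in events:
    if isinstance(event, dict):
        # Already fits a fixed schema: suitable for a relational table.
        tabular_store.append(event)
    elif isinstance(event, str):
        try:
            # Semi-structured: parse JSON and keep it as a flexible document.
            raw_store.append(json.loads(event))
        except json.JSONDecodeError:
            raw_store.append({"raw": event})
    else:
        # Unstructured bytes (images, audio, logs): store as-is.
        raw_store.append({"raw": event})

print(len(tabular_store), len(raw_store))  # → 1 2
```

The point is not the routing logic itself but the consequence of variety: a single relational schema cannot absorb all three events, so a big data solution must maintain multiple storage formats side by side.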
