Big Data Tools for Computing on Clouds and Grids

Forest Jay Handford
DOI: 10.4018/978-1-5225-3142-5.ch006

Abstract

The number of tools available for Big Data processing has grown exponentially as cloud providers have introduced solutions for businesses that have little or no money for capital expenditures. The chapter starts by discussing historic data tools and their evolution into those of today. With Cloud Computing, the need for upfront costs has been removed, costs continue to fall, and costs can be negotiated. This chapter reviews the current types of Big Data tools and how they evolved. To give readers an idea of costs, the chapter shows example costs (in today's market) for a sampling of the tools, along with relative cost comparisons for other tools, such as the Grid tools used by government, scientific, and academic communities. Readers will take away from this chapter an understanding of which tools work best for several scenarios and how to select cost-effective tools (even tools that are unknown today).
Chapter Preview

Storage Systems

The most crucial component of Big Data tools is the underlying storage system. The exception is organizations that process data as soon as they receive it and do not need it afterward; these organizations are fortunate in that they never have to save the data to storage.
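
As a minimal illustration of that exception, the sketch below (plain Python, with a hypothetical event source standing in for a message queue or sensor feed) keeps only a running aggregate in memory and discards each record after processing, so no storage system is ever involved.

    import random

    def event_stream(n=1000):
        # Hypothetical event source standing in for a message queue or sensor feed.
        for _ in range(n):
            yield {"bytes_sent": random.randint(100, 10000)}

    def process_stream(events):
        # Process each event as it arrives; keep only a running total, never the raw records.
        total = 0
        count = 0
        for event in events:
            total += event["bytes_sent"]
            count += 1
        return total / count if count else 0.0

    if __name__ == "__main__":
        print("Average bytes per event:", process_stream(event_stream()))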

Key Terms in this Chapter

Stream Processing: A subset of Real Time Processing that processes data only as it is received.

Simple Storage Service (S3): Amazon Web Services’ binary large object storage solution.
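
To show the kind of call involved, here is a minimal sketch using the boto3 library; the bucket name, key, and payload are placeholders, and working AWS credentials are assumed to be configured in the environment.

    import boto3

    # Assumes AWS credentials are already configured (environment variables, ~/.aws, or an IAM role).
    s3 = boto3.client("s3")

    # Hypothetical bucket and key names.
    bucket = "example-analytics-bucket"
    key = "logs/2017-01-01.json"

    # Store a binary large object.
    s3.put_object(Bucket=bucket, Key=key, Body=b'{"event": "page_view"}')

    # Retrieve it again.
    response = s3.get_object(Bucket=bucket, Key=key)
    print(response["Body"].read())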

Lambdoop: A discontinued lambda architecture product by the company of the same name.

Structured Query Language (SQL): An open standard language for querying and managing relational databases.
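
As a concrete example, the sketch below issues standard SQL statements through Python's built-in sqlite3 module; the table and data are invented for illustration.

    import sqlite3

    # In-memory relational database for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("east", 120.0), ("west", 340.5), ("east", 99.9)])

    # A standard SQL query: total sales per region.
    for region, total in conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"):
        print(region, total)
    conn.close()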

Google BigQuery: Google’s batch processing Big Data product that is available in the public cloud.

Lambda Architecture: A hybrid of real time and batch systems for processing Big Data.
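
A highly simplified sketch of the idea in plain Python: a view precomputed by the batch layer is merged at query time with a small view maintained by the real-time (speed) layer. The names and numbers are illustrative, not any particular product's API.

    # Batch layer: a view precomputed over all historical data (e.g., by a nightly Hadoop job).
    batch_view = {"page_views": 1000000}

    # Speed layer: counts for events that arrived since the last batch run.
    realtime_view = {"page_views": 2345}

    def query(metric):
        # Serving layer: merge the batch view with the real-time delta.
        return batch_view.get(metric, 0) + realtime_view.get(metric, 0)

    print(query("page_views"))  # 1002345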

Real Time Processing: A system that processes data as it is received, or that can answer an organization’s question from its existing data almost instantaneously.

Elastic MapReduce (EMR): An AWS cloud MapReduce product.

Binary Large Object (BLOB) Storage: A storage system that is sold by public cloud providers allowing customers to store files of any type and almost any size.

Azure: Microsoft's public cloud offering.

Quantum Computing: Computer systems that store data in qubits rather than binary bits.

Public Cloud/Cloud: IT infrastructure that is sold over the internet to customers. Public cloud vendors sell a plethora of services including virtual machines, storage and Big Data processing tools.

Elastic Compute Cloud (EC2): The Infrastructure as a Service (IaaS) offering of AWS that allows customers to rent virtual machines in the cloud.
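
As a sketch of what renting a virtual machine looks like in code, the call below uses the boto3 library; the machine image ID is a placeholder and configured AWS credentials are assumed.

    import boto3

    ec2 = boto3.client("ec2")

    # Launch a single small virtual machine; the image ID here is a placeholder.
    response = ec2.run_instances(
        ImageId="ami-00000000",   # hypothetical machine image
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])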

Relational Database Management System (RDBMS): A system that stores data in tables of rows and columns and defines relationships between tables.

IT: Information Technology.

Hadoop: An open source software tool maintained by Apache for batch processing Big Data using MapReduce.
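
To show the MapReduce model the definition refers to, here is a minimal word-count sketch in plain Python that simulates the map, shuffle, and reduce phases locally; a real Hadoop job would distribute the same logic across a cluster.

    from collections import defaultdict

    def map_phase(line):
        # Map: emit a (word, 1) pair for every word in a line of input.
        return [(word.lower(), 1) for word in line.split()]

    def reduce_phase(key, values):
        # Reduce: sum the counts emitted for a single word.
        return key, sum(values)

    def mapreduce(lines):
        # Simulate the shuffle step that Hadoop performs between map and reduce.
        grouped = defaultdict(list)
        for line in lines:
            for key, value in map_phase(line):
                grouped[key].append(value)
        return dict(reduce_phase(k, v) for k, v in grouped.items())

    if __name__ == "__main__":
        text = ["big data tools", "big clusters process big data"]
        print(mapreduce(text))  # {'big': 3, 'data': 2, 'tools': 1, ...}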

Private Cloud: IT infrastructure within an organization on which only that organization can allocate processes, virtual machines, and storage. Some private clouds have no access to the internet at all, while others provide services to the organization's customers through the internet.

Network Storage System (NeST): An open source storage solution for data grids.

Batch Processing: A way of processing data in jobs that take hours, days, or weeks to complete. A batch job is started to answer an organization’s question from its existing data.
