“Saksham Model” Performance Improvisation Using Node Capability Evaluation in Apache Hadoop

Ankit Shah, Mamta C. Padole
Copyright © 2020 | Pages: 25
DOI: 10.4018/978-1-5225-9750-6.ch012

Abstract

Big Data processing and analysis require tremendous processing capability. Distributed computing brings many commodity systems together under a common platform to meet this need. Apache Hadoop is the most widely used set of tools for Big Data storage, processing, and analysis. However, Hadoop proves inefficient on heterogeneous clusters, whose computers have different processing capabilities. In this research, we propose the Saksham model, which optimizes processing time through efficient use of node processing capability and file management. To achieve better performance, the Saksham model exploits two vital aspects of heterogeneous distributed computing: an effective block rearrangement policy and the use of node processing capability. The results demonstrate that the proposed model achieves shorter job execution times and improved data locality.
Chapter Preview

Apache Hadoop

Apache Hadoop (Hadoop.apache.org, 2018) is an open-source framework developed for distributed computing over Big Data. Hadoop has become widely popular because of its adaptability to commodity hardware. Hadoop performs better in homogeneous environments than in heterogeneous ones (Dean and Ghemawat, 2008). Hadoop comprises three important components: the Hadoop Distributed File System (HDFS), Yet Another Resource Negotiator (YARN), and MapReduce.

  • 1. HDFS (Shvachko et al., 2010): It splits a dataset holding Big Data into multiple blocks and stores them across DataNodes in the distributed file system; the NameNode maintains the metadata for the distributed blocks. The first sketch after this list shows how a client can inspect where those blocks are placed.

  • 2. YARN (Vavilapalli et al., 2013): It separates the resource management layer from the processing components layer and is responsible for managing the resources of a Hadoop cluster.

  • 3. MapReduce (Dean and Ghemawat, 2008): It is a programming framework on top of YARN, responsible for processing Big Data, that enables enormous scalability across the thousands of computing devices in a Hadoop cluster; the word-count sketch after this list illustrates the programming model.
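
To make HDFS block placement concrete, the following is a minimal sketch, assuming a reachable HDFS deployment whose core-site.xml and hdfs-site.xml are on the classpath, that lists each block of a file together with the DataNodes holding its replicas, using the standard org.apache.hadoop.fs.FileSystem API. The command-line path is a hypothetical example. The hosts reported per block are exactly what a locality-aware scheduler consults, which is the property the Saksham model seeks to improve.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationInspector {
    public static void main(String[] args) throws Exception {
        // Reads core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path(args[0]); // e.g. /data/input.txt (hypothetical path)
        FileStatus status = fs.getFileStatus(file);

        // One BlockLocation per HDFS block, listing the DataNodes that hold its replicas.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (int i = 0; i < blocks.length; i++) {
            System.out.printf("block %d: offset=%d length=%d hosts=%s%n",
                    i, blocks[i].getOffset(), blocks[i].getLength(),
                    String.join(",", blocks[i].getHosts()));
        }
        fs.close();
    }
}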
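
The MapReduce programming model is easiest to see in the canonical word-count example. Below is a minimal, self-contained sketch against the standard org.apache.hadoop.mapreduce API; the input and output paths are supplied as command-line arguments and are assumptions of this example, not part of the chapter. The map phase emits a (word, 1) pair per token, a combiner pre-aggregates locally to cut shuffle traffic, and the reduce phase sums the counts per word; YARN schedules the map tasks, preferably on the nodes holding the corresponding HDFS blocks.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every token in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local aggregation reduces shuffle traffic
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar, it would run as, for example, hadoop jar wordcount.jar WordCount /data/in /data/out (hypothetical paths).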

Figure 1. Hadoop 2.0 architecture
