Flexible MapReduce Workflows for Cloud Data Analytics

Carlos Goncalves, Luis Assuncao, Jose C. Cunha
Copyright: © 2013 | Pages: 17
DOI: 10.4018/ijghpc.2013100104

Abstract

Data analytics applications handle large data sets subject to multiple processing phases, some of which can execute in parallel on clusters, grids or clouds. Such applications can benefit from the MapReduce model, which only requires the end-user to define the application algorithms for input data processing and the map and reduce functions, but this poses a need to install and configure specific frameworks such as Apache Hadoop or Elastic MapReduce in the Amazon Cloud. In order to provide more flexibility in defining and adjusting the application configurations, as well as in specifying the composition of the application phases and their orchestration, the authors describe an approach for supporting MapReduce stages as sub-workflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). The authors discuss how a text mining application is represented as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. Access to intermediate data produced during the MapReduce computations is supported by a data sharing abstraction. The authors describe two implementations of this abstraction, one based on a shared tuple space and another based on an in-memory distributed key/value store. The authors describe the implementation of the framework, a set of developed tools, and their experimentation with the execution of the text mining algorithm over multiple Amazon EC2 (Elastic Compute Cloud) instances, and report on the speed-up and size-up results obtained for up to 20 EC2 instances and for different corpus sizes, up to 97 million words.
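As a purely illustrative sketch of the data sharing abstraction described above, and not the AWARD API itself, the following Java snippet defines a minimal key/value contract for intermediate MapReduce data together with an in-memory backend; the names `IntermediateStore` and `InMemoryStore` are assumptions introduced here for illustration, and a tuple-space or distributed key/value backend could implement the same interface.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of a data sharing abstraction for intermediate
// MapReduce data; AWARD's actual interfaces may differ. The same contract
// could be backed by a shared tuple space or a distributed key/value store.
interface IntermediateStore<K, V> {
    void put(K key, V value);   // publish one intermediate (key, value) pair
    List<V> get(K key);         // retrieve all values grouped under a key
    Iterable<K> keys();         // enumerate keys for the reduce phase
}

// In-memory, single-process backend standing in for the distributed
// key/value store variant mentioned in the abstract.
class InMemoryStore<K, V> implements IntermediateStore<K, V> {
    private final Map<K, List<V>> data = new ConcurrentHashMap<>();

    @Override
    public void put(K key, V value) {
        data.computeIfAbsent(key, k -> new CopyOnWriteArrayList<>()).add(value);
    }

    @Override
    public List<V> get(K key) {
        return data.getOrDefault(key, List.of());
    }

    @Override
    public Iterable<K> keys() {
        return data.keySet();
    }
}
```

Keeping the abstraction this narrow is what allows the two implementations reported in the article (tuple space and in-memory key/value store) to be swapped without changing the map and reduce logic of the workflow nodes.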
Article Preview

Recent research in data analytics applications (Melnik et al., 2011) is influencing developments in programming models, support tools, and environments. The need to process large-scale data sets requires runtime capabilities for exploiting data parallelism automatically and dynamically. There is also a requirement for decomposing applications into functionally specialized processing stages, with efficient data transfer and communication between stages.

These concerns have been addressed through several approaches based on the MapReduce model (Dean & Ghemawat, 2004), which emerged as a widely used programming model due to its conceptually simple programming interface, requiring only the specification of the input data format and the Map and Reduce functions, and because it enables efficient implementations on different platforms and architectures (Dean & Ghemawat, 2004; Apache Hadoop, 2012; Grossman & Gu, 2008; Amazon EMR, 2012; Riteau et al., 2011; Gunarathne et al., 2010; He et al., 2008).
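To make that interface concrete, here is a minimal, framework-independent Java sketch of Map and Reduce functions for word counting, the kind of computation performed by the text mining application discussed in the article; the class and method names are illustrative assumptions and are not tied to Hadoop, Amazon EMR, or AWARD.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative, framework-independent word-count example of the MapReduce
// programming interface; all names here are hypothetical.
public class WordCountSketch {

    // Map function: emit a (word, 1) pair for each token in one line of input.
    static List<SimpleEntry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> new SimpleEntry<>(w, 1))
                .collect(Collectors.toList());
    }

    // Reduce function: sum all counts emitted for one word.
    static int reduce(String word, List<Integer> counts) {
        return counts.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<String> lines = List.of("to be or not to be", "to know is to know");

        // Shuffle/group step, normally handled by the runtime: group the
        // intermediate (word, 1) pairs by word before reducing.
        Map<String, List<Integer>> grouped = lines.stream()
                .flatMap(l -> map(l).stream())
                .collect(Collectors.groupingBy(SimpleEntry::getKey,
                        Collectors.mapping(SimpleEntry::getValue, Collectors.toList())));

        grouped.forEach((word, counts) ->
                System.out.println(word + " -> " + reduce(word, counts)));
    }
}
```

The end-user writes only the map and reduce logic; the grouping shown in main is what a MapReduce runtime, or in this article a MapReduce sub-workflow, performs between the two phases.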
