On the Exploration of Equal Length Cellular Automata Rules Targeting a MapReduce Design in Cloud

Arnab Mitra, Anirban Kundu, Matangini Chattopadhyay, Samiran Chattopadhyay
Copyright: © 2018 |Pages: 26
DOI: 10.4018/IJCAC.2018040101


A MapReduce design based on Cellular Automata (CA) is presented in this research article to facilitate load-reduced, independent data processing and cost-efficient physical implementation in heterogeneous Cloud architectures. Equal Length Cellular Automata (ELCA) are considered for the design. This article explores ELCA rules and presents an ELCA-based MapReduce design in the cloud. New algorithms are presented for (i) the synthesis of ELCA rules, (ii) the classification of ELCA rules, and (iii) the ELCA-based MapReduce design in the Cloud. Shuffling and efficient reduction of data volume are ensured in the proposed MapReduce design.
Article Preview


Many of our daily-life applications are now based on Cloud Computing, which is described as "computing as a service over the Internet" on a "pay-for-use" basis (see "http://searchcloudcomputing.techtarget.com/definition/MapReduce"). A typical MapReduce cluster comprises the following components:

  • JobTracker (JT): The node (server) responsible for managing all jobs and resources in a cluster;

  • TaskTrackers (TT): Agents positioned at each node (client) in the cluster to execute map and reduce tasks;

  • JobHistoryServer (JHS): A component that keeps track of the completed components of a job; typically implemented as a separate service or co-located with the JT.
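The division of responsibilities above can be sketched as follows. This is a minimal illustrative model, not Hadoop's actual API; all class and method names here are hypothetical:

```python
# Illustrative sketch of the three coordination roles described above.
# Class and method names are hypothetical, not Hadoop's real interfaces.

class TaskTracker:
    """Agent on a worker node that executes map and reduce tasks."""
    def __init__(self, node_id):
        self.node_id = node_id

    def run_task(self, task):
        # Stand-in for actually executing a map or reduce task.
        return f"{task} done on {self.node_id}"

class JobHistoryServer:
    """Keeps a record of completed job components."""
    def __init__(self):
        self.history = []

    def record(self, event):
        self.history.append(event)

class JobTracker:
    """Master node: assigns tasks to trackers and logs completions."""
    def __init__(self, trackers, history_server):
        self.trackers = trackers
        self.history = history_server

    def submit_job(self, tasks):
        # Round-robin assignment across the available TaskTrackers.
        for i, task in enumerate(tasks):
            tt = self.trackers[i % len(self.trackers)]
            self.history.record(tt.run_task(task))

jhs = JobHistoryServer()
jt = JobTracker([TaskTracker("node-1"), TaskTracker("node-2")], jhs)
jt.submit_job(["map-0", "map-1", "reduce-0"])
print(jhs.history)
```

The round-robin assignment stands in for the JT's scheduling logic; real schedulers also weigh data locality and node load.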

Fault tolerance is an essential criterion for MapReduce computing: each node periodically reports its status to a master node, and when a node fails to report, the master re-assigns its job to other available computing units. Applications of MapReduce (Zhou et al., 2011; Koundinya et al., 2012; Padhy et al., 2013; Yang et al., 2014) and of 'Hadoop' (Yamamoto et al., 2012; Yang et al., 2012; Lin et al., 2013; Liu et al., 2015; Río et al., 2015) have been described for big data processing in clouds. 'Hadoop' is a popular third-party deployment of MapReduce in distributed computing, and it provides a distributed computing architecture for big data. The "Hadoop Distributed File System" (HDFS) stores multiple copies of data blocks in available clusters of computing nodes; a file in HDFS is decomposed into chunks, and each chunk is assigned to a different working machine. Equal load distribution is an important criterion in MapReduce computation; hence, a cost-effective approach towards equally load-distributed MapReduce may be beneficial in clouds.
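The heartbeat-based fault tolerance described above can be sketched as below. This is a simplified illustration under assumed names (`reassign_stale_tasks`, a fixed timeout), not the actual Hadoop mechanism:

```python
# Hypothetical sketch of heartbeat-based fault tolerance: the master
# moves a worker's tasks elsewhere when its last heartbeat is too old.

HEARTBEAT_TIMEOUT = 3.0  # assumed: seconds of silence before reassignment

def reassign_stale_tasks(assignments, heartbeats, now, timeout=HEARTBEAT_TIMEOUT):
    """assignments: task -> worker id; heartbeats: worker id -> last report time.
    Returns a new assignment map with tasks of silent workers reassigned."""
    healthy = [w for w, t in heartbeats.items() if now - t <= timeout]
    if not healthy:
        raise RuntimeError("no healthy workers available")
    new_assignments = {}
    idx = 0
    for task, worker in assignments.items():
        if now - heartbeats.get(worker, 0.0) > timeout:
            # Owner missed its heartbeat: hand the task to a live node.
            worker = healthy[idx % len(healthy)]
            idx += 1
        new_assignments[task] = worker
    return new_assignments

# node-2 last reported at t=2.0, so at t=12.0 it is considered failed.
assignments = {"chunk-0": "node-1", "chunk-1": "node-2", "chunk-2": "node-1"}
heartbeats = {"node-1": 10.0, "node-2": 2.0}
print(reassign_stale_tasks(assignments, heartbeats, now=12.0))
```

In a real deployment the reassigned task would restart from a replicated HDFS chunk, which is why HDFS keeps multiple copies of each data block.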
