1. Introduction
Most users in the world now have Internet access, and as the number of users increases, the data generated every day grows exponentially. Various sources, such as social media, news channels, telecommunications, scientific laboratories, and meteorological departments, generate massive volumes of data (Lv, 2019; Dong, et al., 2014). Traditional databases cannot store such enormous data, and conventional computing models are not capable of processing it (Sun, He, & Lu, 2012). Processing Big Data helps to uncover the hidden knowledge within it (Chang, et al., 2008). Google processes more than 20 petabytes of data each day (Dean & Ghemawat, 2008), and Facebook users generate more than 500 terabytes of data each day (Ghazi & Gangodkar, 2015). Such companies cannot store and process Big Data on a single server, as it is too large to fit and too tedious to compute (Shaw, Singh, & Tripathi, 2018), so they use the widely adopted Hadoop framework (Gu, et al., 2014). Hadoop is an efficient open-source framework that allows distributed storage and parallel processing of very large data sets (Shvachko, Kuang, Radia, & Chansler, 2010; Chang, et al., 2008).
Hadoop is an open-source project developed by Doug Cutting and Mike Cafarella in 2005 (White, 2012). It began as a batch-processing model, but the introduction of YARN (Yet Another Resource Negotiator) in MapReduce v2 made Hadoop more powerful in resource management and job scheduling (Ghemawat, Gobioff, & Leung, 2003). YARN splits the responsibilities of the JobTracker component into separate daemons: a global ResourceManager and a per-application ApplicationMaster, with a NodeManager running on each worker node. Hadoop has two core components: HDFS (Shvachko, Kuang, Radia, & Chansler, 2010) and MapReduce (Dean & Ghemawat, 2008).
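To make the MapReduce programming model concrete, the following is a minimal sketch of the canonical WordCount job written against the standard org.apache.hadoop.mapreduce API; the class names and the two command-line path arguments are illustrative.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit a (word, 1) pair for every token in the input split.
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts received for each distinct word.
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The user code only defines the per-record Map logic and the per-key Reduce logic; the framework handles splitting the input, distributing tasks across containers, and shuffling intermediate pairs between phases.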
Each MapReduce application has one ApplicationMaster to manage its containers and their status. The ApplicationMaster negotiates and acquires resources from the ResourceManager and the NodeManagers to schedule the Map and Reduce tasks. The ResourceManager allocates a set of system resources to each container (Ghazi & Gangodkar, 2015); currently, CPU cores and RAM are supported. It follows a static method to allocate the resources for a container: for each task, it allocates 2 CPU cores and 4 GB of RAM (see the configuration sketch below). This static resource allocation method over-allocates resources for some tasks and keeps the cluster resources underutilized (Guo, Fox, Zhou, & Ruan, 2012). The job scheduling strategy helps jobs finish early (Xu & Lau, 2014), but a scheduled job may or may not use the whole cluster's resources and can leave some of them idle (Cheng, Rao, Guo, Jiang, & Zhou, 2017; Bawankule, Dewang, & Singh, 2021).

Sharma and Ganpati (2015) studied the scheduling algorithms and evaluated their performance in Hadoop YARN on the Scheduler Load Simulator (SLS). Their article, however, does not test the schedulers in a multi-tenancy environment on mixed workloads to check resource utilization. Salman, Husna, Wicaksono, and Ratna (2018) studied the performance of the Fair and Capacity schedulers in a multi-tenancy environment with a mixed workload; still, they did not vary the load conditions while testing scheduler performance and did not raise the resource utilization issues in Hadoop YARN.
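To illustrate the static allocation described above, the sketch below fixes the same resource request (2 vcores and 4 GB of RAM) for every Map and Reduce container through the standard MapReduce job properties; the class name and the specific values are assumptions chosen to mirror the figures quoted in this section, not part of the cited works.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class StaticAllocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Static per-container request: every task of the job asks the
    // ResourceManager for the same fixed memory and vcore amounts,
    // regardless of what the task actually needs at runtime.
    conf.setInt("mapreduce.map.memory.mb", 4096);     // 4 GB per Map container
    conf.setInt("mapreduce.map.cpu.vcores", 2);       // 2 vcores per Map container
    conf.setInt("mapreduce.reduce.memory.mb", 4096);  // 4 GB per Reduce container
    conf.setInt("mapreduce.reduce.cpu.vcores", 2);    // 2 vcores per Reduce container

    Job job = Job.getInstance(conf, "statically-sized job");
    // ... configure the mapper, reducer, and I/O paths as usual, then submit:
    // job.waitForCompletion(true);
  }
}
```

Because every container receives the same fixed request, a lightweight task still reserves the full 4 GB and 2 vcores, which is exactly the over-allocation and underutilization the cited studies identify.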