Energy Efficient Level by Level Scheduling for Multiple Workflows in Cloud

Ritu Garg (National Institute of Technology, Kurukshetra, India) and Neha Shukla (Department of Computer Engineering, National Institute of Technology, Kurukshetra, India)
Copyright: © 2019 |Pages: 16
DOI: 10.4018/IJSI.2019070106


Cloud computing provides a paradigm for hosting a large number of services by offering configurable computing resources on demand. As a result, multiple applications, represented as multiple workflows, are hosted simultaneously, which requires large-scale data centers. Reducing energy consumption is one of the major concerns of large-scale data centers. In this article, the authors develop a multiple-workflow scheduling heuristic that aims to reduce energy consumption along with execution time. In the proposed approach, tasks are first considered level by level, following the precedence constraints, so that tasks from multiple DAGs can run in parallel. The tasks at each level are considered in order of their ranking and scheduled on an efficient processor based on their estimated finish time, where idle slots of resources are used to reduce the overall makespan and energy consumption. Finally, DVFS is applied during idle slots and the communication phase to reduce energy consumption further by scaling the frequency to an appropriate level.

1. Introduction

Cloud computing facilitates end users by providing virtualized services in a parallel and distributed system. Users avail themselves of these services on a pay-as-you-go model in the form of infrastructure, platform, and software as a service (Zomaya, Albert, & Young Choon Lee, 2012). Since the cloud dynamically provisions resources to satisfy variations in demand, it enables users to host pervasive applications from different domains such as e-business, science, and consumer services; the execution of such applications is represented in the form of workflows (Junwei et al., 2003). Scheduling multiple workflows in parallel is one of the major challenges; it deals with mapping the tasks of multiple workflows onto the available resources while maintaining the precedence constraints and guaranteeing fairness in the execution of the workflows.

Nowadays, the deployment of large-scale data centers is increasing tremendously to meet user demand, which leads to a huge amount of energy consumption. According to Koomey's report (Koomey, 2011), data centers consume nearly 2% of the total global energy with poor power usage effectiveness. The high energy consumption in data centers not only contributes to high electricity bills but also reduces system reliability and availability. Additionally, high energy consumption causes large carbon dioxide emissions, leading to global warming (Greenberg, 2008), and adds to other operating costs such as cooling and maintenance. Hence, reducing energy consumption in cloud computing is a major concern. Thus, we propose an efficient scheduling algorithm for multiple workflows that aims to minimize energy consumption along with execution time.

In this paper, we deal with the problem of scheduling multiple workflows. An obvious solution is to schedule the workflows in sequence, one after the other. The problem with this solution is that different workflows (depending on their structure) may leave resources idle, resulting in a very large overall makespan and energy consumption. Further, with this solution, the makespan of each workflow depends heavily on the sequence in which the workflows are considered. Hence, we need an efficient scheduling algorithm that shares the resources equally and guarantees fairness in execution among the arriving workflows. In the proposed approach, we consider all workflows simultaneously to ensure fairness by merging them into one: we add a single entry node and a single exit node with zero computation and communication cost and connect them to the start and end nodes of each workflow. Further, we utilize the idle slots that resources have under one (small) workflow to run tasks of other (large) workflows, and we apply DVFS in order to minimize the overall makespan and energy consumption. The proposed algorithm works in three phases: a level distribution phase, a task scheduling phase, and a voltage/frequency scaling phase. In the level distribution phase, the tasks of the multiple workflows are divided into levels on the basis of their precedence constraints, so that each level contains independent tasks that can be scheduled in parallel in the order of their ranking. In the next phase, the tasks of each level are scheduled according to their earliest finish time, utilizing idle slots of the resources, which minimizes the overall makespan and energy consumption of the multiple DAGs.
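The merging and level-distribution steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each workflow is given as an adjacency dict (task → list of successors), adds the zero-cost entry/exit nodes, and assigns each task a level equal to its longest precedence path from the entry node, so that every level contains only independent tasks. The function name and data representation are hypothetical.

```python
from collections import defaultdict

def merge_and_level(dags):
    """Merge multiple workflow DAGs by adding a common zero-cost entry
    and exit node, then partition tasks into levels by their longest
    precedence path from the entry node. Each DAG is a dict mapping a
    task to a list of its successor tasks."""
    ENTRY, EXIT = "entry", "exit"
    merged = defaultdict(list)   # merged adjacency list
    preds = defaultdict(set)     # predecessor sets for indegrees
    for dag in dags:
        tasks = set(dag) | {s for succs in dag.values() for s in succs}
        successors = {s for succs in dag.values() for s in succs}
        starts = tasks - successors          # no predecessors in this DAG
        ends = {t for t in tasks if not dag.get(t)}  # no successors
        for t in starts:                     # connect common entry node
            merged[ENTRY].append(t)
            preds[t].add(ENTRY)
        for t, succs in dag.items():         # copy the DAG's own edges
            for s in succs:
                merged[t].append(s)
                preds[s].add(t)
        for t in ends:                       # connect common exit node
            merged[t].append(EXIT)
            preds[EXIT].add(t)
    # Topological sweep: level(t) = 1 + max level over t's predecessors.
    level = {ENTRY: 0}
    indeg = {t: len(p) for t, p in preds.items()}
    queue = [ENTRY]
    while queue:
        u = queue.pop()
        for v in merged[u]:
            level[v] = max(level.get(v, 0), level[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    levels = defaultdict(list)
    for t, l in level.items():
        if t not in (ENTRY, EXIT):
            levels[l].append(t)
    return dict(levels)
```

For two toy workflows a→b and x→y, the sketch places a and x together in level 1 and b and y in level 2, so tasks from both DAGs can be dispatched in parallel at each level.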
Finally, in the third phase, the Dynamic Voltage/Frequency Scaling (DVFS) technique (Dakai, Melhem, & Childers, 2003) is applied to reduce the processor frequency to an appropriate level during idle or communication phases, which reduces energy consumption further.
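A rough sketch of why frequency scaling during idle and communication phases saves energy, under the standard CMOS assumption that dynamic power is P = C·V²·f with supply voltage V roughly proportional to frequency f, so power scales approximately with f³. The function and its parameters are illustrative, not from the paper:

```python
def dvfs_energy(base_power_w, base_freq, scaled_freq, duration_s):
    """Approximate dynamic energy (joules) consumed over an interval of
    duration_s seconds when the processor runs at scaled_freq instead of
    base_freq, assuming dynamic power scales with the cube of frequency
    (P = C * V^2 * f, with V proportional to f)."""
    ratio = scaled_freq / base_freq
    return base_power_w * ratio ** 3 * duration_s
```

Under this model, halving the frequency of a 100 W processor during a 10-second idle slot drops the dynamic energy for that slot from 1000 J to 125 J, which is why scaling down during slots where no computation is pending is attractive.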
