Energy Efficient Scheduling for Multiple Workflows in Cloud Environment

Ritu Garg (Department of Computer Engineering, National Institute of Technology, Kurukshetra, India) and Neha Shukla (Department of Computer Engineering, National Institute of Technology, Kurukshetra, India)
DOI: 10.4018/IJITWE.2018070102

Cloud computing makes utility computing possible with a pay-as-you-go model. It virtualizes systems by pooling and sharing resources, so more than one workflow must often be handled at the same time. Workflows are the standard representation for compute-intensive applications in the scientific and engineering domains. Hence, in this article, the authors present a scheduling heuristic for multiple workflows running in parallel in the cloud environment, with the aim of reducing energy consumption, one of the major concerns of cloud data centers, alongside execution performance. In the proposed approach, clustering is performed first to minimize the energy consumption and execution time incurred by communication between precedence-constrained tasks. The clusters are then scheduled on the best available energy-efficient resources. Finally, DVFS is applied to further reduce energy consumption while nodes are in the idle and communication stages. Simulation has been performed on CloudSim, and the results show a reduction in energy consumption of up to 42%.

1. Introduction

Cloud computing enables users to host diverse applications on the cloud by providing high computing capability, virtualization and scaling (Topcuoglu, Hariri, & Wu, 2002). As a result, large-scale business, scientific and engineering applications, which take the form of multiple workflows (Cao, Jarvis, Saini, & Nudd, 2003), must be scheduled simultaneously. Most studies in the literature (Topcuoglu, Hariri, & Wu, 2002; Cao et al., 2003; Yu & Buyya, 2005; Dorronsoro et al., 2014) consider scheduling for a single workflow, whereas there is a strong need to handle multiple workflows at the same time. Realizing such applications requires large-scale data centers, which consume large amounts of electric power. Reducing energy consumption in data centers is a major issue, as high consumption not only leads to high electricity bills but also increases carbon emissions and cooling costs and reduces system reliability. According to a survey (Greenberg et al., 2008), a data center with 50,000 computing nodes consumes more than 100 million kWh per year, and this consumption is increasing exponentially every year. At the same time, average CPU utilization lies between 10% and 50%, so the energy efficiency of data centers remains poor. Prior work on scheduling workflow applications (Taylor, Deelman, Gannon, & Shields, 2014; Bittencourt & Madeira, 2007; Braun et al., 2001) focuses only on reducing makespan, without regard for energy consumption. In this paper, by contrast, we consider energy-efficient scheduling of multiple workflows along with fairness in their execution.

In this work, our main objective is to schedule multiple workflows on the available computing resources while reducing both the energy consumption and the execution time of workflow applications. The proposed algorithm works in three phases: clustering and priority distribution, cluster scheduling, and voltage and frequency scaling. In the first phase, we use clustering to reduce the communication energy as well as the execution time of the workflow, since all jobs without any interleaved communication are placed on one processor. In the second phase, we select the most energy-efficient processor for each cluster formed in the previous phase. Finally, in the third phase, we apply the widely used Dynamic Voltage/Frequency Scaling (DVFS) technique (Zhu, Melhem, & Childers, 2003), which varies the voltage and frequency of the processor to further reduce energy consumption during the idle and communication phases.
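The first phase can be illustrated with a minimal clustering sketch. This is not the authors' implementation; the function name, data structures and the greedy merge rule (collapse an edge when the parent has a single child and the child a single parent, so the communication cost between them becomes zero) are illustrative assumptions:

```python
def linear_clusters(tasks, edges):
    """Greedy linear clustering sketch: merge a task with its successor
    when they form a pure chain, zeroing the communication edge.
    tasks: list of task ids; edges: {(u, v): communication_cost}."""
    preds = {t: [] for t in tasks}
    succs = {t: [] for t in tasks}
    for (u, v) in edges:
        succs[u].append(v)
        preds[v].append(u)

    cluster_of = {t: t for t in tasks}  # union-find style representative

    def find(t):
        while cluster_of[t] != t:
            t = cluster_of[t]
        return t

    # Examine heavier communication edges first, since zeroing them
    # saves the most communication energy and time.
    for (u, v) in sorted(edges, key=lambda e: -edges[e]):
        if succs[u] == [v] and preds[v] == [u]:
            cluster_of[find(v)] = find(u)

    groups = {}
    for t in tasks:
        groups.setdefault(find(t), []).append(t)
    return list(groups.values())

tasks = ["a", "b", "c", "d"]
edges = {("a", "b"): 5, ("b", "c"): 3}
print(linear_clusters(tasks, edges))  # chain a->b->c collapses into one cluster
```

Each resulting cluster would then be placed on a single processor in phase two, so the intra-cluster edges no longer contribute communication cost.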

We have also taken care of fairness in workflow execution, since multiple workflows running in the same distributed environment can conflict, for example over their order of execution (Bittencourt & Madeira, 2008). A workflow executed first has an advantage, in terms of smaller makespan, over a workflow executed later. Therefore, when scheduling multiple workflows simultaneously, we should guarantee fairness in allocating resources among the arriving workflows. For this purpose, we consider two different approaches: group workflows scheduling and sequential workflows scheduling. In group workflows scheduling, all DAGs are considered simultaneously by merging them into a single DAG: the start nodes of all DAGs are connected to a single dummy node (representing a common entry node) with zero computation and communication cost, and similarly the end nodes of all DAGs are connected to a single dummy node (representing a common exit node) with zero computation and communication cost. In the second approach, sequential workflows scheduling, DAGs are considered in the order of their arrival, i.e., in sequence one after another. Both techniques have been used independently to schedule the multiple workflow applications. The major contributions of this paper are:
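The group workflows merging step can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the representation of a DAG as a successor-list dict, the `ENTRY`/`EXIT` node names and the `w{i}:` prefixing are all assumptions:

```python
def merge_dags(dags):
    """Merge several DAGs into one by attaching all start nodes to a
    dummy ENTRY node and all end nodes to a dummy EXIT node, both with
    zero computation and communication cost.
    dags: list of {task: [successor, ...]} dicts."""
    merged = {"ENTRY": [], "EXIT": []}
    for i, dag in enumerate(dags):
        rename = lambda t, i=i: f"w{i}:{t}"  # keep workflows distinguishable
        for task, succ in dag.items():
            merged[rename(task)] = [rename(s) for s in succ]
        have_pred = {s for succ in dag.values() for s in succ}
        starts = [t for t in dag if t not in have_pred]    # no predecessor
        ends = [t for t, succ in dag.items() if not succ]  # no successor
        merged["ENTRY"] += [rename(t) for t in starts]     # zero-cost edges
        for t in ends:
            merged[rename(t)].append("EXIT")
    return merged

workflows = [{"a": ["b"], "b": []}, {"x": []}]
print(merge_dags(workflows)["ENTRY"])  # entry tasks of both workflows hang off the dummy node
```

Because every workflow's start node is now an immediate child of the same zero-cost entry node, a standard single-DAG scheduler can rank all tasks together, which is what gives each arriving workflow a fair chance at the resources.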
