Power-Aware Mechanism for Scheduling Scientific Workflows in Cloud Environment

Kirankumar V. Kataraki, Sumana Maradithaya
Copyright: © 2021 | Pages: 17
DOI: 10.4018/IJISMD.2021010102

Abstract

Cloud computing is a platform that hosts various services and applications, giving users and businesses access to computing as a service. Cloud providers offer two distinct types of plans: reserved service and on-demand service. Cloud resources need to be allocated efficiently, and tasks need to be scheduled efficiently, so that performance can be enhanced. In this research work, the authors propose a novel mechanism named PAMP (performance-aware mechanism for parallel computation) for scheduling scientific workflows. First, resources are allocated using the optimal resource allocation mechanism. Then tasks are scheduled in parallel using the task scheduling algorithm. Further, energy and time are considered as constraints on makespan optimization. The evaluation is carried out on the CyberShake scientific workflow and its different variants, and a comparative analysis is performed by varying the number of virtual machines. The proposed methodology outperforms the existing model.

Introduction

Recent developments in technologies and experimental methods have allowed researchers to generate vast amounts of data in record time (Singh et al., 2019). The demand for computing power and storage is met with an increase in the provision of infrastructure, which has driven the popularity of new utility computing infrastructures such as grid and cloud computing (Nayyar et al., 2011). These computing systems provide data storage and powerful computation, and they have been widely utilized for scientific workflows. Recently, cloud computing has emerged as an efficient and effective way to accomplish resource provisioning. Because the infrastructure is centrally managed, consumers can access resources on demand and are charged on a "pay-as-you-go" basis (Dutta et al., 2012; Moss, 2016). Owing to the elasticity of cloud infrastructure, an increasing number of consumers choose a cloud provider to run scientific workflows and business applications. Many scientific applications in domains such as physics, astronomy, bioinformatics, and astrophysics can be modeled as workflows. Cloud computing is a service-oriented computing model, spanning both service- and infrastructure-level offerings, that gives consumers on-demand computing capabilities. To use the modern virtualization model, which enables dynamic resource allocation and job allocation, cloud service consumers submit their requests to the cloud computing system. Figure 1 depicts the concept of hypervisor-based virtualization. Based on consumer requests, the cloud infrastructure automatically manages the required hardware resources. For cloud service infrastructure, the critical issue is how to optimize the allocation of resources to scheduled applications and hardware infrastructure in such a way that processing cost is reduced (Nayyar, 2019).

Scientific workflows comprise numerous tasks and therefore need a significant number of computing resources at execution time. Those computing resources are provisioned by the cloud infrastructure. The tasks contained in a scientific workflow have communications and dependencies among them, so the cloud's management system must assign resources for the execution of the workflow. The computing resources are provided in the form of virtual machines within the cloud platform (Orgerie et al., 2014).
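The dependency structure described above is conventionally modeled as a directed acyclic graph (DAG), and tasks must be released for execution only after their predecessors complete. The sketch below is illustrative, not the paper's algorithm: the task names and dependency map are hypothetical, and the ordering uses Kahn's topological sort.

```python
from collections import deque

# Hypothetical workflow DAG: each task maps to the tasks it depends on.
workflow = {
    "extract": [],                   # no prerequisites
    "align_a": ["extract"],          # must run after "extract"
    "align_b": ["extract"],
    "merge":   ["align_a", "align_b"],
}

def topological_order(dag):
    """Order tasks so every task appears after its dependencies (Kahn's algorithm)."""
    indegree = {t: len(deps) for t, deps in dag.items()}
    children = {t: [] for t in dag}
    for task, deps in dag.items():
        for d in deps:
            children[d].append(task)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for child in children[task]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    return order

print(topological_order(workflow))
```

A scheduler can hand each task in this order to a free virtual machine; independent tasks such as `align_a` and `align_b` may then run in parallel.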

Usually, the virtual machines used by different applications are characterized by several configuration parameters, including the amount of memory, the number of CPU cores, and disk capacity. Because executing a scientific workflow on a cloud platform incurs significant energy consumption, it is important to deploy virtual machines in energy-efficient ways. Consequently, the energy consumption of cloud platforms has gained much attention worldwide. In a cloud data center, a vast amount of energy is consumed by running servers, consoles, cooling systems, monitors, fans, network peripherals, and processors (Beloglazov et al., 2011). It is therefore essential to execute scientific workflows in an energy-aware manner within the cloud platform.
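The configuration parameters listed above can be captured in a simple record, and a VM's power draw is often approximated with a linear utilization model. This is a minimal sketch; the VM sizes, wattages, and the linear model itself are illustrative assumptions, not figures from the paper or any provider.

```python
from dataclasses import dataclass

@dataclass
class VMConfig:
    """Configuration parameters a VM type is measured by (values illustrative)."""
    cpu_cores: int
    memory_gb: int
    disk_gb: int
    idle_watts: float   # assumed baseline power draw when idle
    peak_watts: float   # assumed power draw at full utilization

def estimated_power(vm: VMConfig, utilization: float) -> float:
    """Common linear model: idle power plus a utilization-scaled dynamic share."""
    return vm.idle_watts + (vm.peak_watts - vm.idle_watts) * utilization

small = VMConfig(cpu_cores=2, memory_gb=4, disk_gb=50,
                 idle_watts=70.0, peak_watts=120.0)
print(estimated_power(small, 0.5))  # 95.0 W at 50% utilization
```

Under such a model, an energy-aware scheduler prefers packing load onto fewer, well-utilized VMs, since the idle term is paid per running machine.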

Figure 1. Concept of hypervisor-based virtualization

Power utility minimization in any computing model can be achieved by using a power-aware mechanism effectively. A widely used mechanism, dynamic voltage and frequency scaling (DVFS), dynamically tunes the energy-delay tradeoff. Adequate allotment of resources is crucial to improving the performance of cloud data centers. Therefore, to boost performance and address these problems, various methodologies such as constrained earliest-finish-time scheduling and DVFS have been adopted by many researchers in recent times. DVFS is a well-established energy consumption optimization for embedded and cloud systems, in which energy savings are realized by dynamically lowering the supply voltage.
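The energy-delay tradeoff that DVFS tunes follows from the standard CMOS dynamic-power relation P = C·V²·f: lowering frequency lengthens a task's runtime, but the accompanying voltage reduction cuts energy quadratically. The sketch below illustrates this with made-up capacitance, voltage, and frequency values; it is not a model from the paper.

```python
def dynamic_power(c_eff, voltage, freq_hz):
    """CMOS dynamic power: P = C_eff * V^2 * f (watts)."""
    return c_eff * voltage ** 2 * freq_hz

def energy_for_task(cycles, c_eff, voltage, freq_hz):
    """Energy = power * time, where execution time = cycles / f (joules)."""
    return dynamic_power(c_eff, voltage, freq_hz) * (cycles / freq_hz)

cycles = 2e9  # work required by the task, in CPU cycles (illustrative)

# High-performance operating point: 2 GHz at 1.2 V.
e_high = energy_for_task(cycles, c_eff=1e-9, voltage=1.2, freq_hz=2e9)
# Scaled-down point: 1 GHz at 0.9 V -- the task takes twice as long,
# but the V^2 term makes it cheaper in energy.
e_low = energy_for_task(cycles, c_eff=1e-9, voltage=0.9, freq_hz=1e9)

print(e_high, e_low)  # 2.88 J vs 1.62 J
```

Note that the frequency cancels out of the energy expression (E = C·V²·cycles), which is why DVFS savings come from the voltage reduction that a lower frequency permits, at the price of a longer makespan.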

The inter-processor communication cost is very high in existing methodologies. Thus, energy and run-time optimization techniques have been introduced to decrease energy consumption by scheduling varied task loads on many different embedded systems (Khan, 2012). In addition, various power-efficient state-of-the-art techniques and fundamental requirements of embedded systems have been presented to extend the performance of embedded computing (Singh et al., 2015).
