An Adaptive Push-Pull for Disseminating Dynamic Workload and Virtual Machine Live Migration in Cloud Computing

K. Jairam Naik
Copyright: © 2022 | Pages: 25
DOI: 10.4018/IJGHPC.301591

Abstract

Designing a dynamic load dissemination system for distributed computing environments has become a central problem in current research. When VMs become overloaded or a node fails, it is difficult to determine which VM should be selected for load exchange and how many VMs should migrate to correct the load imbalance. This work introduces a Hierarchical Adaptive Push-Pull system for disseminating dynamic workload and for live migration of VMs among Cloud resources. Under the Adaptive Push-Pull scheme, Cloud resources periodically pull workload, directly or through VM Managers, based on their load dynamics. In parallel, Cloud Resource Managers maintain status information for the Cloud resources and push workload only to those VMs with enough spare capacity to accept additional load. Together, the two mechanisms balance resources despite the complications of load management, and simulation results show reduced load deviation and scalable resource utilization.
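The push-pull interplay the abstract describes can be pictured with a minimal sketch. The thresholds, class names, and flat manager-to-VM layout below are illustrative assumptions; the paper's actual hierarchy (Cloud Resource Managers over VM Managers over VMs) and its migration logic are richer than this.

```python
PULL_THRESHOLD = 0.40  # assumed: a VM below this utilization pulls work
PUSH_THRESHOLD = 0.75  # assumed: managers push only to VMs below this

class VirtualMachine:
    def __init__(self, vm_id, capacity):
        self.vm_id, self.capacity, self.load = vm_id, capacity, 0.0

    def utilization(self):
        return self.load / self.capacity

class ResourceManager:
    """Keeps status information on its VMs and holds a backlog queue."""
    def __init__(self, vms):
        self.vms, self.backlog = vms, []

    def push(self, task_size):
        # Push: hand the task to the least-loaded VM with spare capacity,
        # or queue it when every VM is above the push threshold.
        capable = [v for v in self.vms if v.utilization() < PUSH_THRESHOLD]
        if capable:
            min(capable, key=lambda v: v.utilization()).load += task_size
        else:
            self.backlog.append(task_size)

    def pull_round(self):
        # Pull: lightly loaded VMs ask the manager for backlogged work.
        for vm in self.vms:
            while self.backlog and vm.utilization() < PULL_THRESHOLD:
                vm.load += self.backlog.pop(0)

mgr = ResourceManager([VirtualMachine(i, capacity=10.0) for i in range(2)])
for size in (8.0, 8.0, 5.0):   # the third task lands in the backlog
    mgr.push(size)
mgr.vms[0].load = 0.0           # VM 0 finishes its work...
mgr.pull_round()                # ...and pulls the backlogged task
print([v.load for v in mgr.vms], mgr.backlog)  # [5.0, 8.0] []
```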

1. Introduction

Virtualization (P. Barham et al., 2003) is one of the key techniques in computational cloud systems, offering advantages such as heterogeneous hardware abstraction, management convenience, and security isolation. Live migration of virtual machines (C. Clark et al., 2005; M. Nelson et al., 2005) is a defining capability of virtualization: moving a VM running on one physical host to another physical host. Owing to its use for load balancing (P. Padala et al., 2007; N. Bobroff et al., 2007) in data centers, live VM migration has become a powerful technique for fault tolerance (B. Cully et al., 2008; A.B. Nagarajan et al., 2007), online maintenance (Z. Zheng et al., 2013), and power management (R. Nathuji et al., 2007).

Within a data center, the source and target machines (servers) participating in a migration typically share the same storage, so only the state of virtual devices such as the virtual CPUs needs to be transferred. This sharing is made possible by a Storage Area Network (SAN), a dedicated, high-speed network of storage devices and servers. The prevailing approach to VM migration is pre-copy (C. Clark et al., 2005; M. Nelson et al., 2005), in which memory content is sent from the source node to the target node over several sequential iterations. In the nth iteration, only the pages dirtied during the (n−1)th iteration are transmitted. Pre-copy thereby pursues the goal of reduced downtime: the VM is suspended only briefly, during the final stop-and-copy round.
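The pre-copy loop can be made concrete with a short sketch. The toy VM model, the 5% per-round dirtying rate, and the helper names below are illustrative assumptions, not the algorithm's actual parameters; real implementations likewise cap the number of rounds before falling back to stop-and-copy.

```python
import random

class TargetHost:
    """Toy receiver; a real migration streams pages over the SAN."""
    def __init__(self):
        self.received = set()

class MigratingVM:
    """Toy VM: a set of memory page numbers, some of which keep
    getting rewritten (dirtied) while each copy round is in flight."""
    def __init__(self, n_pages=1000):
        self.pages = set(range(n_pages))
        self.suspended = False

    def pages_dirtied_during_round(self):
        if self.suspended:
            return set()
        # assumption: ~5% of pages are written during each round
        return set(random.sample(sorted(self.pages), len(self.pages) // 20))

def precopy_migrate(vm, target, max_rounds=30, dirty_limit=10):
    dirty = set(vm.pages)                 # round 0: copy all memory pages
    for _ in range(max_rounds):
        target.received.update(dirty)     # iterative pre-copy round
        dirty = vm.pages_dirtied_during_round()
        if len(dirty) <= dirty_limit:     # writable working set now small
            break
    vm.suspended = True                   # brief downtime starts here
    target.received.update(dirty)         # final stop-and-copy round
    # downtime ends once the target host resumes the VM

vm, host = MigratingVM(), TargetHost()
precopy_migrate(vm, host)
print(len(host.received))  # 1000: every page reached the target
```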

For cloud organizations, deploying networked virtualization gives infrastructure providers service-level benefits. Each physical server in the cloud can host multiple VMs, and each VM can hold multiple application processes. Servers can be utilized effectively by substantially reducing the load deviation, makespan, failure tendency, and overall energy consumption of the system. These VM-based methods improve the successful task execution rate through service multiplexing and increased resource usage.
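The load deviation metric referenced here is not defined in this preview; a common formulation, assumed below, is the standard deviation of per-server utilization.

```python
import statistics

def load_deviation(utilizations):
    """Assumed metric: population standard deviation of per-server
    utilization; lower values mean a more even load spread."""
    return statistics.pstdev(utilizations)

# A balanced cluster scores lower than a skewed one:
print(load_deviation([0.52, 0.48, 0.50, 0.50]))  # ~0.014
print(load_deviation([0.95, 0.10, 0.60, 0.35]))  # ~0.31
```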
