Performance of Memory Virtualization Using Global Memory Resource Balancing

Pvss Gangadhar, Ashok Kumar Hota, Mandapati Venkateswara Rao, Vedula Venkateswara Rao
Copyright: © 2019 |Pages: 17
DOI: 10.4018/IJCAC.2019010102

Abstract

Virtualization has become a ubiquitous abstraction layer in contemporary data centers. By multiplexing hardware resources into multiple virtual machines and allowing several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and floor space, or improve security by isolating virtual machines. In a virtualized system, memory resource management plays a decisive role in achieving high resource utilization and performance. Insufficient memory allocation to a virtual machine degrades its performance drastically; conversely, over-allocation wastes memory resources. Meanwhile, a virtual machine's memory demand may vary drastically over time. Consequently, effective memory resource management calls for a dynamic memory balancer which, ideally, can adjust memory allocation in a timely manner for each virtual machine based on its current memory demand, and thereby achieve the best memory utilization and optimal overall performance. Migrating operating system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters: it permits a clean separation between hardware and software and eases fault management. To estimate the memory demand of each virtual machine and to arbitrate potential memory resource contention, a widely proposed approach is to construct a Least Recently Used (LRU)-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. In this paper, the authors first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations, including an AVL-tree-based LRU organization and dynamic hot set sizing.
The evaluation confirms that, for the complete SPEC CPU 2006 benchmark suite, after applying the three optimization techniques, the mean overhead of MRC construction is reduced from 173% to only 2%. Based on the current WSS, the authors then predict its trend in the near future and take different tactics for different forecast results. When there is a sufficient amount of physical memory on the host, the host locally balances its memory resources among the VMs. When local memory is insufficient and the memory pressure is predicted to persist for a sufficiently long time, VM live migration is used to move one or more VMs from the overloaded host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. The experimental results show that this design achieves a 49% center-wide speedup.
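To make the MRC idea concrete, the following Python sketch builds an LRU miss ratio curve with the classic Mattson stack algorithm. This is a naive list-based version for illustration (the function name and example trace are not from the paper); the AVL-tree optimization mentioned above replaces the linear stack-distance lookup with a logarithmic one.

```python
def miss_ratio_curve(trace):
    """Build an LRU miss ratio curve (MRC) with Mattson's stack algorithm.

    Each reference's stack distance is the page's current depth in an
    LRU-ordered list; an m-page LRU cache hits iff that distance < m.
    """
    stack = []          # index 0 = most recently used page
    hist = {}           # stack distance -> number of references
    for page in trace:
        if page in stack:
            d = stack.index(page)           # O(n) here; O(log n) with an AVL tree
            hist[d] = hist.get(d, 0) + 1
            stack.pop(d)
        # first-touch references are cold misses at every cache size
        stack.insert(0, page)               # move/insert at the MRU position
    total = len(trace)
    mrc, hits = [], 0
    for m in range(len(stack) + 1):
        mrc.append((total - hits) / total)  # miss ratio with an m-page cache
        hits += hist.get(m, 0)
    return mrc

# e.g. miss_ratio_curve([1, 2, 3, 1, 2, 3, 4, 1]) -> [1.0, 1.0, 1.0, 0.625, 0.5]
```

The curve directly yields the WSS (the smallest size where the miss ratio flattens) and predicts the performance impact of any candidate allocation.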
Article Preview

1. Introduction

Virtualization is becoming pervasive in massive data centers, cloud computing, and enterprise infrastructure, motivated by a number of significant benefits, such as dramatic cost reduction, increased application availability, and more efficient IT management. According to Gartner, today, 25% of installed server workloads are virtualized. IDC even forecasts that, by 2014, more than 70% of applications on newly deployed servers will run in virtual machines. However, in a virtualized environment, efficient and effective memory resource management is still a challenging problem. In this paper we propose a memory resource balancing method to improve performance and memory resource utilization for center-wide virtualized computing. We show that our solution can accurately monitor the memory demand of each virtual machine with very low overhead and can effectively improve overall system performance. Virtualization technologies like Xen (Anselmi, Amaldi, & Cremonesi, 2008), VMware (Tam, Azimi, Soares, & Stumm, 2009), and Denali (Yang, Hertz, Berger, Kaplan, & Moss, 2004) have become a common abstraction layer in contemporary data centers. They enable multiple operating systems to run in their own virtual machines independently. Figure 1 illustrates an example, where the hypervisor multiplexes the hardware of a single physical machine among several virtual machines, and a guest operating system executes inside each virtual machine independently. One of the major benefits of virtualization is server consolidation. It is not unusual to achieve a 15-to-1 or even higher consolidation ratio (Moltó, Caballer, Romero, & de Alfonso, 2014). For a data center that hosts a large number of servers, this can significantly reduce power consumption, floor space occupation, and air conditioning costs. In addition, virtualization can improve availability through live migration (Barham, Dragovic, Fraser et al., 2003).
When one physical server fails or requires maintenance, the virtual machines it hosts can be transparently migrated to another physical machine with negligible application downtime. The core of virtualization is the virtual machine monitor (VMM), also called the hypervisor. The VMM is responsible for creating and managing multiple instances of virtual hardware platforms. Some physical resources, such as CPUs or network interface cards, can be multiplexed in a time-sharing manner. The memory system, however, is shared through address space partitioning: each virtual machine is allocated a fixed amount of physical memory address space. Unlike how a native operating system manages virtual and physical memory for its processes, the VMM, for the sake of transparency, is not actively involved in the memory management of each virtual machine. More specifically, when created, each VM is allocated a fixed amount of physical memory, and it is then the guest operating system's job to manage that memory without the involvement of the hypervisor. As a result, the hypervisor is unaware of the memory demand of the VMs and unable to dynamically balance memory resources. In our solution, we first design a low-cost but accurate LRU-based working set size tracking scheme as the basis of memory resource balancing. The LRU-based WSS model correlates memory allocation size with performance impact. Based on this model, we propose a local memory balancing method, which dynamically adjusts memory allocation via ballooning (Anselmi, Amaldi, & Cremonesi, 2008; Sapuntzakis, Chandra, Pfaff, Chow, Lam & Rosenblum, 2002) on a single physical machine. It is then extended to a global setting, where the physical memory of all interconnected machines is balanced via live migration and remote caching.
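As an illustration of how per-VM miss ratio curves can drive local balancing, the sketch below greedily hands out memory in chunks, each time to the VM whose miss ratio would drop the most; the resulting targets would then be enforced through each guest's balloon driver. This is a hypothetical sketch of the general idea, with invented names, not the exact algorithm used in this work.

```python
def balance_memory(mrcs, total_pages, chunk=1):
    """Greedy local balancer sketch.

    mrcs: one miss ratio curve per VM, where mrc[m] is the miss ratio
    with m pages allocated.  Returns the pages allocated to each VM.
    """
    alloc = [0] * len(mrcs)
    for _ in range(0, total_pages, chunk):
        # Marginal benefit of giving each VM one more chunk.
        gains = []
        for i, mrc in enumerate(mrcs):
            cur = mrc[min(alloc[i], len(mrc) - 1)]
            nxt = mrc[min(alloc[i] + chunk, len(mrc) - 1)]
            gains.append(cur - nxt)
        # Award the chunk to the VM whose miss ratio drops the most.
        best = max(range(len(mrcs)), key=lambda i: gains[i])
        alloc[best] += chunk
    return alloc
```

For example, a VM whose MRC has already flattened (its working set fits) stops receiving memory, and the surplus flows to VMs still missing in memory.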
To the best of our knowledge, our work uniquely coordinates global memory balancing with a local balancing scheme. Figure 2 shows an overview of our solution.
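The three tactics described in the abstract (local ballooning when the host has enough memory, live migration when pressure is predicted to persist, and remote caching for transient pressure) can be summarized as a small policy function. The names, units, and threshold below are assumptions for illustration only.

```python
def choose_action(predicted_demand, host_capacity, predicted_duration_s,
                  sustain_threshold_s=60):
    """Policy sketch: pick a balancing tactic from predicted memory pressure.

    predicted_demand, host_capacity: pages (or any consistent unit);
    predicted_duration_s: how long the pressure is forecast to last.
    """
    # Enough local memory: rebalance among co-located VMs via ballooning.
    if predicted_demand <= host_capacity:
        return "local_balance"
    # Overcommitted and the pressure is forecast to persist: migrate a VM away.
    if predicted_duration_s >= sustain_threshold_s:
        return "live_migration"
    # Short-lived pressure: absorb it with a remote cache.
    return "remote_cache"
```

The threshold separating "sustained" from "transient" pressure is a tunable that trades migration cost against the performance penalty of remote caching.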
