Proportional Allocation of Resources on Shared Ring Buffer for Virtualization

Wenzhi Cao, Hai Jin, Xia Xie
Copyright: © 2012 | Pages: 19
DOI: 10.4018/ijcac.2012040102

Abstract

In virtualization, without proper scheduling and resource management, a load surge in one VM may unacceptably degrade the performance of another. A key requirement of IO virtualization is therefore performance isolation, yet the current state of performance isolation in virtualized systems remains rudimentary. This paper presents a resource allocation framework, based on an abstracted Xen IO model, that provides both throughput and fairness guarantees for network and storage devices via 2-level resource control. The high-level tier makes guest domains perceive the state of the resource through resource control on the shared ring buffer; it uses public and private token buckets to remain work-conserving and to distribute any spare bandwidth fairly among the VMs. The low-level tier is intended to meet the fairness guarantee and computes the token quantity of each bucket with a feedback-driven scheduler. The authors demonstrate analytically and experimentally that their framework is work-conserving and achieves fairness and high adaptability.

1. Introduction

In a virtualized environment, multiplexing virtual machines (VMs) onto shared infrastructure helps to improve resource efficiency and flexibility. Resource sharing creates the demand to provide each VM with the illusion of owning dedicated physical resources in terms of CPU, memory, network, and storage IO. Without proper scheduling and resource management, a load surge in one VM may unacceptably degrade the performance of another. Thus, strong isolation is needed for the successful consolidation of VMs with diverse requirements on a shared infrastructure.

However, the current state of the art in performance isolation for virtualization is rudimentary. Current research often focuses on limiting the amount of resources allocated to each VM. In Barham et al. (2003), Xen limits VM network traffic based on credit. Because the allocation is static, a VM cannot use spare bandwidth once its resource usage reaches the upper limit; this wastes resources and is not work-conserving. Some research works (Gupta et al., 2006) adjust the CPU allocation of a VM in order to control its IO resource allocation, but these methods are coarse-grained: they bind CPU, network, and storage resources together, so that the allocations affect each other. All of these approaches, which are designed to support performance isolation among VMs through resource limits, hardly guarantee fairness among VMs even though the amount of resources allocated to each VM is bounded. VMs generate requests independently, and the driver domain then dispatches requests from each VM in first-in-first-out (FIFO) order, which ignores the weights of the VMs. To provide a fairness guarantee, other research works focus on scheduling, such as VIOS (Seelam & Teller, 2007). In general, IO performance isolation in virtualization needs: (i) to allocate resources dynamically, so as to be work-conserving; and (ii) to guarantee fairness among VMs, ensuring that each VM gets enough resources, e.g., bandwidth.
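To make requirement (i) concrete: a static per-VM rate limit can be made work-conserving by backing each VM's private budget with a shared public budget that holds spare bandwidth. The sketch below is an illustrative Python model, not the paper's implementation; the class and function names (`TokenBucket`, `admit`) are our own.

```python
import time

class TokenBucket:
    """Simple token bucket: tokens refill at `rate` per second up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def consume(self, n):
        """Take n tokens if available; return True on success."""
        self.refill()
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

def admit(request_cost, private, public):
    """Work-conserving admission: charge the VM's private bucket first,
    then borrow spare bandwidth from the shared public bucket."""
    return private.consume(request_cost) or public.consume(request_cost)
```

With only private buckets, a VM that exhausts its budget stalls even when the device is idle; the public bucket lets it borrow the unused capacity, which is the work-conserving behavior the paper requires.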

On the other hand, IO performance isolation in a virtualized environment introduces new challenges compared with managing other computing environments. First, virtualization is built on the abstraction of resources: the guest domain is never aware of the actual characteristics of the resources; instead, it gets a virtual view of them. Second, virtualization technology uses a special IO model. In Xen, both the driver domain and the split driver handle the IO requests of VMs, and this holds for network, storage, and other devices alike. Few current research works on performance isolation consider these factors in virtualized environments.

Based on the above, we regard the physical device driver as a black box and abstract the Xen IO model. Moreover, we allocate resources on the shared ring buffer for fine-grained control of IO resources. We present an interposed 2-level scheduling framework based on the Xen IO model that can ensure both throughput and fairness guarantees for different VMs in a virtualized environment. The high-level tier makes guest domains aware of the actual characteristics of the resource through the state of the ring buffer. Besides, it uses a leaky-bucket mechanism as a bandwidth controller to regulate the streams of requests so that they are insulated from each other; it adds a public bucket to provide work-conservation and tries to distribute any spare bandwidth fairly among the different users. The low-level tier has two functions: one is to provide the fairness guarantee; the other is to support the high-level tier through a feedback mechanism. It uses a fair queuing scheduler, i.e., the SFQ algorithm, to schedule requests based on the weights of the VMs, and then uses feedback from monitoring the underlying system to regulate requests. This mechanism is rigorous, based on a careful combination of statistical evaluation of the system and fair queuing theory, which increases its flexibility and adaptability and sets it apart from other performance isolation methods in virtualized environments.
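The low-level tier's weighted scheduling can be illustrated with the standard SFQ idea: each request receives a start tag max(virtual time, flow's last finish tag) and a finish tag start + cost/weight, and requests are served in start-tag order. The Python sketch below shows the textbook algorithm, not the paper's in-Xen implementation; the class name `SFQ` and its interface are our own.

```python
import heapq
import itertools

class SFQ:
    """Start-time Fair Queuing sketch: serve requests in order of
    start tags so that service is shared in proportion to flow weights."""
    def __init__(self, weights):
        self.weights = weights                 # flow id -> weight
        self.finish = {f: 0.0 for f in weights}
        self.vtime = 0.0                       # virtual time
        self.heap = []
        self.seq = itertools.count()           # tie-breaker for equal tags

    def enqueue(self, flow, cost, req):
        start = max(self.vtime, self.finish[flow])
        self.finish[flow] = start + cost / self.weights[flow]
        heapq.heappush(self.heap, (start, next(self.seq), flow, req))

    def dispatch(self):
        if not self.heap:
            return None
        start, _, flow, req = heapq.heappop(self.heap)
        self.vtime = start   # virtual time tracks the request in service
        return flow, req
```

With weights 2:1 and equal-cost requests from two VMs, the first VM's start tags advance half as fast, so over any busy interval it is dispatched roughly twice as often, which is the proportional-share behavior the framework targets.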

The remainder of the paper is organized as follows. In Section 2 we introduce fair queuing and proportional-sharing mechanisms and summarize previous work in virtualization and native OSes. Section 3 abstracts the Xen IO model, and Section 4 gives an overview of our framework, together with implementation details of its two levels. Section 5 discusses the quality guarantees of our framework. Section 6 presents results from our evaluation. Finally, Section 7 summarizes the contributions and outlines directions for future work.
