Performance Management on Cloud Using Multi-Priority Queuing Systems


A. Madankan, A. Delavar Khalfi
Copyright: © 2015 | Pages: 16
DOI: 10.4018/978-1-4666-8676-2.ch015

Abstract

Cloud computing is known as a new trend in computing resource provisioning. Requests entering the cloud form a queue, so each user has to wait until the current user has been served. In this model, web applications are modeled as queues and virtual machines as service centers. An M/M/K model is used for multiple-priority, multiple-server systems with preemptive priorities. To achieve this, the chapter distinguishes two groups of priority classes, where each class includes multiple item types, each having its own arrival and service rate. It derives an approximate method to estimate the steady-state probabilities. Based on these probabilities, it derives approximations for a wide range of relevant performance characteristics, such as the expected postponement time for each item class and the first and second moments of the number of items of a certain type in the system.

1. Introduction

Cloud computing has been an emerging technology for provisioning computing resources and providing the infrastructure of web applications in recent years. It greatly lowers the threshold for deploying and maintaining web applications, since it provides infrastructure as a service (IaaS) and platform as a service (PaaS). Consequently, a number of web applications, particularly those of small and medium enterprises, have been built in cloud environments. Meanwhile, leading IT companies have established public commercial clouds as a new kind of investment. For example, Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud and is designed to make web-scale computing easier for developers. Google App Engine enables enterprises to build and host web applications on the same systems that power Google's own applications; it offers fast development and deployment, simple administration with no need to worry about hardware, patches, or backups, and effortless scalability. IBM also provides cloud options: whether one chooses to build private clouds, use the IBM cloud, or create a hybrid cloud that includes both, these secure workload solutions provide superior service management and new choices for deployment. One can even establish a private cloud with Ubuntu Enterprise Cloud to offer immediacy and elasticity in the infrastructure of web applications. In summary, both the number of cloud applications and the number of cloud providers have kept growing for several years. As a result, computing resource scheduling and performance management have become among the most important aspects of cloud computing.

Since no standard model has yet been widely accepted by industry, scaling up and down remains an open issue for researchers. Cloud providers such as Amazon, IBM, and Google have their own mechanisms, which are commercial and inherited from their existing proprietary technology. Researchers from universities and institutes have also proposed models and methods. For example, in (K. Xiong and H. Perros, 2009), the authors present results on predicting system performance based on machine learning, obtained in the RAD Lab of the University of California at Berkeley. The existing solutions for scaling up and down are designed using various techniques, such as statistical methods, machine learning, and queuing theory.

Aware of the advantages and disadvantages of these solutions, we propose a queuing-based model for performance management on the cloud. In this chapter, we show how the M/M/K queuing model is used for multiple-priority, multiple-server systems with preemptive priorities. To achieve this, we distinguish two groups of priority classes, where each class includes multiple item types, each having its own arrival and service rate. We derive an approximate method to estimate the steady-state probabilities, with an approximation error that can be made as small as desired at the expense of additional numerical matrix iterations. Based on these probabilities, we derive approximations for a wide range of relevant performance characteristics, such as the expected postponement time for each item class and the first and second moments of the number of items of a certain type in the system.
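
To make the setting concrete, the sketch below numerically computes steady-state probabilities for a small two-class preemptive-priority M/M/K system by building the generator matrix of the underlying Markov chain on a truncated state space and solving the balance equations directly. All parameters (number of servers K, arrival rates lam1/lam2, service rates mu1/mu2, truncation level N) are illustrative assumptions, and the direct numerical solve shown here only stands in for, rather than reproduces, the approximation method developed in the chapter.

import numpy as np

# Illustrative parameters (not taken from the chapter)
K = 3                     # number of virtual machines (servers)
lam1, lam2 = 1.5, 1.0     # arrival rates: high- / low-priority jobs
mu1, mu2 = 1.2, 1.0       # service rates: high- / low-priority jobs
N = 40                    # truncation level for each queue length

def state_index(n1, n2):
    # State (n1, n2): n1 high-priority and n2 low-priority jobs in the system
    return n1 * (N + 1) + n2

size = (N + 1) * (N + 1)
Q = np.zeros((size, size))

for n1 in range(N + 1):
    for n2 in range(N + 1):
        i = state_index(n1, n2)
        # Preemptive priority: high-priority jobs occupy servers first,
        # low-priority jobs may only use the servers that remain.
        busy1 = min(n1, K)
        busy2 = min(n2, K - busy1)
        if n1 < N:                                   # high-priority arrival
            Q[i, state_index(n1 + 1, n2)] += lam1
        if n2 < N:                                   # low-priority arrival
            Q[i, state_index(n1, n2 + 1)] += lam2
        if n1 > 0:                                   # high-priority departure
            Q[i, state_index(n1 - 1, n2)] += busy1 * mu1
        if n2 > 0:                                   # low-priority departure
            Q[i, state_index(n1, n2 - 1)] += busy2 * mu2
        Q[i, i] = -Q[i].sum()                        # generator diagonal

# Solve pi Q = 0 together with the normalization sum(pi) = 1
A = np.vstack([Q.T, np.ones(size)])
b = np.zeros(size + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

pi = pi.reshape(N + 1, N + 1)
EL1 = sum(n1 * pi[n1, :].sum() for n1 in range(N + 1))
EL2 = sum(n2 * pi[:, n2].sum() for n2 in range(N + 1))
print(f"E[high-priority jobs] ~ {EL1:.3f}, E[low-priority jobs] ~ {EL2:.3f}")

From the steady-state vector pi, other quantities mentioned above (waiting or postponement times, second moments of the number of items of each type) follow in the same way, e.g. via Little's law.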

A cloud computing system generally has to handle multiple classes of incoming jobs, which we classify as either high-priority or low-priority. Each job type has its own arrival rate and service time distribution. As a consequence, we model cloud computing as a (multi-server) priority queuing system with two priority classes, where each class consists of multiple subclasses (item types). We develop our own algorithm in this chapter, assuming Poisson arrivals and exponential service times.
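
As a point of reference for these assumptions: under preemptive priority the high-priority class is unaffected by low-priority work, so if all high-priority items shared a common exponential service rate $\mu$, that class alone would behave as an ordinary M/M/K queue, whose delay probability and expected waiting time are given by the classical Erlang C formula:

$$
P_{\text{wait}} = \frac{\dfrac{(K\rho)^K}{K!}\,\dfrac{1}{1-\rho}}{\displaystyle\sum_{n=0}^{K-1}\frac{(K\rho)^n}{n!} + \frac{(K\rho)^K}{K!}\,\frac{1}{1-\rho}},
\qquad
E[W_{\text{high}}] = \frac{P_{\text{wait}}}{K\mu - \lambda},
$$

where $\lambda$ is the aggregate high-priority arrival rate and $\rho = \lambda/(K\mu) < 1$. The approximation developed in this chapter is needed precisely because the item types within each class have different arrival and service rates, so this simple closed form does not apply directly.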
