Optimal Resource Provisioning in Federated-Cloud Environments

Veena Goswami (KIIT University, India) and Choudhury N. Sahoo (KIIT University, India)
Copyright: © 2015 |Pages: 18
DOI: 10.4018/978-1-4666-8676-2.ch007


Cloud computing has emerged as a new paradigm for accessing distributed computing resources, such as infrastructure, hardware platforms, and software applications, on demand over the Internet as services. Multiple clouds can collaborate to integrate different service models or service providers for end-to-end requirements. Intercloud federation and service delegation models are part of the multi-cloud environment, whose broader target is a virtually infinite pool of resources. Each service model caters to a specific type of requirement, and a number of providers already offer their own customized products and services. This chapter presents an optimal resource management framework for federated-cloud environments. The authors propose an analytical queueing network model to improve the efficiency of the system. Numerical results indicate that the proposed provisioning technique detects changes in arrival patterns and resource demands that occur over time, and allocates multiple virtualized IT resources accordingly to achieve application QoS targets.

1. Introduction

Cloud computing is a general term for system architectures that involve delivering hosted services over the Internet. Cloud computing services are offered on a pay-as-you-go basis and promise considerable reductions in hardware and software investment costs, as well as energy costs. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), in which hardware, storage, servers, and networking components are made accessible over the Internet; Platform-as-a-Service (PaaS), which includes computing platforms (hardware with operating systems, virtualized servers, and the like); and Software-as-a-Service (SaaS), which includes software applications and other hosted services. A cloud service differs from traditional hosting in three principal aspects. First, it is provided on demand, typically by the minute or the hour; second, it is elastic, since users can have as much or as little of a service as they want at any given time; and third, the service is fully managed by the provider (Brunette and Mogull, 2009; Mell and Grance, 2009; Vaquero et al., 2009).

Large service centers have been set up to provide comprehensive services by sharing IT resources among clients. Companies often outsource their IT infrastructure to third-party service providers to reduce management costs, which leads to more efficient use of resources and lower operating costs. Service providers and their clients often negotiate utility-based Service Level Agreements (SLAs), and providers manage their resources under these agreements to maximize profit. Ardagna et al. (2005) proposed an SLA-based profit optimization in multi-tier systems. Utility-based optimization approaches provide load balancing and obtain the best trade-off between job classes with respect to Quality of Service levels.

Efficiently managing cloud resources and maintaining Service Level Agreements for cloud services is an enormous challenge. Performance virtualization techniques, which aim to deliver effective computer-service performance subject to QoS metrics such as response time, throughput, and network utilization, have been extensively studied (Slothouber, 1995; Karlapudi and Martin, 2004; Lu and Wang, 2005). Slothouber (1995) employed an open queueing network to model the behavior of Web servers on the Internet. Karlapudi and Martin (2004) studied a Web application tool for predicting the performance of Web applications between specified end-points. Cloud centers serve as the enabling platform for dynamic and flexible application provisioning by exposing a data center's capabilities as a network of virtual services. Hence, users can access and deploy applications from any place on the Internet, driven by demand and Quality of Service (QoS) requirements (Buyya et al., 2009). By using clouds as the application hosting platform, IT companies are freed from the trivial task of setting up basic hardware and software infrastructures, and can thus focus more on innovation and the creation of business value for their application services (Armbrust et al., 2010). An optimal resource management framework for multi-cloud computing environments has been presented in (Goswami and Sahoo, 2013).

This chapter focuses on an analytical model through which Quality of Service (QoS) is ensured by obtaining important performance indicators such as the mean request response time, the blocking probability, the probability of immediate service, and the probability distribution of the number of tasks in the system. The model allows cloud operators to tune parameters such as the number of servers on one side against the values of the blocking probability and the probability that a task request obtains immediate service on the other. Successful provision of cloud services, and consequently widespread adoption of cloud computing, necessitates accurate performance evaluation that allows service providers to dimension their resources so as to fulfill the Service Level Agreements with their customers.
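As an illustration of the kind of indicators such an analytical model yields, the sketch below computes the blocking probability, the probability of immediate service, and the mean response time for a finite-capacity multi-server queue using the standard M/M/c/K formulas. Note that this is only a simplified stand-in: the chapter's actual queueing network model is not reproduced in this preview, and the function name, rates, and capacities here are hypothetical.

```python
from math import factorial

def mmck_metrics(lam, mu, c, K):
    """Steady-state metrics for an M/M/c/K queue (a simplified sketch).

    lam: task arrival rate; mu: per-server service rate;
    c: number of servers (e.g. VMs); K: total system capacity, K >= c.
    Returns (blocking probability, probability of immediate service,
    mean response time of accepted tasks).
    """
    a = lam / mu  # offered load in Erlangs
    # Unnormalized state probabilities: a^n/n! for n <= c,
    # then geometric decay with ratio a/c for c < n <= K.
    terms = [a**n / factorial(n) for n in range(c + 1)]
    terms += [terms[c] * (a / c) ** (n - c) for n in range(c + 1, K + 1)]
    p0 = 1.0 / sum(terms)
    p = [p0 * t for t in terms]

    p_block = p[K]                    # arriving task finds the system full
    p_immediate = sum(p[:c])          # arriving task finds a free server
    L = sum(n * pn for n, pn in enumerate(p))  # mean number in system
    lam_eff = lam * (1 - p_block)     # effective (accepted) arrival rate
    W = L / lam_eff                   # mean response time (Little's law)
    return p_block, p_immediate, W
```

For example, with 10 servers, capacity 20, arrival rate 8, and unit service rate, the model reports a small blocking probability and a mean response time of at least one mean service time; adding servers drives the blocking probability down, which is exactly the trade-off an operator would tune.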
