Location and Provisioning Problems in Cloud Computing Networks

Federico Larumbe, Brunilde Sansò
Copyright: © 2014 | Pages: 24
DOI: 10.4018/978-1-4666-4522-6.ch002

Abstract

This chapter addresses a set of optimization problems that arise in cloud computing regarding the location and resource allocation of the cloud entities: data centers, servers, software components, and virtual machines. The first problem is the location of new data centers and the selection of existing ones, since those decisions have a major impact on network efficiency, energy consumption, Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and pollution. The chapter also addresses the Virtual Machine Placement Problem: deciding which server should host which virtual machine. The number of servers used, the cost, and the energy consumption depend strongly on those decisions. Network traffic between VMs and users, and between VMs themselves, is also an important factor in the Virtual Machine Placement Problem. The third problem presented in this chapter is the dynamic provisioning of VMs to clusters, or auto scaling, to minimize cost and energy consumption while satisfying the Service Level Agreements (SLAs). This important feature of cloud computing requires predictive models that accurately anticipate workload dimensions. For each problem, the authors describe and analyze models that have been proposed in the literature and in the industry, explain their advantages and disadvantages, and present challenging future research directions.
Chapter Preview

Introduction

The distributed nature of cloud computing implies that the application’s efficiency is inherently related to the network infrastructure. From a user’s smartphone to the data center containing the cloud servers, the infrastructure includes wireless routers, cellular antennas, Optical Cross-Connects (OXCs), optical repeaters, IP routers, traffic load balancers, tablets, laptops, desktop computers, ADSL modems, and cable modems. Software components such as Web browsers, virtual machines, Web services, mail services, cache software, file servers, Hadoop clusters, databases, and search engines are executed at the data centers, and the messages exchanged between these components produce network traffic and server workloads. The actors of the network—users, Internet providers, cloud data center operators, and software providers—enter into Service Level Agreements (SLAs) that specify the desired quality of service.

The extensive use of online applications allows users to have constant access to information. The drawback is a growing number of servers in data centers, with the corresponding energy consumption and CO2 emissions. In fact, the average data center consumes as much energy as 25,000 households, and data center CO2 emissions are predicted to double by 2020 (Brown et al., 2007; Buyya, Beloglazov, & Abawajy, 2010). Cloud data centers offer several potential advantages over regular data centers regarding energy consumption. Virtualization is the key mechanism that allows better server utilization: consolidating applications onto fewer servers can greatly reduce energy consumption. Also, dynamically scaling the number of required VMs may reduce over-provisioning.

Another fundamental aspect of cloud applications is the quality of service. Moving applications into data centers may degrade the response time, since information must travel through a path of links and routers between the user device and the cloud data center.
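
As a rough illustration (the notation here is introduced for this sketch, not taken from the chapter), the response time perceived by a user can be decomposed along the path $P$ of links between the user device and the data center:

$$T_{\text{response}} \;\approx\; T_{\text{server}} \;+\; \sum_{l \in P} \left( d^{\text{prop}}_{l} + d^{\text{trans}}_{l} + d^{\text{queue}}_{l} \right),$$

where $d^{\text{prop}}_{l}$, $d^{\text{trans}}_{l}$, and $d^{\text{queue}}_{l}$ are the propagation, transmission, and queuing delays of link $l$, and $T_{\text{server}}$ is the processing time in the data center. The propagation term grows with the physical distance between the user and the data center, which is why the placement decisions discussed below directly affect the quality of service.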

In this context, three important optimization problems are tackled in this chapter. They share the main goal of optimizing energy consumption while providing quality of service. They are the:

  1. Data Center Location Problem,
  2. Virtual Machine Placement Problem, and
  3. Auto Scaling Problem.

The Data Center Location Problem involves the selection of a subset of potential data centers (existing or to be deployed) to host the software components. This problem is of fundamental importance because of the distributed nature of cloud computing applications and the impact of data center location on the end-to-end delay: the closer cloud applications are to the users, the smaller the delay they experience. Cloud providers take that into account by locating data centers in multiple regions and letting users decide where to place their applications. A good example of this approach taken to the extreme is Akamai, which operates more than 1,000 small data centers around the world (Nygren, Sitaraman, & Sun, 2010). Furthermore, the increasing use of data centers requires renewable energy to build an ecologically sustainable system: in 2006, American data centers already consumed 1.5% of the total energy in the US, the equivalent of 5.8 million households (Brown et al., 2007). Cost is also an aspect of major importance because different locations have different energy and land prices. Delay, CO2 emissions, and cost may be conflicting objectives, which makes the location of cloud data centers a challenging planning problem; a sketch of how these elements fit together in a facility-location formulation is given below.
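
A minimal sketch of such a formulation, stated here for illustration only (the sets, variables, and weights below are ours, not the chapter's model): let $J$ be the set of candidate data center sites, each with an opening cost $f_j$ that aggregates CAPEX, OPEX, energy, and land prices, and let $I$ be the set of user groups with demands $h_i$. Binary variables $y_j$ indicate whether site $j$ is opened, and $x_{ij}$ whether user group $i$ is served from site $j$; $d_{ij}$ is the network delay and $c_{ij}$ the per-unit serving cost between $i$ and $j$, with $\alpha$ a weight trading cost against delay:

$$\min \; \sum_{j \in J} f_j\, y_j \;+\; \sum_{i \in I} \sum_{j \in J} h_i \left( c_{ij} + \alpha\, d_{ij} \right) x_{ij}$$

$$\text{subject to} \quad \sum_{j \in J} x_{ij} = 1 \;\; \forall i \in I, \qquad x_{ij} \le y_j \;\; \forall i \in I,\, j \in J, \qquad x_{ij},\, y_j \in \{0,1\}.$$

CO2 emissions can be added as a third weighted term in the objective (emissions per unit of load at each site) or as a cap on total emissions, and capacity constraints of the form $\sum_{i \in I} h_i\, x_{ij} \le u_j\, y_j$ would bound the load that an opened site can host.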
