Self-Management of Applications and Systems to Optimize Energy in Data Centers

Frederico Alvares de Oliveira (ASCOLA Research Team (INRIA-Mines Nantes, LINA), France), Adrien Lèbre (ASCOLA Research Team (INRIA-Mines Nantes, LINA), France), Thomas Ledoux (ASCOLA Research Team (INRIA-Mines Nantes, LINA), France) and Jean-Marc Menaud (ASCOLA Research Team (INRIA-Mines Nantes, LINA), France)
DOI: 10.4018/978-1-4666-1631-8.ch019

Abstract

As a direct consequence of the increasing popularity of cloud computing solutions, data centers are growing at an impressive rate and must therefore urgently address the issue of energy consumption. Available solutions focus mainly on the system layer, leveraging virtualization technologies to improve energy efficiency. Another body of work relies on cloud computing models and virtualization techniques to scale applications up or down based on their performance metrics. Although these proposals can reduce the energy footprint of applications and, by transitivity, of cloud infrastructures, they do not take the internal characteristics of applications into account to finely define a trade-off between application Quality of Service and energy footprint. In this paper, the authors propose a self-adaptation approach that considers both application internals and the system in order to reduce the energy footprint of cloud infrastructures. Each application and the infrastructure are equipped with control loops, which allow them to autonomously optimize their execution. The authors implemented the control loops and simulated them in order to show their feasibility. In addition, the chapter shows how the solution fits into federated clouds through a motivating scenario. Finally, it discusses open issues concerning the models and the implementation of the proposal.

Introduction

Over the last few years, cloud computing has received a lot of attention in both industry and academia. First, from the application provider's point of view, cloud computing makes it possible to request and release resources precisely, on the fly. This means that the infrastructure provider can deliver computational and storage resources in a flexible and elastic manner, while charging the application only for what it actually consumes (Mirashe & Kalyankar, 2010). This is particularly interesting for applications that must cope with a highly variable workload (e.g. web applications). Second, from the infrastructure provider's point of view, cloud computing has proven to be a very powerful model for mutualizing resources and thus tackling the problem of energy consumption in IT infrastructures. Techniques like virtualization enable the provisioning of resources through virtual machines that can be placed on the same physical machine. As a consequence, the workload of several applications can be consolidated onto a few physical machines, which makes it possible to turn some of them off and hence reduce energy consumption (Hermenier, Lorca, Menaud, Muller & Lawall, 2009).
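As an illustration (not part of the chapter itself), the consolidation idea can be sketched as a simple first-fit-decreasing packing of VM loads onto hosts: hosts left empty after packing are candidates for being powered off. All loads, capacities, and host counts below are hypothetical.

```python
def consolidate(vm_loads, host_capacity, host_count):
    """Pack VM loads (fractions of a host's CPU capacity) onto as few
    hosts as possible using first-fit decreasing; returns the per-host
    list of assigned loads."""
    hosts = [[] for _ in range(host_count)]
    used = [0.0] * host_count
    for load in sorted(vm_loads, reverse=True):
        for i in range(host_count):
            if used[i] + load <= host_capacity:
                hosts[i].append(load)
                used[i] += load
                break
        else:
            raise ValueError("insufficient total capacity")
    return hosts

# Six VMs that would occupy six hosts if run one-per-machine
# fit on two of the four available hosts; the other two can be
# switched off to save energy.
assignment = consolidate([0.5, 0.3, 0.4, 0.2, 0.3, 0.2], 1.0, 4)
powered_off = sum(1 for h in assignment if not h)
```

Production placement managers (such as the Entropy system behind Hermenier et al., 2009) solve a much richer constrained-optimization problem, including live-migration costs; this sketch only conveys the packing intuition.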

However, since cloud computing platforms began to gain popularity, their total energy consumption has grown dramatically (Energy Star, 2007; Koomey, 2007). It has therefore become important to provide only the resources that applications actually require, and to autonomously reduce the energy footprint of applications hosted on cloud infrastructures as much as possible. Concretely, this consists in determining the right trade-off between the Quality of Service (QoS) delivered to applications' end-users and the resources the applications consume (Brandic, 2009). Provisioning additional resources may not be profitable for the application provider if the renting fees (and, by transitivity, the energy footprint) are unsatisfactory.

Several works have proposed managing both application QoS and the overall energy consumption of the infrastructure through a single system (Kephart et al., 2007; Nguyen Van, Dang Tran & Menaud, 2010; Wang & Wang, 2010; Petrucci, Loques, & Mossé, 2011). The objective is to maximize applications' QoS while minimizing the costs incurred by the infrastructure (e.g. energy consumption). Although this makes it possible to scale applications up or down by requesting or releasing resources according to their incoming load, applications are treated as black boxes. This restrains the adaptation capability of applications in the sense that they can only add or remove resources based on performance attributes. From our point of view, this is not sufficient, since applications may have specific requirements in terms of reactivity, fault tolerance and other concerns. The QoS of an application is not only related to performance criteria such as response time but also to internal aspects that may differ from one application to another (Comuzzi & Pernici, 2009). For example, the definition of the QoS of a HomeBanking application may differ from that of a Video-on-Demand application. The QoS of the former may be defined in terms of encryption details, whereas the latter may consider aspects like image resolution and encoding characteristics. Since the application internals directly drive the resources requested from the infrastructure provider, it is important to consider the application as a white box.
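The white-box idea above can be sketched as follows: each application exposes its own internal configurations (e.g. video resolutions), each with an application-specific QoS utility and a resource demand, and a control loop picks the configuration that best balances utility against the energy-related cost of the rented resources. The configurations, utilities, and cost model below are purely hypothetical, chosen only to make the trade-off concrete.

```python
# Hypothetical white-box trade-off: the controller knows the
# application's internal quality levels, not just a black-box
# "add/remove resource" knob.

def best_configuration(configs, cost_per_unit):
    """configs: list of (name, qos_utility, resource_units).
    Returns the configuration maximizing utility minus the
    energy-related cost of the resources it requires."""
    return max(configs, key=lambda c: c[1] - cost_per_unit * c[2])

# A Video-on-Demand application might trade image resolution
# against resource (and hence energy) cost.
vod_configs = [
    ("1080p", 10.0, 8.0),  # high quality, resource-hungry
    ("720p",   8.0, 4.0),
    ("480p",   5.0, 2.0),  # degraded mode, cheap to run
]

choice_expensive = best_configuration(vod_configs, cost_per_unit=1.0)
choice_cheap = best_configuration(vod_configs, cost_per_unit=0.2)
```

When energy (and thus resource rental) is expensive, the controller degrades to 720p; when it is cheap, it serves 1080p. A HomeBanking application would plug in entirely different configuration dimensions (e.g. encryption strength), which is precisely why the chapter argues for per-application control loops.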
