Energy Efficient, Resource-Aware, Prediction Based VM Provisioning Approach for Cloud Environment

Akkrabani Bharani Pradeep Kumar (GITAM University, Visakhapatnam, India) and P. Venkata Nageswara Rao (GITAM University, Visakhapatnam, India)
Copyright: © 2020 |Pages: 20
DOI: 10.4018/IJACI.2020070102

Abstract

Over the past few decades, computing environments have progressed from single-user systems to highly parallel supercomputing environments, networks of workstations (NoWs), and distributed systems, and more recently to grids and clouds. Because it provides large computational capacity at low cost, cloud infrastructure can be employed as a very effective tool; however, owing to its dynamic nature and heterogeneity, cloud resources consume enormous amounts of electrical power, and controlling energy consumption has become a major issue in cloud data centers. This article proposes a comprehensive prediction-based virtual machine management approach that aims to reduce energy consumption by reducing the number of active physical servers in cloud data centers. The proposed model focuses on three key aspects of resource management: prediction-based delay provisioning, prediction-based migration, and resource-aware live migration. The comprehensive model minimizes energy consumption without violating the service level agreement and provides the required quality of service. Experiments validating the efficacy of the proposed model were carried out in a simulated environment with varying numbers of servers, user applications, and parameter sizes.
Article Preview

1. Introduction

In recent times, the cloud has emerged as a promising archetype for high-performance and high-throughput computing, owing to the enormous development of powerful computers and high-speed network technologies at low cost. Cloud computing aims to combine heterogeneous, large-scale, multi-institutional resources and to provide transparent, secure, and coordinated access to various computing resources (supercomputers, clusters, scientific instruments, databases, storage, etc.) owned by multiple institutions by forming virtual organizations (Lawey et al., 2014). Clouds also aim to provide scalability and reliability, but in essence they aim to deliver more economical solutions to consumers as well as providers. From an economic point of view, consumers pay only for the resources they need, whereas cloud providers treat profit maximization as a high priority by capitalizing on poorly utilized resources. Profit is directly proportional to the utilization of cloud resources and to the minimization of resource expenditure, so efficient energy consumption techniques in cloud data centers can play a crucial role. Moreover, energy consumption can be reduced considerably by introducing an efficient provisioning approach that increases resource utilization.

The scope of green cloud computing is not limited to the main computing components such as processors, storage devices, and virtualization facilities; it also extends to a much larger range of resources associated with computing facilities, including auxiliary equipment, water used for cooling, and even the physical/floor space that these resources occupy (Green, 2010). A study of the energy consumption of server farms (Koomey, 2007) shows that electricity use for servers worldwide, including their associated cooling and auxiliary equipment, cost US$7.2 bn in 2005. The study also indicates that power consumption in that year had doubled compared with consumption in 2000. Earlier studies estimated the worldwide energy consumption of cloud data centers at around 201.8 TWh in 2010, approximately 1.1% to 1.3% of the entire world's energy consumption (Gao et al., 2012). Given the rapid growth in the establishment of cloud data centers (Green, 2010), data center energy consumption was expected to reach up to 8% of the world's total by 2020 (Koomey, 2011).

As per current studies, only 11% to 50% of total resources are utilized most of the time (Dasgupta et al., 2011), yet these utilized servers consume around 75% to 90% of their peak power, and idle servers still consume up to 50% (Greenberg et al., 2008). Virtual machine placement therefore plays an important role in energy consumption. In this regard, there have been efforts to reduce energy consumption in cloud data centers by migrating the VMs of underutilized servers onto moderately utilized servers and then shutting down the vacated servers or putting them into sleep mode (Meisner et al., 2009; Microsoft Inc., 2008).
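The consolidation idea described above can be sketched as a simple greedy procedure. This is an illustrative sketch only, not the algorithm proposed in the article: the `under` and `upper` thresholds and the host representation are assumptions made for the example.

```python
def consolidate(hosts, under=0.2, upper=0.8):
    """hosts: dict host_id -> list of VM loads (fractions of PM capacity).
    Tries to migrate all VMs off hosts loaded below `under` onto other
    active hosts without pushing any destination above `upper`.
    Returns the set of hosts that can be switched to sleep mode.
    Thresholds are illustrative, not taken from the article."""
    asleep = set()
    for src in sorted(hosts, key=lambda h: sum(hosts[h])):
        vms = hosts[src]
        if not vms or sum(vms) >= under:
            continue
        # tentative loads, so planned migrations account for each other
        load = {h: sum(hosts[h]) for h in hosts}
        plan = []
        for vm in sorted(vms, reverse=True):
            dst = next((h for h in hosts
                        if h != src and h not in asleep
                        and load[h] + vm <= upper), None)
            if dst is None:
                break
            load[dst] += vm
            plan.append((vm, dst))
        if len(plan) == len(vms):   # only act if the host empties fully
            for vm, dst in plan:
                hosts[dst].append(vm)
            hosts[src] = []
            asleep.add(src)         # candidate for shutdown/sleep mode
    return asleep
```

For example, with hosts `{'A': [0.05], 'B': [0.5], 'C': [0.1]}`, the VMs on the underutilized hosts A and C are moved onto B, and both A and C become candidates for sleep mode.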

Virtual machine consolidation has direct implications for energy consumption in cloud data centers. The virtual machine placement (VMP) problem is an optimization problem that places virtual machines on physical machines in an effective way (Bianchini & Rajamony, 2004; Vogels, 2008). A number of VMP approaches with different objectives have been proposed in the literature (Greenberg et al., 2008). All of these approaches aim to maximize profit and minimize operational cost (Xiao et al., 2012). In addition to VMP, load-balancing approaches improve the efficacy of the overall system (Sahu et al., 2013; Amokrane et al., 2013).
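Viewed as an optimization problem, VMP is closely related to bin packing: VM demands must be packed into as few physical machines (bins of fixed capacity) as possible. A minimal sketch using the standard first-fit-decreasing heuristic, not the specific VMP approach of this article, looks like this; the capacity normalization to 1.0 is an assumption for the example.

```python
def ffd_place(vm_demands, capacity=1.0):
    """First-fit-decreasing placement: sort VM demands (fractions of
    one PM's capacity) in decreasing order, then put each VM on the
    first machine with room, powering on a new machine only when
    no existing one fits. Returns a list of machines, each a list
    of the VM demands placed on it."""
    machines = []
    for d in sorted(vm_demands, reverse=True):
        for m in machines:
            if sum(m) + d <= capacity:
                m.append(d)
                break
        else:
            machines.append([d])   # power on a new physical machine
    return machines
```

For example, `ffd_place([0.25, 0.5, 0.25, 0.5, 0.25, 0.25])` packs the six VMs onto two fully utilized machines, the minimum possible for a total demand of 2.0.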
