Cloud computing provides on-demand access to a shared pool of configurable computing resources. A major issue lies in managing extremely large, agile data centers, which are generally over-provisioned to handle unexpected workload surges. This paper focuses on green computing by introducing a Power-Aware Meta Scheduler, which provides right-fit infrastructure for launching virtual machines onto hosts. The major challenge for the scheduler is to make wise decisions in transitioning the states of processor cores by exploiting the various power-saving states inherent in recent microprocessor technology. This is done by dynamically predicting the utilization of the cloud data center. The authors have extended the existing CloudSim toolkit to model power-aware resource provisioning, including generation of dynamic workload patterns, workload prediction and adaptive provisioning, dynamic lifecycle management of random workloads, and implementation of power-aware allocation policies and a chip-aware VM scheduler. The experimental results show that appropriate use of the different power-saving states yields significant energy conservation in handling the stochastic nature of workloads without compromising performance, at both low and moderate data center utilization.
Introduction
With the advent of cloud computing, large-scale data centers are becoming common in the computing industry. However, these data centers, equipped with high-performance infrastructure, consume enormous amounts of power and contribute to global warming through their CO2 footprint, posing a serious environmental threat to today's world. One of the major causes of energy inefficiency in a data center is the idle power wasted when servers run at low average utilization. Even at 10% CPU utilization, the power consumed is over 50% of the peak power (Neugebaur & McAuley, 2001; Ragavendra et al., 2008), which means more power is consumed per workload during off-peak load. Pinheiro and Rajamony noted that 22% of the energy consumed by a single server is needed to cool it. A study on data center issues also shows that the energy consumption of data centers worldwide doubled between 2000 and 2006 (Pinheiro et al., 2001; Elnozahy, 2003), and the incremental US demand for data center energy between 2008 and 2010 is equivalent to the output of 10 nuclear power plants (Kaplan et al., 2008). To address this issue, data centers consolidate various workloads onto a common set of servers with the help of the live migration facility enabled by virtualization technology (James & Ravi, 2005).
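The cost of idle power can be made concrete with the linear server power model widely assumed in data-center energy studies (and in CloudSim's power package). The sketch below is illustrative only; the 250 W peak and 125 W idle figures are assumed values chosen so that the server draws just over half of peak power at 10% utilization, not measurements from this work.

```java
/**
 * Minimal sketch of the linear server power model.
 * The peak/idle wattages below are illustrative assumptions.
 */
public class LinearPowerModel {
    private final double peakWatts;   // power at 100% CPU utilization
    private final double idleWatts;   // power at 0% CPU utilization

    public LinearPowerModel(double peakWatts, double idleWatts) {
        this.peakWatts = peakWatts;
        this.idleWatts = idleWatts;
    }

    /** Power drawn at a given CPU utilization in [0, 1]. */
    public double power(double utilization) {
        return idleWatts + (peakWatts - idleWatts) * utilization;
    }

    /** Power spent per unit of useful work; grows sharply as utilization falls. */
    public double powerPerUnitWork(double utilization) {
        return power(utilization) / utilization;
    }

    public static void main(String[] args) {
        LinearPowerModel server = new LinearPowerModel(250.0, 125.0);
        // At 10% utilization the server already draws 137.5 W (55% of peak),
        // so each unit of work costs several times more than at full load.
        System.out.printf("10%% load: %.0f W (%.0f W per unit work)%n",
                server.power(0.1), server.powerPerUnitWork(0.1));
        System.out.printf("100%% load: %.0f W (%.0f W per unit work)%n",
                server.power(1.0), server.powerPerUnitWork(1.0));
    }
}
```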
In a cloud environment, power and energy management strategies need to consider the characteristics of both the servers and the incoming workloads. Modern microprocessor technology allows processing elements (PEs) such as cores, chips, and hosts to be placed in different sleep states depending on demand (Lee et al., 2007). These sleep states are also referred to as power-saving states. A PE conserves a different amount of power in each sleep state, and the power drawn during wake-up is comparatively insignificant. Shallow sleep states offer lower power conservation with lower wake-up latency, while deep sleep states offer higher power conservation with higher wake-up latency. IBM's Power family machines support nap and sleep modes (Sinharoy et al., 2005; Kim et al., 2011; Cardosa et al., 2009). Nap is a low-power state designed for short processor idle periods (Malcolm et al., 2010); it provides a modest power reduction over a software idle loop, its wake-up latency is less than 5 μs, and instruction execution begins immediately upon wake-up. The second idle mode, sleep, is a lower-power, higher-latency standby state intended for processing cores that will be unused for an extended period of time.
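As an illustration of how such states might be exploited, the sketch below chooses between nap and sleep based on the expected idle period of a core. The nap wake-up latency of under 5 μs comes from the text above; the sleep-state latency, the power fractions, and the "100× the wake-up latency" rule of thumb are assumed values for illustration only.

```java
/**
 * Illustrative selector for a core's power-saving state based on its
 * expected idle period. Figures marked as assumed are not from the paper.
 */
public class SleepStateSelector {

    enum PowerState {
        ACTIVE(1.00, 0.0),
        NAP(0.70, 0.000005),   // shallow: modest saving, < 5 us wake-up (from text)
        SLEEP(0.20, 0.002);    // deep: large saving, higher wake-up latency (assumed 2 ms)

        final double powerFractionOfActive;
        final double wakeUpLatencySeconds;

        PowerState(double powerFractionOfActive, double wakeUpLatencySeconds) {
            this.powerFractionOfActive = powerFractionOfActive;
            this.wakeUpLatencySeconds = wakeUpLatencySeconds;
        }
    }

    /**
     * A deeper state pays off only when the expected idle period is much
     * longer than its wake-up latency; otherwise stay in a shallower state.
     */
    static PowerState chooseState(double expectedIdleSeconds) {
        if (expectedIdleSeconds > 100 * PowerState.SLEEP.wakeUpLatencySeconds) {
            return PowerState.SLEEP;
        } else if (expectedIdleSeconds > 100 * PowerState.NAP.wakeUpLatencySeconds) {
            return PowerState.NAP;
        }
        return PowerState.ACTIVE;
    }
}
```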
To conserve power significantly, resources must be provisioned in such a way that the required computing resources are well utilized and the idle resources are kept in appropriate power-saving states. The proposed work considers an IaaS cloud, in which the term workload refers to a custom configuration of a Virtual Machine (VM) to be launched; hence the terms workload and VM request are used interchangeably. Studies of workload characteristics in a typical data center reveal wide variations both in the number of workloads arriving at the data center and in the amount of resources required by each workload at a particular instant, leading to the conclusion that incoming VM requests are highly dynamic in nature. Hence, by dynamically predicting the arrival pattern of workloads from the recent utilization history of the data center, the number of processing cores required for incoming VM requests can be provisioned for immediate allocation. The remaining resources that are not provisioned can be transitioned to suitable power-saving states, so that the idle power of processor cores is not wasted.
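A minimal sketch of this idea, assuming a simple moving-average predictor over recent core demand (the actual prediction scheme used in this work may differ), is shown below; the window size and head-room factor are hypothetical parameters introduced only for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Hypothetical sketch of prediction-driven core provisioning: a moving
 * average over recent arrival history estimates how many cores to keep
 * awake for the next interval; the rest may be moved to power-saving states.
 */
public class CoreProvisioner {
    private final Deque<Integer> recentCoreDemand = new ArrayDeque<>();
    private final int windowSize;
    private final double headRoom;   // spare fraction kept to absorb sudden surges

    public CoreProvisioner(int windowSize, double headRoom) {
        this.windowSize = windowSize;
        this.headRoom = headRoom;
    }

    /** Record the number of cores requested by VMs in the latest interval. */
    public void observe(int coresRequested) {
        recentCoreDemand.addLast(coresRequested);
        if (recentCoreDemand.size() > windowSize) {
            recentCoreDemand.removeFirst();
        }
    }

    /** Cores to keep active for the next interval; the remainder may sleep. */
    public int coresToKeepAwake(int totalCores) {
        double avg = recentCoreDemand.stream()
                .mapToInt(Integer::intValue).average().orElse(0);
        int predicted = (int) Math.ceil(avg * (1 + headRoom));
        return Math.min(totalCores, predicted);
    }
}
```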
Hence, our objective is to build a Power-Aware Meta Scheduler (PAMS) that finds right-fit infrastructure for launching VMs and conserves power by exploiting the processors' internal power-saving states without compromising performance. The major challenge lies in efficiently handling the stochastic nature of incoming workloads through adaptive resource provisioning while realizing energy conservation. PAMS saves power at both the core level and the chip level by using various static and dynamic consolidation policies during VM placement and VM migration. As a cloud environment is very large, even a 1% energy saving in each node contributes a significant amount of power conservation overall. Hence, the power-aware provisioning policy is of strong relevance to green computing.
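To indicate the flavor of such a chip-aware, right-fit decision, the following sketch prefers the chip that needs the fewest additional cores woken to host a VM, so that idle chips can remain in deep sleep. The class and method names are illustrative only and are not the actual API of PAMS or the extended toolkit.

```java
import java.util.List;

/**
 * Hypothetical sketch of a chip-aware, right-fit placement decision:
 * favor a chip whose already-awake cores can absorb the VM, so that
 * fully idle chips stay in deep power-saving states.
 */
public class ChipAwarePlacement {

    static class Chip {
        final int totalCores;
        int awakeCores;     // cores currently out of any sleep state
        int usedCores;      // cores already assigned to VMs

        Chip(int totalCores, int awakeCores, int usedCores) {
            this.totalCores = totalCores;
            this.awakeCores = awakeCores;
            this.usedCores = usedCores;
        }

        int freeAwakeCores() { return awakeCores - usedCores; }
        int freeCores()      { return totalCores - usedCores; }
    }

    /** Pick the chip that needs the fewest extra cores woken to host the VM. */
    static Chip selectChip(List<Chip> chips, int vmCores) {
        Chip best = null;
        int bestWakeUps = Integer.MAX_VALUE;
        for (Chip chip : chips) {
            if (chip.freeCores() < vmCores) continue;            // VM does not fit
            int wakeUps = Math.max(0, vmCores - chip.freeAwakeCores());
            if (wakeUps < bestWakeUps) {
                bestWakeUps = wakeUps;
                best = chip;
            }
        }
        return best;   // null means no chip fits; caller may wake a sleeping host
    }
}
```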