An Adaptive Overload Detection Policy Based on the Estimator Sn in Cloud Environment

Minu Bala (Department of CS & IT, University of Jammu, Jammu, India) and Devanand Padha (Department of CS & IT, Central University of Jammu, Jammu, India)
DOI: 10.4018/IJSSMET.2017070106


Efficient use of cloud resources while providing QoS to clients is quite challenging for cloud service providers. On the one hand, deploying excessive active resources increases operational cost; on the other hand, a shortage of resources may degrade QoS and lead to SLA violations. To optimize the resource utilization of a datacenter while keeping SLAs intact, the issues of over-loaded and under-loaded servers in a cloud datacenter are very important to address, and virtual machine migration is quite effective in handling them. The present work focuses on an adaptive threshold-based overload detection policy that uses the robust estimator Sn to statistically analyze the historical CPU usage of hosts periodically and adjust the upper CPU utilization threshold accordingly. The results obtained from the proposed policy are compared with the Median Absolute Deviation (MAD) policy for overload detection, and the energy-performance efficiency of the proposed policy is found to be better than that of the MAD policy.
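To make the idea concrete, the following is a minimal sketch of an Sn-based adaptive threshold. It assumes a MAD-style threshold rule of the form T_u = 1 − s·Sn (modeled after the MAD policy of Beloglazov and Buyya); the safety parameter `safety` and the naive O(n²) Sn computation are illustrative assumptions, not the paper's exact implementation.

```python
import statistics

def sn_estimator(xs):
    """Rousseeuw-Croux Sn scale estimator (naive O(n^2) form):
    Sn = c * med_i { med_j |x_i - x_j| }, with c ~= 1.1926 for
    consistency at the normal distribution."""
    c = 1.1926
    med_abs = [statistics.median(abs(xi - xj) for xj in xs) for xi in xs]
    return c * statistics.median(med_abs)

def upper_threshold(cpu_history, safety=2.5):
    """Adaptive upper CPU-utilization threshold, by analogy with the
    MAD policy: T_u = 1 - safety * Sn(history). 'safety' is an assumed
    tunable parameter, not a value from the paper."""
    return 1.0 - safety * sn_estimator(cpu_history)

def is_overloaded(cpu_history, current_util, safety=2.5):
    """Flag a host whose current utilization exceeds the adaptive threshold."""
    return current_util > upper_threshold(cpu_history, safety)
```

The intuition: the more variable a host's recent CPU usage (larger Sn), the lower the overload threshold, so volatile hosts are flagged earlier.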
Article Preview

1. Introduction

Cloud computing has emerged as a new computing model whose distinctive feature is the dynamic provisioning of resources (infrastructure, platform, software, etc.) as services in a pay-as-you-go manner. Many large businesses across industries are moving to clouds because of these remarkable advantages. A cloud can be viewed as a large platform for information processing, and many cloud service providers also offer services such as accounting and SecaaS (Security as a Service) on the cloud (Mizuno & Odake, 2015; Mizuno & Odake, 2016; Tiwari & Joshi, 2015). Big Data solutions are likewise offered by various cloud service providers, with Hadoop providing a robust analytics platform for Big Data (Wahi et al., 2015). In the past few years, cloud datacenters have grown enormously due to the increasing demand for high-performance computing, which has in turn increased global warming and greenhouse gas emissions and thus has an adverse impact on the environment.

Resources in cloud datacenters are typically over-provisioned in order to meet highly dynamic workload patterns. Keeping all of these provisioned computing resources running, even in slack hours, wastes resources and increases operational cost. An idle server consumes about 70% of its peak power (Mastroianni et al., 2013), so it is highly inefficient to keep under-utilized servers in the cloud datacenter. Moreover, over-utilized or over-heated servers running at full capacity may suffer performance degradation. Virtualization is the key feature of cloud computing technology that helps optimize the utilization of cloud datacenter resources. A cloud datacenter consists of a number of physical machines, called hosts, each equipped with virtualization mechanisms; hosts and their VMs are continuously monitored on the basis of various parameters, the most important being CPU utilization. Over-utilized and under-utilized hosts in the datacenter can be termed unhealthy hosts. Once unhealthy hosts are detected, VM migration can be applied to deal with them (Beloglazov & Buyya, 2013). In the VM migration process, a VM is moved from one physical machine to a desired physical machine; in cloud datacenters this is done for several reasons: to increase resource performance, to balance load across over-utilized and under-utilized servers, to optimize the energy/power usage of the datacenter, and to manage resources efficiently and smartly. The fault-tolerance capability of a cloud is also an important consideration for achieving dependability and robustness, and it too can be achieved through the migration process (Patra et al., 2016).
The VM migration process can be divided into three broad phases: detection of overloaded servers, selection of VMs to be migrated, and placement of each VM on a suitable server. Migration thus helps with server consolidation in the cloud datacenter, but it can also cause performance degradation due to migration overhead. Moreover, if clients' resource requirements are not satisfied, services or applications running on the cloud may face problems such as time-outs, failures, and increased response time. There is a trade-off between the energy usage of the datacenter and the QoS provided by the cloud service provider. In view of this, the focus of researchers has shifted from pure performance optimization to energy-performance optimization of cloud datacenters. New, innovative dynamic workload-handling policies are required to balance load so as to decrease power usage while keeping SLAs and customers' interests intact.
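The three phases above can be sketched as a single consolidation pass. The `Host`/`VM` model, the static 0.8 threshold, the smallest-VM-first selection, and the first-fit placement are all illustrative stand-ins for the paper's concrete policies, shown only to make the detect-select-place structure explicit.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    util: float  # CPU demand as a fraction of host capacity

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)
    def utilization(self):
        return sum(vm.util for vm in self.vms)

def detect_overload(host, threshold=0.8):
    # Phase 1: static-threshold stand-in for the adaptive Sn-based detector.
    return host.utilization() > threshold

def select_vm(host):
    # Phase 2: smallest-VM-first, a simple stand-in selection policy.
    return min(host.vms, key=lambda v: v.util, default=None)

def place_vm(vm, candidates, threshold=0.8):
    # Phase 3: first-fit placement on a host that stays below threshold.
    for h in candidates:
        if h.utilization() + vm.util <= threshold:
            return h
    return None

def consolidation_step(hosts):
    """One detect -> select -> place pass; returns the migrations made."""
    migrations = []
    for host in hosts:
        while detect_overload(host):
            vm = select_vm(host)
            if vm is None:
                break
            target = place_vm(vm, [h for h in hosts if h is not host])
            if target is None:
                break  # no feasible target: stop migrating from this host
            host.vms.remove(vm)
            target.vms.append(vm)
            migrations.append((vm.name, host.name, target.name))
    return migrations
```

For example, a host at 0.9 utilization with VMs of 0.5 and 0.4 sheds its 0.4 VM to a host at 0.2, leaving both hosts below the threshold after one pass.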
