Performance Analysis of Cloud Systems with Load Dependent Virtual Machine Activation and Sleep Modes

Sudhansu Shekhar Patra, Veena Goswami
Copyright: © 2018 | Pages: 20
DOI: 10.4018/IJAIE.2018070101

Abstract

Advances in virtualization technology have made cloud computing an emerging and increasingly appealing area of internet technology. The rapidly growing demand for computational power from scientific, business, and web applications has led to the creation of large-scale data centers, which consume enormous amounts of electrical power. In this article, the authors study energy-saving methods based on consolidation and on switching off virtual machines that are not in use. Under this policy, c virtual machines continue serving customers until the number of idle servers reaches the threshold d; those d idle servers then take a synchronous vacation simultaneously, and otherwise the servers keep serving customers. Numerical results demonstrate the applicability of the proposed model for data center management, in particular to quantify the theoretical tradeoff between the conflicting aims of energy efficiency and QoS.
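
To make the sleep-mode policy described above concrete, the following sketch shows one way the activation decision could be expressed; it is an illustrative reading of the policy, not the authors' implementation, and the function and variable names (vacation_decision, busy_servers) are hypothetical.

# Illustrative sketch of the threshold-based synchronous vacation policy:
# c virtual machines serve customers; once the number of idle servers
# reaches the threshold d, those d idle servers sleep together.
def vacation_decision(busy_servers, c, d):
    """Return how many servers should enter sleep (vacation) mode."""
    idle_servers = c - busy_servers
    if idle_servers >= d:
        # d idle servers take a synchronous vacation simultaneously
        return d
    # otherwise every server stays active and serves arriving customers
    return 0

print(vacation_decision(busy_servers=5, c=8, d=3))  # -> 3: three idle VMs sleep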
Article Preview

1. Introduction

The advancement of virtualization technology has given rise to cloud computing (Durao et al., 2014; Buyya et al., 2009; Rimal et al., 2009), an emerging computing paradigm that offers a virtually unlimited number of testing and staging servers which can be dynamically provisioned on a pay-per-use basis (Tsai et al., 2013). This paradigm promises on-demand, elastic, and flexible IT services, leaving traditional programming models behind in favor of new ones. Using cloud computing and virtualization technology, agile development teams seamlessly combine development, production, and testing environments with other cloud services. This emerging service paradigm relieves users of the burden of creating and managing complex infrastructure. Rather than following the traditional own-and-use pattern, consumers are now turning to this new model, in which computing is delivered as a utility offering a pool of computing resources on a pay-as-you-go basis (Sivathanu et al., 2010). Virtualization primarily provides infrastructure services rather than platform and application services, whereas cloud computing providers supply resources according to several fundamental models, including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) (Armbrust et al., 2009; Begnum, 2012). For example, Amazon Elastic Compute Cloud (EC2), Google App Engine, Amazon S3, and Salesforce are existing offerings that provide computing infrastructure, programming platforms, data storage, and software applications as services, respectively. Cloud computing has gained enormous popularity in both the business and scientific communities because of its cost effectiveness, reliability, and scalability (Buyya et al., 2009; Armbrust et al., 2009). Users need no further investment to procure new infrastructure; they simply pay for the services they require, without worrying about the complexity of the underlying IT infrastructure. The potential of cloud computing can only be realized if service providers are flexible in their service delivery, meeting varied customer requirements while keeping consumers isolated from the underlying infrastructure.

Recently, service providers have concentrated on high-performance computing in data centre deployments, fulfilling these demands without paying much attention to energy consumption. However, an average data centre consumes as much energy as 25,000 households (Kaplan et al., 2008; Sofia et al., 2015). As energy costs rise while availability dwindles, there is an urgent need to optimize the energy efficiency of data centres without violating performance requirements. High energy costs may dramatically reduce the profit margins of cloud service providers: rising energy costs increase the Total Cost of Ownership (TCO) and reduce the Return on Investment (ROI) of cloud infrastructures. Service providers therefore need to adopt measures to protect their profit margins. Most of the power consumed by data centres goes to the physical machines during computation and to the air conditioning that cools the servers, which uses almost as much energy as the servers themselves.
Researchers and industry use the energy consumption index to measure the energy efficiency of cloud data centres. This index gives the percentage of energy consumed by the data centre's computing devices out of all the energy consumed by the data centre, where the latter includes the energy consumption of the computing devices as well as the energy used for heating, temperature control, ventilation, lighting, and much more. Most of the power consumption in data centres is caused by computational processing, disk storage, networking, and cooling systems, and is given as

$$E_{total} = E_{server} + E_{cooling} + E_{other}$$

where $E_{total}$, $E_{server}$, $E_{cooling}$, and $E_{other}$ refer to the data centre's total energy consumption, the total energy consumption of the physical servers, the energy consumption of the cooling system, and the other energy consumption, respectively. Worldwide, there is rising pressure from government agencies to reduce carbon footprints, which have a substantial effect on climate change. The Japan Data Centre Council, established by the Japanese government, addresses the growing energy consumption of data centres (Ministry, 2010). Traditional data centres are competing to find ways to increase resource efficiency and reduce energy consumption. Recently, various service providers have formed The Green Grid (Green Grid, 2011), a global consortium that promotes energy efficiency for data centres and minimizes their environmental impact. Providers are thus required to minimize the energy consumption of cloud infrastructures while ensuring service delivery.
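
As a simple illustration of the relationship above, the sketch below computes the total consumption and the energy consumption index from per-component figures; the numbers and variable names (e_server, e_cooling, e_other) are hypothetical and not taken from the article.

# Illustrative only: hypothetical monthly energy figures (in MWh).
e_server = 620.0    # physical servers: computation, disk storage, networking
e_cooling = 540.0   # cooling / air-conditioning system
e_other = 140.0     # lighting, ventilation, power distribution, etc.

# Total data-centre consumption, as in the equation above.
e_total = e_server + e_cooling + e_other

# Energy consumption index: share of total energy used by computing devices.
index = e_server / e_total
print(f"E_total = {e_total} MWh, energy consumption index = {index:.2f}")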
