Minimization of Energy Using Heuristic Resource Allocation and Migration for Cloud Computing


Manjunatha S. (Cambridge Institute of Technology, India) and Suresh L. (Cambridge Institute of Technology, India)
Copyright: © 2021 | Pages: 10
DOI: 10.4018/IJKSS.2021010106

Abstract

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world, each with a huge number of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs; optimizing the energy consumption of servers and networks in data centers can therefore reduce those costs. In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. An effective energy-saving strategy is to consolidate computation and communication onto a smaller number of servers and network devices, and then power off as many unneeded servers and network devices as possible.

1. Introduction

In this paper, we propose several novel methods to reduce the energy consumption of computer systems and networks in data centers, while satisfying Quality of Service (QoS) requirements specified by cloud tenants. First, we consider energy-efficient scheduling of periodic real-time tasks on multi-core processors with voltage islands, in which cores are partitioned into multiple blocks. Second, we consider the resource allocation problem for virtual networks in data centers. A cloud tenant expresses a computation requirement for each virtual machine (VM) and a bandwidth requirement for each pair of VMs. The cloud provider places the VMs and routes the traffic among them in a way that minimizes the total number of servers and switches used, while providing both computation and bandwidth guarantees.
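The placement problem above can be illustrated with a small greedy sketch. Everything here (the function name `place_vms`, the data shapes, the co-location rule) is an illustrative assumption, not the paper's algorithm: the idea is simply that co-locating heavily communicating VM pairs on one server, when CPU capacity permits, reduces both the number of servers and the inter-server traffic that switches must carry.

```python
# Illustrative greedy VM placement (hypothetical sketch, not the paper's method).
# cpu_demand: {vm: cpu units}; bandwidth: {(vm_a, vm_b): demand}; server_cpu:
# CPU capacity of one server. Returns {vm: server_index}.
def place_vms(cpu_demand, bandwidth, server_cpu):
    placement = {}
    load = []  # current CPU load of each opened server

    def first_fit(vm):
        # Place vm on the first server with room, opening a new one if needed.
        for s, used in enumerate(load):
            if used + cpu_demand[vm] <= server_cpu:
                placement[vm] = s
                load[s] += cpu_demand[vm]
                return
        load.append(cpu_demand[vm])
        placement[vm] = len(load) - 1

    # Consider the chattiest VM pairs first, so they get co-located.
    for (a, b), _bw in sorted(bandwidth.items(), key=lambda kv: -kv[1]):
        for vm in (a, b):
            if vm in placement:
                continue
            peer = b if vm == a else a
            target = placement.get(peer)
            # Prefer the peer's server if it has CPU headroom (saves hops).
            if target is not None and load[target] + cpu_demand[vm] <= server_cpu:
                placement[vm] = target
                load[target] += cpu_demand[vm]
            else:
                first_fit(vm)

    # VMs with no bandwidth demands are placed by plain first-fit.
    for vm in cpu_demand:
        if vm not in placement:
            first_fit(vm)
    return placement
```

With CPU demands {a: 2, b: 2, c: 3}, a bandwidth demand only between a and b, and server capacity 4, the sketch co-locates a and b on one server and opens a second for c, using two servers in total.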

Data centers contain a large number of servers, interconnected through switches and high-speed links. Today, large organizations such as Amazon, Google, Facebook, and Yahoo! routinely use data centers for storage, web search, and large-scale computations (B. A. Milani 2016). With the rise of cloud computing, service hosting in data centers has become a multi-billion dollar business that plays a crucial role in the future of the Information Technology industry. However, a large-scale computing infrastructure consumes enormous amounts of electrical power, leading to operational costs so high that they will exceed the cost of the infrastructure itself within a few years. In 2013, U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity, equal to the annual output of 34 large (500-megawatt) coal-fired power plants. The annual electricity consumption of data centers is projected to increase to approximately 140 billion kilowatt-hours by 2020, the equivalent annual output of 50 power plants, costing American businesses $13 billion every year in electricity bills and emitting nearly 100 million metric tons of carbon pollution per year (S. Ghemawat 2003). In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. There are two main approaches for reducing the energy consumption of data centers: (a) shutting down devices, or (b) scaling down performance. The former, commonly referred to as Dynamic Power Management (DPM), yields the greatest savings, since the routine workload typically stays below 30% of capacity in cloud computing systems. The latter corresponds to Dynamic Voltage and Frequency Scaling (DVFS), which adjusts the performance and power consumption of the hardware to match the characteristics of the workload.
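A back-of-the-envelope calculation shows why DPM tends to beat DVFS at low utilization. The linear power model and the wattage figures below are illustrative assumptions (a common first-order approximation, not numbers from the paper): an active server draws a fixed idle power plus a utilization-proportional dynamic component, so keeping lightly loaded servers on wastes idle power that DVFS cannot recover.

```python
# Hypothetical linear server power model: P(u) = P_IDLE + (P_PEAK - P_IDLE) * u.
# The 100 W / 200 W figures are illustrative assumptions, not measured values.
P_IDLE, P_PEAK = 100.0, 200.0

def server_power(utilization):
    return P_IDLE + (P_PEAK - P_IDLE) * utilization

# Baseline: 10 servers, each at the typical 30% utilization.
baseline = 10 * server_power(0.30)   # 1300 W

# DPM: consolidate the same work onto 3 fully loaded servers, power off 7.
dpm = 3 * server_power(1.0)          # 600 W

# DVFS alone: all 10 servers stay on; assume frequency scaling halves the
# dynamic power, but the idle power of every server remains.
dvfs = 10 * (P_IDLE + (P_PEAK - P_IDLE) * 0.30 * 0.5)  # 1150 W
```

Under these assumed numbers, powering off idle machines (DPM) cuts consumption by more than half, while DVFS alone saves comparatively little because idle power dominates at 30% load.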

Virtualization is a key technology for the efficient operation of cloud data centers. Data center resources are often underutilized, since the average load is about 30% of capacity (K. Shvachko 2010). Energy consumption in virtualized data centers can be reduced by carefully choosing the physical server on which each virtual machine (VM) is placed. Virtual machine consolidation techniques attempt to use the smallest possible number of physical machines to host a given number of virtual machines. According to an Open Compute Project report (R.S. Chang 2008), 93% of the energy consumption in a data center depends upon the efficient utilization of its computing resources.
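Consolidation of this kind is essentially bin packing. As a minimal sketch, the standard First-Fit Decreasing (FFD) heuristic packs VM loads onto the fewest hosts; this is a textbook heuristic used here for illustration, and the paper's own heuristic may differ.

```python
# First-Fit Decreasing (FFD) bin-packing sketch of VM consolidation.
def ffd_consolidate(vm_loads, host_capacity):
    """Pack VM CPU loads (in integer CPU units) onto as few hosts as the
    heuristic manages. Returns a list of hosts, each a list of VM loads."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):  # largest VMs first
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)  # first host with enough spare capacity
                break
        else:
            hosts.append([load])   # no host fits: power on a new machine
    return hosts
```

For example, loads [5, 3, 3, 2, 7] on hosts of capacity 10 pack into two fully utilized hosts, so three of the five machines a naive one-VM-per-host assignment would use can be powered off.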

Since average server utilization in data centers is only 20%-30%, one way to improve resource utilization and reduce energy consumption, while still meeting tenants' QoS requirements, is to dynamically consolidate virtual machines (VMs) onto a smaller number of physical machines using virtualization technology. Virtualization partitions the available resources and shares them among different tenants. Server virtualization allows cloud providers to create multiple VM instances on a single physical server, thereby improving server utilization; it also allows VMs to migrate between servers to consolidate workloads and reduce the number of active servers in a data center. Network virtualization aims to create multiple virtual networks in order to improve network utilization. With such virtualization, resources can be scheduled at fine granularity, and energy consumption can be reduced by powering off idle servers and switches, thereby eliminating their leakage power consumption.
