Introduction
Cloud computing offers services and resources that respond to user requests and facilitate processing. These infrastructures are designed to support the accessibility and availability of various consumer services via the Internet. Recently, the number of companies and institutions migrating their services to cloud providers has increased rapidly (Baciu, Wang, & Li, 2017). To host applications and process data in the cloud, data centers consume a great deal of energy, which contributes to large emissions of CO2.
With the rapidly growing number of data centers in cloud computing, energy efficiency and optimization approaches have become increasingly important. In 2008, an estimated 40% of the total energy demand of computing resources (9.936 × 10¹⁶ Joules) went to powering servers of all kinds, 38% of total electricity consumption (9.439 × 10¹⁶ Joules) went to cooling, and the remaining 12% went to energy distribution (Masanet, Brown, Shehabi, Koomey, & Nordman, 2011). More resources mean more energy consumption and thus higher electricity bills: Google alone consumed 2.68 million megawatt hours of electricity in 2011 (Patra, 2018).
In 2015, data centers, with their many infrastructures such as servers, storage systems, routers, and air-conditioning systems, accounted for 4% of global energy consumption. Air-conditioning and cooling systems alone represent 40% to 50% of a data center's energy consumption; this cooling infrastructure is needed to dissipate the heat released by the servers.
Among the causes of this high energy consumption are the large number of data center infrastructures for data processing and storage, and the air conditioners needed to cool the servers and manage the heat they release in order to avoid temperature-induced failures (Zakarya & Gillam, 2017). Reducing this high energy consumption is essential to lower CO2 emissions, which pollute the environment and negatively affect human health. Therefore, solutions are needed to minimize energy consumption.
Several energy optimization solutions are used in cloud computing, such as virtualization, VM migration, and job consolidation. Another way to increase energy efficiency is to schedule tasks across the various servers of the data center.
Scheduling algorithms generally aim to distribute the workload over the available machines and to optimize their utilization by minimizing total execution time and reducing power consumption (Zomaya & Teh, 2001). In these algorithms, the power consumed by computing, storage, and other physical resources affects data center performance when tasks are allocated to servers. To reduce the cost of heavy processor utilization and of cooling the computer systems, researchers must propose solutions that reduce not only the utilization of physical resources such as CPU, RAM, and server bandwidth, but also the temperature generated by intensive processor use.
In a server, the component that consumes the most power is the processor (CPU), followed by the memory (RAM) and the efficiency loss of the power supply unit (PSU) (Beloglazov, Buyya, Lee, & Zomaya, 2011a). The energy consumption of the CPU and RAM increases as the workload grows; an optimal utilization and good scheduling of tasks are therefore necessary.
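The relationship between CPU utilization and server power draw can be illustrated with the widely used linear power model (in the spirit of Beloglazov et al., 2011a). The sketch below is illustrative only: the idle and peak wattages are assumed example values, not measurements from this work.

```python
def server_power(cpu_util, p_idle=100.0, p_max=250.0):
    """Estimate server power draw (watts) for a CPU utilization in [0, 1].

    Linear model: power grows from p_idle (server on but idle) to p_max
    (fully loaded) proportionally to CPU utilization. The default
    wattages are illustrative assumptions.
    """
    if not 0.0 <= cpu_util <= 1.0:
        raise ValueError("cpu_util must be in [0, 1]")
    return p_idle + (p_max - p_idle) * cpu_util
```

Note that an idle server still draws p_idle watts; this is why consolidating load onto fewer hosts and switching the rest off (or to a sleep state) saves energy.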
This article focuses on dynamic task scheduling, providing threshold-based approaches that minimize energy consumption in a cloud data center. This work proposes two new scheduling approaches, SchedCT (scheduler based on CPU utilization and processor temperature thresholds) and SchedCRT (scheduler based on CPU utilization, RAM utilization, and processor temperature thresholds), to reduce data center energy consumption. These thresholds bound the CPU utilization, RAM utilization, and processor temperature of each host. The main contributions of this paper are summarized as follows:
- Proposed and evaluated new scheduling policies based on physical resources to reduce and predict the energy consumed by data centers.
- Efficient utilization of physical resources (CPU utilization, RAM capacity, and processor temperature) to adjust over- or underused hosts.
- Use of an adequate VM allocation algorithm that keeps resource utilization below the thresholds, minimizing the number of active PMs and thereby reducing energy consumption.
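The threshold idea behind SchedCT and SchedCRT can be sketched as follows: a host accepts a new VM only while its monitored metrics stay below fixed limits, and among eligible hosts the most loaded one is preferred so that fewer PMs stay active. The threshold values, the `Host` fields, and the tie-breaking rule are illustrative assumptions, not the paper's exact parameters or algorithm.

```python
from dataclasses import dataclass

# Illustrative threshold values (assumptions, not the paper's settings).
CPU_THRESHOLD = 0.80    # max allowed CPU utilization (fraction)
RAM_THRESHOLD = 0.80    # max allowed RAM utilization (fraction)
TEMP_THRESHOLD = 70.0   # max allowed processor temperature (deg C)

@dataclass
class Host:
    name: str
    cpu_util: float  # current CPU utilization in [0, 1]
    ram_util: float  # current RAM utilization in [0, 1]
    temp: float      # current processor temperature in deg C

def eligible(host):
    """SchedCRT-style check: all three metrics must be under threshold."""
    return (host.cpu_util < CPU_THRESHOLD
            and host.ram_util < RAM_THRESHOLD
            and host.temp < TEMP_THRESHOLD)

def place_vm(hosts):
    """Pick the most loaded eligible host, or None if none qualifies.

    Packing VMs onto fewer, busier hosts lets the remaining hosts idle
    or power down, reducing the number of active PMs and total energy.
    """
    candidates = [h for h in hosts if eligible(h)]
    if not candidates:
        return None
    return max(candidates, key=lambda h: h.cpu_util)
```

Dropping the RAM check from `eligible` yields the SchedCT variant, which considers only CPU utilization and processor temperature.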