A Self-Adaptive Prediction Algorithm for Cloud Workloads


Li Mao (Research Institute of Computer Systems, South China University of Technology, Guangzhou, China & Department of Computer Science, Guang Dong Police Officer College, Guangzhou, China), Deyu Qi (Research Institute of Computer Systems, South China University of Technology, Guangzhou, China), Weiwei Lin (School of Computer Science and Engineering, South China University of Technology, Guangzhou, China) and Chaoyue Zhu (School of Computer Science and Engineering, South China University of Technology, Guangzhou, China)
Copyright: © 2015 | Pages: 12
DOI: 10.4018/IJGHPC.2015040105

Abstract

It is difficult to analyze workloads in complex cloud computing environments with a single prediction algorithm, because each algorithm has its own shortcomings. This paper proposes a self-adaptive prediction algorithm that combines the advantages of linear regression (LR) and a BP neural network to predict workloads in clouds. The main idea of the self-adaptive algorithm is to select, at each step, the method that is expected to predict the future workload better. Prediction experiments are conducted with workloads from public cloud servers. The experimental results show that the proposed algorithm achieves relatively high accuracy in workload prediction compared with the BP neural network and LR alone. Furthermore, in order to apply the proposed algorithm in a cloud data center, a dynamic scheduling architecture for cloud resources is designed to improve resource utilization and reduce energy consumption.
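The selection idea sketched in the abstract can be illustrated in a few lines of code. This is a minimal sketch, not the authors' exact algorithm: both candidate predictors are backtested on recent history, and whichever had the lower recent error issues the next forecast. All function names are assumptions, and a simple exponential smoother stands in for the BP neural network for the sake of a self-contained example.

```python
import numpy as np

def self_adaptive_predict(history, window=10):
    """Pick the predictor (LR vs. a stand-in nonlinear smoother) with
    the lower squared error over the last `window` points, then use
    the winner to forecast the next value. Illustrative sketch only."""
    history = np.asarray(history, dtype=float)
    t = np.arange(len(history))

    def lr_forecast(upto):
        # Fit y = a*t + b on history[:upto], then predict point `upto`.
        a, b = np.polyfit(t[:upto], history[:upto], 1)
        return a * upto + b

    def smooth_forecast(upto, alpha=0.5):
        # Stand-in for the BP neural network: exponential smoothing.
        s = history[0]
        for y in history[1:upto]:
            s = alpha * y + (1 - alpha) * s
        return s

    # Backtest both predictors on the last `window` observations.
    start = len(history) - window
    lr_err = sum((lr_forecast(i) - history[i]) ** 2
                 for i in range(start, len(history)))
    sm_err = sum((smooth_forecast(i) - history[i]) ** 2
                 for i in range(start, len(history)))

    # Forecast the next point with whichever method did better recently.
    if lr_err <= sm_err:
        return lr_forecast(len(history))
    return smooth_forecast(len(history))
```

On a purely linear workload trace, linear regression wins the backtest and the forecast simply extrapolates the trend; on noisy, nonlinear traces the smoother (or, in the paper, the BP network) would take over.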
Article Preview

1. Introduction

With the continuous development of cloud computing, resource scheduling in the cloud has become a hot spot in current research (Lin & Qi 2012; Beloglazov, Abawajy & Buyya 2012; Lin, Liang, Wang & Buyya 2014; Lin, Liu, Zhu & Qi 2013). To achieve effective scheduling and management of cloud resources, many researchers employ prediction technologies (Wu, Cao & Li 2013; Zhang, Wu & Lü 2013; Zhou & Cao 2012; Mi, Wang, Yin & Shi 2011; Huang & Yu 2012; Miskhat, Rahman & Amin 2011; Zhang, Chen & Hu 2012; Islam, Keung, Lee & Liu 2012; Prevost, Nagothu, Kelley & Jamshidi 2011). Analyzing existing prediction models, the paper (Wu, Cao & Li 2013) divides them into three types: 1) basic prediction models, 2) feedback-based prediction models, and 3) prediction models of multiple time sequences. The core of the basic prediction model is self-regression (autoregression); the feedback-based model relies solely on feedback information; and the multiple-time-sequence model considers not only the correlation within the same resource but also the cross-correlation between different resources. Feifei Zhang et al. (Zhang, Wu & Lü 2013) pointed out shortcomings of these models, noting that their error correction and application performance should be improved and that there is still considerable room to develop new models. Wenjun Zhou and Jian Cao (Zhou & Cao 2012) proposed a scheduling strategy for cloud resources based on prediction and ant colony optimization that reduces the electricity consumption of data centers. This strategy has two innovations: 1) it predicts the changing load of the data center through a dynamic-trend prediction algorithm, which helps to manage the data center's power; and 2) it improves ant colony optimization, applies it to resource scheduling, and reserves resources through the migration of virtual machines.
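The "basic (self-regression)" model named in this taxonomy can be made concrete with a short sketch. This is a hedged illustration, not any cited paper's implementation: an AR(p) forecaster whose coefficients are fitted by ordinary least squares on the lagged history.

```python
import numpy as np

def ar_forecast(series, order=3):
    """One-step autoregressive forecast: model y_t as a linear
    combination of the previous `order` values, fitted by least
    squares over the whole history. Illustrative sketch only."""
    y = np.asarray(series, dtype=float)
    # Design matrix: row for time t holds [y_{t-1}, ..., y_{t-order}].
    X = np.column_stack(
        [y[order - 1 - k : len(y) - 1 - k] for k in range(order)]
    )
    target = y[order:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    # Forecast the next point from the most recent `order` values.
    return float(y[-1:-order - 1:-1] @ coef)
```

A feedback-based model would instead update such coefficients online from each new prediction error, and a multiple-time-sequence model would add lagged values of other resources (e.g. memory alongside CPU) as extra columns of the design matrix.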
However, the shortcoming of this strategy is that it predicts the load at the next moment only from the current load trend, without using historical data, and it predicts only the short-term future load. Haibo Mi, Huaimin Wang and others (Mi, Wang, Yin & Shi 2011) proposed an on-demand dynamic resource configuration method for web applications, which can efficiently reconfigure the cluster online according to changing resource requirements and determine, in real time, the number of nodes currently running in the cluster and the type of each virtual machine. The advantage of this method is that, by using quadratic exponential smoothing to predict user requests, it effectively avoids the configuration results lagging behind the resource requests. It can also quickly find a reasonable configuration by searching the configuration space in parallel with a genetic algorithm. In (Huang & Yu 2012), the authors proposed a cloud resource prediction method based on double exponential smoothing, which assigns different weights to data from different periods, thereby weighing the influence that data from different times have on the prediction. The double exponential smoothing method also reduces the prediction error well and provides good prediction accuracy. However, this method can predict the future demand trend of resources only over a relatively short period, not over a long one. In (Miskhat, Rahman & Amin 2011), the authors proposed a processor-load prediction method for grids and an efficient extension to cloud resources based on neural networks and linear regression.
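Double exponential smoothing of the kind described in (Huang & Yu 2012) can be sketched as follows. This is an assumed textbook (Brown's) formulation with illustrative parameter names, not the cited paper's exact method: recent observations receive exponentially larger weights, and the two smoothed series yield level and trend estimates for a short-horizon forecast.

```python
def double_exponential_forecast(series, alpha=0.8, horizon=1):
    """Brown's double exponential smoothing forecast.
    alpha near 1 weights recent data heavily; `horizon` is how many
    steps ahead to extrapolate. Illustrative sketch only."""
    s1 = s2 = float(series[0])  # first- and second-order smoothed values
    for y in series[1:]:
        s1 = alpha * y + (1 - alpha) * s1   # smooth the raw series
        s2 = alpha * s1 + (1 - alpha) * s2  # smooth the smoothed series
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + trend * horizon
```

The limitation the text mentions is visible in the formula: the forecast extrapolates a single local trend, so it degrades quickly as `horizon` grows, which is why such methods suit only short-term prediction.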
The authors use both a delayed neural network and linear regression to predict the processor load, and their experimental comparison shows that the delayed-neural-network method achieves higher precision than linear regression and is more suitable for predicting processor load. The traditional optimal-choice deployment mechanism for virtual machines ignores the differences in load requirements across events; this leads to a lack of effective load prediction on the target servers, which easily results in problems such as load imbalance and excessive migration of virtual machines. Beibei Zhang et al. (Zhang, Chen & Hu 2012) therefore designed an improved BP neural network algorithm to predict the load of server nodes and then deploy virtual machines with weights, so that each virtual machine can be placed on an appropriate server. Their experimental results show that this strategy achieves high fitting precision for time-series load prediction, which improves the stability of virtual machine deployment. In paper (Islam, Keung, Lee & Liu 2012), the authors proposed a prediction-based strategy that measures and supplies resources through an error-correcting neural network (ECNN) algorithm and a linear regression analysis algorithm in order to meet future resource demand. They also conducted experiments on the Amazon EC2 cloud to compare the prediction performance of linear regression and the ECNN algorithm with and without a sliding window. John J. Prevost et al. (Prevost, Nagothu, Kelley & Jamshidi 2011) predict the network load in a cloud data center through stochastic and neural network models, mainly using self-regressive linear prediction and neural network prediction to forecast the future load demand.
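The sliding-window setup mentioned for the EC2 experiments can be illustrated briefly. This is a minimal sketch with assumed names, not the cited paper's code: a linear regression is refitted only on the most recent window of observations, so the model tracks the current local trend rather than the whole history.

```python
import numpy as np

def sliding_window_lr(series, window=5):
    """Predict the next value with linear regression fitted only on
    the most recent `window` observations. Illustrative sketch only."""
    recent = np.asarray(series[-window:], dtype=float)
    t = np.arange(window)
    a, b = np.polyfit(t, recent, 1)  # slope and intercept of local trend
    return a * window + b            # extrapolate one step past the window
```

Shrinking the window makes the predictor react faster to workload shifts at the cost of noise sensitivity, which is exactly the trade-off such experiments probe.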
Additionally, the experimental results show that both methods can predict effectively; the linear regression achieves higher accuracy, but its prediction accuracy decreases inverse-linearly as time goes on, which is the disadvantage of this method.
