Two-Stage Adaptive Classification Cloud Workload Prediction Based on Neural Networks

Lei Li (School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China), Yilin Wang (South China University of Technology, Guangzhou, China), Lianwen Jin (School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China), Xin Zhang (South China University of Technology, Guangzhou, China) and Huiping Qin (South China University of Technology, Guangzhou, China)
Copyright: © 2019 |Pages: 23
DOI: 10.4018/IJGHPC.2019040101

Abstract

Workload prediction is important for the automatic scaling of cloud resources, and highly accurate workload prediction can reduce costs and improve resource utilization in the cloud. However, task requests often change randomly and abruptly, making it difficult for a single model to achieve accurate predictions. To improve prediction accuracy, the authors propose a novel two-stage workload prediction model based on artificial neural networks (ANNs), composed of one classification model and two prediction models. Based on the first-order gradient feature, the model adaptively categorizes the workload into two classes. It then predicts the workload with the corresponding prediction neural network according to the classification result. Experimental results demonstrate that the proposed model achieves more accurate workload prediction than competing models.
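The two-stage idea in the abstract can be sketched in a few lines: compute the first-order gradient of a recent workload window, classify the window from that feature, and route it to one of two predictors. The fixed threshold and the two simple predictors below are illustrative stand-ins (the paper learns the split adaptively and uses two neural networks), not the authors' actual method.

```python
import numpy as np

def classify_window(window, threshold=0.1):
    """Stage 1: classify a workload window by its first-order gradient.

    Hypothetical rule: windows whose mean absolute first-order
    difference exceeds `threshold` are 'fluctuating', else 'smooth'.
    """
    grad = np.diff(window)  # first-order gradient feature
    return "fluctuating" if np.mean(np.abs(grad)) > threshold else "smooth"

def predict_next(window, threshold=0.1):
    """Stage 2: route the window to the predictor for its class.

    Simple stand-ins for the paper's two prediction neural networks:
    a short moving average for smooth windows, and a linear
    extrapolation for fluctuating ones.
    """
    if classify_window(window, threshold) == "smooth":
        return float(np.mean(window[-3:]))      # smooth: moving average
    return float(2 * window[-1] - window[-2])   # fluctuating: extrapolate

smooth = np.array([1.0, 1.02, 0.99, 1.01, 1.0])
bursty = np.array([1.0, 1.5, 0.6, 1.8, 0.9])
print(classify_window(smooth))  # smooth
print(classify_window(bursty))  # fluctuating
```

In the paper, each branch would instead be a trained ANN; the point of the two-stage structure is that each predictor only has to fit one regime of the workload.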
1. Introduction

Owing to its flexible resource allocation, cloud computing is being adopted by an increasing number of online services for resource sharing and utilization; it offers better cost effectiveness and prevents low resource utilization. In particular, flexible resource scheduling and allocation not only avoids under- or over-capacity scenarios but also leads to higher revenues for cloud service providers. Furthermore, Service Level Agreements (SLAs) ensure that customers' performance objectives are met (Badger, Grance, Patt-Corner, & Voas, 2011; Al-Dhuraibi, Paraiso, Djarallah, & Merle, 2017). However, workload demands in cloud services must be fulfilled through highly reactive resource allocation, so providing appropriate and accurate service levels in a proactive manner has proven challenging. Accurate workload prediction can address this issue and thereby reduce redundancy costs.

Considering this, considerable research has been conducted toward the prediction of workloads for cloud computing environments; these predictions can be divided into two categories or layers, which are discussed below.

The first layer involves workload prediction for tasks, or hosts, such as CPU or memory usage workload. In particular, such resource usage workload predictions directly forecast the tasks’ or hosts’ resource usage requirements. In general, the information regarding resource usage workload is represented by multi-dimensional data, including information regarding CPU, memory, priority, and run-time, among others.

The second layer involves task request workload prediction, such as for online tasks and data or network traffic data predictions.

In a previous study (Ruben, Vanmechelen, & Broeckhove, 2015), the authors used the Auto-regressive Moving Average (ARMA) model together with Holt–Winters and exponential smoothing techniques to support renewal contract policies and load prediction for cloud computing services. Two other studies (Li, Su, Cheng, Song, Ma, & Wang, 2015; Calheiros, Masoumi, Ranjan, & Buyya, 2015) introduced Online Cost-efficient Scheduling (OCS) based on the Auto-regressive Integrated Moving Average (ARIMA) model, minimizing rental costs by predicting workloads from user requirements. Likewise, machine learning methods (Chen, Kuruoglu, & So, 2015; Gu, Zhu, & Swamy, 2015; Gunturkun, Reilly, Kirubarajan, & DeBruin, 2014; Xu, Yin, Deng, Xiong, & Huang, 2016), especially artificial neural networks (NNs), have been used to address this problem. Islam, Keung, Lee, and Liu (2012) applied neural networks to predict CPU usage workload. Sood and Sandhu (2015) proposed an adaptive model for efficient resource provisioning in mobile clouds that predicts and stores resource usage with back-propagation neural networks. Bala and Chana (2015) focused on designing an intelligent task-failure prediction model using a neural network. Wang, Wang, Che, Li, Huang, and Gao (2015) proposed a price-formation mechanism for cloud computing consisting of a back-propagation neural network (BPNN)-based price prediction algorithm and a price-matching algorithm. Furthermore, a linear regression (LR) method for predicting the CPU usage of virtual machines (VMs) was proposed in a 2017 study (Nguyen, Francesco, & Yla-Jaaski, 2017). Moreover, researchers (Zia, Hassan, & Khan, 2017) proposed multi-models that predict CPU usage workload based on a normality test, which determines whether ARIMA or an artificial neural network is used for prediction.
Similarly, for task request workload prediction, tasks have been divided into categories by priority (Liu, Liu, Shang, Chen, Cheng, & Chen, 2017), with Support Vector Regression (SVR) and LR models then employed to predict resource usage workloads for each category. An advantage of multi-model approaches is that they can fit different types of workload by selecting a suitable prediction model from among several, achieving higher prediction accuracy than a single model.
