Fairly Sharing the Network for Multitier Applications in Clouds

Xiaolin Xu (Huazhong University of Science and Technology, Wuhan, China), Song Wu (Huazhong University of Science and Technology, Wuhan, China), Hai Jin (Key Laboratory of Services Computing Technology and System, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China) and Chuxiong Yan (Huazhong University of Science and Technology, Wuhan, China)
Copyright: © 2015 | Pages: 19
DOI: 10.4018/IJWSR.2015100103

Abstract

A significant trend in cloud computing is the aggregation of applications to share resources. It is therefore necessary to provide fair resources and performance among applications, especially for the network, which current clouds provision in a best-effort manner. Although many studies have worked on provisioning fair bandwidth, bandwidth alone is not sufficient for network fairness. For interactive applications, response time matters more than bandwidth, and users expect fair response time, not just fair bandwidth. In this study, the authors investigate whether the traditional methods of sharing bandwidth can help the fairness of response time. They show that: (1) bandwidth has little relationship to response time, and adjusting bandwidth hardly affects response time in most cases, so the traditional methods contribute little to response-time fairness; and (2) fairness between components differs from fairness between transactions, and many prior studies consider only the former while ignoring the latter, so those methods cannot help much for multitier applications consisting of multiple transactions either. The authors therefore construct a model with two metrics to evaluate the fairness status of network sharing, considering application characteristics in both response time and throughput. Based on the model, they also propose a mechanism to improve the fairness status. The evaluation results show that the mechanism improves the fairness status by 26.5%–52.8% and avoids performance degradation compared with several practical mechanisms.

Introduction

In cloud environments, many applications are aggregated to share resources. This sharing leads to potential resource contention, such as CPU contention (Li et al., 2012; Rao, Wei, Gong, & Xu, 2013), I/O contention (Rao et al., 2013), and network contention (Bourguiba, Haddadou, El Korbi, & Pujolle, 2014; Singh, Shenoy, Natu, Sadaphal, & Vin, 2011). Owing to the best-effort provisioning of cloud networks, network contention may cause significantly unfair sharing of network resources among applications. Some prior studies have made efforts to guarantee the fairness of network bandwidth (Guo, Liu, Zeng, Lui, & Jin, 2013; Popa et al., 2012; Wei, Vasilakos, Zheng, & Xiong, 2010), providing bandwidth fairly to applications. However, considering bandwidth alone is not sufficient to share the network fairly.

Many interactive applications in clouds care about not only bandwidth but also response time, which end-users experience directly. The owners of such applications expect fair response time rather than fair bandwidth. In fact, they may consider it unfair when their applications exhibit poor response time, even if the applications are guaranteed fair bandwidth. Thus, we should guarantee fair response time as well. Naturally, we investigate whether the traditional methods (Guo et al., 2013; Popa et al., 2012; Shieh, Kandula, Greenberg, Kim, & Saha, 2011) can share the network fairly for multitier applications, considering not only bandwidth but also response time.

First, can the methods for sharing bandwidth help the response time? Unfortunately, these works do little for the fairness of response time. Intuitively, they assume that an application with more bandwidth obtains better performance. As shown in Figure 1, assuming application A pays 4 times as much as B, A gets 8 Mbps of the link's bandwidth and B gets 2 Mbps. However, according to queueing theory, a packet of A going through this link takes the same time as a packet of B under a FIFO scheduling policy, even though A gets more bandwidth. In fact, the response time is mainly affected by the forwarding ability, an inherent and unchangeable attribute of switches, and by the link load, the aggregate load transferred over that link. Although the link load can be changed by adjusting bandwidth, and that change may potentially affect response time, we will show that the effect is insignificant or even undesirable.

Figure 1.

Two sample transactions sharing a switch link
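The queueing argument above can be illustrated with a small simulation (a sketch of the intuition, not the paper's model): two flows with Poisson arrivals share one FIFO link, flow A sending four times as many packets as B, mirroring the 8 Mbps vs. 2 Mbps split in Figure 1. Despite the 4:1 bandwidth share, the mean per-packet delay comes out essentially equal for both flows, because every packet waits in the same FIFO queue.

```python
import random

random.seed(42)

LINK_RATE = 10e6 / 8             # link speed in bytes/sec (10 Mbps)
PKT_SIZE = 1500                  # packet size in bytes
SERVICE = PKT_SIZE / LINK_RATE   # transmission time per packet (1.2 ms)

def simulate(rate_a, rate_b, duration=5.0):
    """FIFO link shared by flows A and B with Poisson arrivals.
    Returns the mean per-packet delay (queueing + transmission) per flow."""
    arrivals = []
    for flow, rate in (("A", rate_a), ("B", rate_b)):
        t = 0.0
        while t < duration:
            t += random.expovariate(rate)  # exponential interarrival times
            arrivals.append((t, flow))
    arrivals.sort()                        # one merged FIFO arrival stream

    free_at = 0.0                          # time the link becomes idle
    delays = {"A": [], "B": []}
    for t, flow in arrivals:
        start = max(t, free_at)            # wait if the link is busy
        free_at = start + SERVICE
        delays[flow].append(free_at - t)   # total time in system
    return {f: sum(d) / len(d) for f, d in delays.items()}

# A sends 4x the packets of B (i.e., uses 4x the bandwidth),
# yet both flows see nearly identical mean per-packet delay.
res = simulate(rate_a=400, rate_b=100)     # arrival rates in packets/sec
print(res)
```

The offered load here is about 60% of link capacity; raising or lowering either flow's rate changes the shared queue length, and hence both flows' delays together, which is why adjusting one application's bandwidth share does not differentiate response time under FIFO.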

Second, can methods aimed at components help multitier applications? Unfortunately, traditional methods that focus on fairness among components cannot help multitier applications either. In practice, a multitier application contains multiple transactions, and a cloud hosts many multitier applications, so many transactions of differing importance are mixed together. Considering only the components ignores the requirements of transactions and causes unfair performance for applications.

In this paper, we focus on the fairness of network sharing for multitier applications. We first analyze the potential problems of considering the requirements of response time and transactions. Then we construct a model to describe the fairness of network sharing, considering both response time and throughput. Two metrics in this model evaluate the fairness status of the cloud, indicating the fairness degree and the average performance, respectively. Based on the model, we propose a scheduling mechanism to improve the fairness status and reduce performance degradation. The evaluation results show that our method improves fairness by 26.5%–52.8% compared with several practical mechanisms, while avoiding performance degradation or even improving performance.
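To make the two-metric idea concrete, the sketch below is one purely illustrative way such a fairness status could be computed: a fairness degree via Jain's fairness index over weight-normalized response times, plus an average-performance metric. The function name `fairness_status`, the weighting scheme, and the choice of Jain's index are assumptions for illustration, not the paper's actual formulation.

```python
def jain_index(xs):
    """Jain's fairness index over positive values: 1.0 means perfectly fair."""
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

def fairness_status(resp_times, weights):
    """Hypothetical two-metric fairness status for a set of transactions:
    a fairness degree in (0, 1] and an average-performance value."""
    # Normalize each transaction's response time by its importance weight,
    # so a heavily weighted transaction is held to a tighter target.
    normalized = [rt * w for rt, w in zip(resp_times, weights)]
    # Invert so that larger = better before applying the fairness index.
    perf = [1.0 / x for x in normalized]
    degree = jain_index(perf)                       # fairness degree
    avg_perf = sum(resp_times) / len(resp_times)    # average response time
    return degree, avg_perf

# Three transactions; the second is weighted as half as important.
degree, avg = fairness_status([0.12, 0.30, 0.15], [1.0, 0.5, 1.0])
print(degree, avg)
```

Under this sketch, a scheduler would aim to raise the fairness degree toward 1.0 without letting the average response time grow, which matches the paper's stated goal of improving fairness while avoiding performance degradation.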

In summary, we make the following contributions in this paper:
