Receiver Initiated Deadline Aware Load Balancing Strategy (RDLBS) for Cloud Environment

Raza Abbas Haidri (School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India), Chittaranjan Padmanabh Katti (School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India) and Prem Chandra Saxena (School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India)
Copyright: © 2017 | Pages: 21
DOI: 10.4018/IJAEC.2017070103


The emerging cloud computing technology has attracted the attention of both the commercial and academic spheres. Generally, faster resources cost more than slower ones; therefore, there is a trade-off between deadline and cost. In this paper, the authors propose a receiver initiated deadline aware load balancing strategy (RDLBS) that tries to meet the deadlines of requests and optimizes the rate of revenue. RDLBS balances the load among the virtual machines (VMs) by migrating requests from overloaded VMs to underloaded VMs. Turnaround time is also computed for the performance evaluation. The experiments are conducted using the CloudSim simulator, and the results are compared with existing state-of-the-art algorithms with similar objectives.
1. Introduction

Some decades ago, small and medium enterprises were unable to perform high performance computing (HPC) because of the huge upfront cost of supercomputers. The advent of cloud computing, however, has reduced the cost of HPC. Grid computing was developed for highly computation-intensive scientific applications, but cloud computing goes one step further by providing dynamic resource provisioning and resource sharing through virtualization (Foster et al., 2001). Cloud computing provides cost-efficient server-based computing (Yelick et al., 2011). The model is known as the "pay-as-you-go" model, i.e., customers can rent virtual resources and pay only for what they actually consume (Haidri et al., 2014; Landis et al., 2013; Sosinsky, 2010). Based on the abstraction level of the services (Haidri et al., 2014; Buyya et al., 2008; Badger et al., 2011), cloud delivery models can be classified into three types: 1) Infrastructure as a Service (IaaS), where users consume services in the form of a hardware platform on which they can deploy the VMs that support their applications; 2) Platform as a Service (PaaS), a software platform already installed on an infrastructure for hosting applications, which helps users build their applications (Wickremasinghe et al., 2010); and 3) Software as a Service (SaaS), the last level, where a real application is provided to customers. There are four kinds of cloud deployment model (Sosinsky, 2010; Badger et al., 2011): 1) Public cloud, located on the premises of the cloud provider and open to the general public; 2) Private cloud, devoted to a particular organization; 3) Community cloud, which provides services to organizations having common functions; and 4) Hybrid cloud, offering a combination of private, public and community clouds.

Before exploiting the features of the cloud, however, several challenges need to be resolved (Dillon et al., 2010). These issues include interoperability, legal and compliance concerns, QoS, elasticity, load balancing, security, and data management (Heiser et al., 2008; Wang et al., 2011; Rimal et al., 2009). Load balancing, among the aforementioned challenges, is one of the main concerns in cloud computing. The cloud platform has the advantage that it can be scaled up and down quickly at any moment. This dynamic environment demands a novel load balancing algorithm that achieves customer satisfaction and optimizes the rate of revenue while minimizing turnaround time. The challenging issue behind the above-mentioned goals is how to dispatch incoming requests to VMs in a heterogeneous cloud environment.
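To make the dispatching problem concrete, the sketch below illustrates one common deadline-aware heuristic: assign each incoming request to the VM with the earliest estimated completion time, rejecting it if no VM can meet its deadline. This is an illustrative assumption, not the paper's RDLBS algorithm; the class names and the linear execution-time model (`length / mips`) are hypothetical.

```python
# Deadline-aware dispatch sketch for a heterogeneous set of VMs.
# Assumptions (not from the paper): execution time is modeled as
# request.length / vm.mips, and deadlines are absolute times.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Vm:
    mips: float              # processing capacity (million instructions/s)
    ready_time: float = 0.0  # time at which this VM becomes free

@dataclass
class Request:
    length: float            # size in million instructions
    deadline: float          # absolute deadline (s)

def dispatch(request: Request, vms: list) -> Optional[Vm]:
    """Place the request on the VM with the minimum estimated completion
    time (ECT); return None if even the best VM misses the deadline."""
    best, best_ect = None, float("inf")
    for vm in vms:
        ect = vm.ready_time + request.length / vm.mips
        if ect < best_ect:
            best, best_ect = vm, ect
    if best is None or best_ect > request.deadline:
        return None              # deadline cannot be met anywhere
    best.ready_time = best_ect   # reserve the VM until completion
    return best
```

In a heterogeneous environment the faster (higher-MIPS) VM wins ties on short queues, which is exactly where the deadline/cost trade-off mentioned above arises: the fastest VM is the most likely to meet a tight deadline, but typically the most expensive.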

In this work, a receiver initiated deadline aware load balancing strategy (RDLBS) is proposed to balance the load among the VMs by migrations, aiming to meet the deadlines of requests and optimize the rate of revenue. The proposed strategy addresses customer satisfaction in the form of deadlines met (DM) and the cost of running applications in the form of total gain (TG). A further QoS parameter, turnaround time (TAT), is computed for the performance evaluation. "Receiver initiated" denotes that the load balancing process starts when a VM becomes underloaded. The performance of the proposed strategy is measured by comparing it with peers such as the Conductance, Max-Min, Min-Min, Longest Job on Fastest Resource (LJFR-SJFR), and Round Robin (RR) algorithms using the CloudSim simulator.
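The receiver-initiated idea can be sketched minimally as follows: an underloaded VM pulls a pending request from the most loaded VM. This is not the authors' implementation; the queue-length load metric, the thresholds, and the oldest-first victim choice are all illustrative assumptions.

```python
# Receiver-initiated migration sketch: underloaded VMs pull work.
# Thresholds and the queue-length load metric are assumptions.

UNDERLOAD_THRESHOLD = 2   # queue length below which a VM asks for work
OVERLOAD_THRESHOLD = 5    # queue length above which a VM gives work away

def balance(queues: dict) -> list:
    """For each underloaded VM (the receiver), migrate one request from
    the currently most loaded VM, provided that sender is overloaded.
    Returns a log of (request, from_vm, to_vm) migrations."""
    migrations = []
    for receiver, queue in queues.items():
        if len(queue) >= UNDERLOAD_THRESHOLD:
            continue                      # receiver is not underloaded
        sender = max(queues, key=lambda v: len(queues[v]))
        if sender == receiver or len(queues[sender]) <= OVERLOAD_THRESHOLD:
            continue                      # no overloaded sender available
        request = queues[sender].pop(0)   # take the oldest waiting request
        queue.append(request)
        migrations.append((request, sender, receiver))
    return migrations
```

Because the receiver drives the exchange, migration traffic is generated only when spare capacity actually exists, which is the usual argument for receiver-initiated over sender-initiated schemes under heavy load.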
