Dynamic Data Replication Based on Tasks Scheduling for Cloud Computing Environment

Siham Kouidri, Belabbas Yagoubi
DOI: 10.4018/IJSITA.2017100104

Abstract

Cloud computing provides IT resources (e.g., CPU, memory, network, and storage) based on virtualization concepts and a pay-as-you-go principle. It comprises a pool of interconnected and virtualized computing resources that are managed as one or more unified computing resources. With the growth of computerized scientific workflows, the amount of data they produce is increasing exponentially. Workflow scheduling and data replication are considered major challenges in cloud computing; nevertheless, many researchers focus on scheduling or on data replication separately. In this article, workflow scheduling based on the clustering of data is combined with a dynamic data replication strategy. The aim of the proposed algorithm is to minimize completion time and transfer time, and its performance has been evaluated on several metrics using the CloudSim toolkit.
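
As a rough illustration of that combined idea, the sketch below pairs a data-aware task placement rule with a popularity-triggered replication rule. It is a minimal, hypothetical Python sketch: the node names, the access-count trigger, and the REPLICATION_THRESHOLD value are assumptions for illustration, not the paper's actual algorithm.

    # Hypothetical sketch: schedule each task on the node holding most of
    # its input data, and replicate a dataset to another node once its
    # access count crosses an assumed threshold.
    REPLICATION_THRESHOLD = 3  # assumed popularity threshold

    class Node:
        def __init__(self, name):
            self.name = name
            self.datasets = set()

    def schedule(task_inputs, nodes):
        """Place a task on the node that already stores most of its inputs."""
        return max(nodes, key=lambda n: len(n.datasets & task_inputs))

    def record_access(dataset, counts, nodes):
        """Count an access; replicate the dataset once it becomes popular."""
        counts[dataset] = counts.get(dataset, 0) + 1
        if counts[dataset] >= REPLICATION_THRESHOLD:
            for node in nodes:
                if dataset not in node.datasets:
                    node.datasets.add(dataset)  # simulated copy
                    counts[dataset] = 0         # reset after replicating
                    break

    nodes = [Node("dc1"), Node("dc2")]
    nodes[0].datasets.update({"d1", "d2"})
    nodes[1].datasets.add("d3")
    counts = {}
    best = schedule({"d1", "d3"}, nodes)    # dc1 and dc2 tie; max keeps dc1
    for _ in range(3):
        record_access("d1", counts, nodes)  # third access copies d1 to dc2
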
Article Preview

To cover related literature, this section is divided into two groups: the first discusses scheduling strategies, and the second discusses replication strategies. Task scheduling is considered a critical issue in the cloud computing environment; a comparative study of task scheduling algorithms in the cloud has been done in (Vignesh, Kumar & Jaisankar, 2013):

  • Round Robin: The simplest algorithm, which uses the concept of a time quantum, or slice. Time is divided into multiple slices, each node is given a particular time quantum, and during that quantum the node performs its operations.

  • Preemptive Priority: Job priority is an important issue in scheduling because some jobs should be serviced earlier than others and cannot stay in the system for a long time. A suitable job scheduling algorithm must take the priority of jobs into account.

  • The Shortest Job First (SJF): An SJF algorithm is simply a priority algorithm where the priority is the inverse of the next CPU burst: the longer the CPU burst, the lower the priority, and vice versa. A minimal sketch of these three policies follows this list.
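
To make the three policies concrete, here is a minimal Python sketch under the simplifying assumptions that all jobs arrive at time zero and that burst times and priorities are known in advance (real schedulers estimate them); the function and variable names are illustrative, not from the cited study.

    import heapq
    from collections import deque

    def round_robin(jobs, quantum):
        """jobs: list of (name, burst). Runs each job for one quantum in turn."""
        queue = deque(jobs)
        order = []
        while queue:
            name, remaining = queue.popleft()
            remaining -= quantum
            if remaining <= 0:
                order.append(name)               # finished within this slice
            else:
                queue.append((name, remaining))  # back of the queue
        return order

    def shortest_job_first(jobs):
        """Non-preemptive SJF: always run the job with the smallest burst."""
        heap = [(burst, name) for name, burst in jobs]
        heapq.heapify(heap)
        order = []
        while heap:
            _, name = heapq.heappop(heap)
            order.append(name)
        return order

    def preemptive_priority(jobs):
        """jobs: list of (name, burst, priority), lower number = higher
        priority. With all jobs arriving at t=0, preemption never triggers,
        so this reduces to running jobs in priority order."""
        return [name for name, _, _ in sorted(jobs, key=lambda j: j[2])]

    jobs = [("A", 5), ("B", 2), ("C", 8)]
    round_robin(jobs, quantum=2)  # -> ['B', 'A', 'C']
    shortest_job_first(jobs)      # -> ['B', 'A', 'C']
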

Yuan et al. (Yuan, Yang, Liu, & Chen, 2010) propose a data placement strategy for scientific applications in the cloud based on k-means clustering of a data-dependency matrix. The strategy has two stages: a build-time stage, which groups the existing datasets into k data centers before the workflow runs, and a runtime stage, which dynamically clusters newly generated datasets to the most appropriate data centers.
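A minimal sketch of that two-stage idea is given below. Representing each dataset as a binary vector over the tasks that use it, and using a plain squared distance, are assumptions made here for illustration rather than the paper's exact dependency-matrix construction.

    import random

    def dist(a, b):
        """Squared distance between two usage vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def kmeans(vectors, k, iters=20):
        """Plain k-means; each cluster stands for one data center."""
        centers = random.sample(vectors, k)
        for _ in range(iters):
            labels = [min(range(k), key=lambda c: dist(v, centers[c]))
                      for v in vectors]
            for c in range(k):
                members = [v for v, l in zip(vectors, labels) if l == c]
                if members:
                    centers[c] = [sum(col) / len(members)
                                  for col in zip(*members)]
        labels = [min(range(k), key=lambda c: dist(v, centers[c]))
                  for v in vectors]
        return labels, centers

    def place_new_dataset(vector, centers):
        """Runtime stage: route a new dataset to the closest cluster center."""
        return min(range(len(centers)), key=lambda c: dist(vector, centers[c]))

    # Each row says which of four workflow tasks reads the dataset (1 = used).
    usage = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1],
             [0, 1, 0, 0], [0, 0, 1, 0]]
    labels, centers = kmeans(usage, k=2)               # build-time stage
    new_dc = place_new_dataset([0, 0, 1, 1], centers)  # runtime stage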
