Towards a New Data Replication Management in Cloud Systems


Abdenour Lazeb (Université Oran1, Ahmed Ben Bella, Oran, Algeria), Riad Mokadem (Institut de Recherche en Informatique de Toulouse (IRIT), Paul Sabatier University, Toulouse, France) and Ghalem Belalem (Université Oran1, Ahmed Ben Bella, Oran, Algeria)
DOI: 10.4018/IJSITA.2019040101


Applications produce huge volumes of data that are distributed on remote and heterogeneous sites. This generates problems related to access and sharing data. As a result, managing data in large-scale environments is a real challenge. In this context, large-scale data management systems often use data replication, a well-known technique that treats generated problems by storing multiple copies of data, called replicas, across multiple nodes. Most of the replication strategies in these environments are difficult to adapt to cloud environments. They aim to achieve the best performance of the system without meeting the important objectives of the cloud provider. This article proposes a new dynamic replication strategy. The proposed algorithm significantly improves provider gain without neglecting customer satisfaction.

1. Introduction

Cloud computing is the term used to describe a new class of network-based computing that takes place over the Internet.

Services are offered on demand, always on, anytime and anywhere. In return, tenants are billed for the resources they use through the 'pay as you go' model. The hardware and software services are available to the general public, enterprises, organizations and business markets. But what commitments does the cloud provider you have chosen actually make? How long will it take to restart your solution in case of a problem? Can the provider lose your data? These are classic questions that arise whenever cloud computing is discussed; the answer lies in the SLA established between a cloud provider and its tenants, i.e., consumers (Zhao et al., 2015). The SLA includes the tenant's service level objectives (SLOs), for example availability and performance, which must be met by the provider (Limam et al., 2019).

In order to satisfy the SLA, data replication is a well-known technique that consists in storing multiple copies of data, called replicas, at multiple nodes. In this context, data replication strategies in cloud systems address classic problems such as: (i) which data to replicate? (ii) when to replicate these data? (iii) where to place the replicas? They must also address issues specific to the cloud environment, such as (iv) determining the number of replicas needed so that the objectives of the tenant are satisfied while ensuring a profit for the cloud provider (Mokadem and Hameurlain, 2020).
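These four decisions can be organized as separate policy functions. The following is a minimal illustrative sketch, not the strategy proposed in this article; all names, policies and thresholds are hypothetical:

```python
# Hypothetical sketch of the four classic replication decisions.
# Policies and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    access_count: int   # accesses observed in the last period
    size_gb: float

def should_replicate(item: DataItem, hot_threshold: int = 100) -> bool:
    # (i)/(ii) which data, and when: replicate items that have become "hot"
    return item.access_count >= hot_threshold

def choose_placement(item: DataItem, nodes: list) -> str:
    # (iii) where: pick the node with the most free space (a naive policy)
    candidates = [n for n in nodes if n["free_gb"] >= item.size_gb]
    return max(candidates, key=lambda n: n["free_gb"])["id"]

def replica_count(item: DataItem, per_replica_capacity: int = 50) -> int:
    # (iv) how many: enough replicas to absorb the observed load (ceiling)
    return max(1, -(-item.access_count // per_replica_capacity))

item = DataItem("dataset-A", access_count=120, size_gb=2.0)
nodes = [{"id": "n1", "free_gb": 10.0}, {"id": "n2", "free_gb": 40.0}]
print(should_replicate(item), choose_placement(item, nodes), replica_count(item))
```

A real strategy would, of course, base these decisions on a cost model and on the SLA, as discussed next.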

Some solutions can be brought to this problem:

  • The proposal of a cost model allowing replication only if it is necessary.

  • Effective placement of data replicas (Djebbar & Belalem, 2012).

  • An elastic management of the number of replicas.

  • The proposition of an economic model for the cloud provider, such that data replication remains profitable. It is conditioned by minimizing the penalties paid by the provider, which makes it possible to increase its economic profit (Belalem et al., 2011).
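The cost-model and economic-model ideas above reduce to a single profitability check: add a replica only when the expected SLA penalty avoided exceeds the cost of hosting it. The sketch below is a hypothetical illustration of that check; the figures and function names are not taken from the article:

```python
# Hypothetical profitability check: replicate only when the expected
# SLA penalty avoided exceeds the cost of the extra replica.
def replication_profit(expected_penalty_avoided: float,
                       storage_cost_per_gb: float,
                       replica_size_gb: float,
                       transfer_cost: float) -> float:
    cost = storage_cost_per_gb * replica_size_gb + transfer_cost
    return expected_penalty_avoided - cost

def should_add_replica(expected_penalty_avoided: float,
                       storage_cost_per_gb: float,
                       replica_size_gb: float,
                       transfer_cost: float) -> bool:
    return replication_profit(expected_penalty_avoided,
                              storage_cost_per_gb,
                              replica_size_gb,
                              transfer_cost) > 0

# e.g. avoiding $5 of penalties vs. $0.10/GB for a 20 GB replica plus $1 transfer
print(should_add_replica(5.0, 0.10, 20.0, 1.0))
```

Elastic replica management follows the same logic in reverse: a replica whose profit turns negative becomes a candidate for removal.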

To guarantee fault tolerance, a storage service replicates data among multiple copies. These copies store the same set of information, so if any copy is lost, the information can still be accessed and recovered from the other replicas.
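This fault-tolerance property can be illustrated with a toy example (node names and payload are hypothetical): with k replicas, the data survives the failure of any k-1 nodes.

```python
# Toy illustration: data replicated on three nodes survives two node failures.
replicas = {"n1": "payload", "n2": "payload", "n3": "payload"}

def recover(replicas: dict, failed: set):
    # Read from any surviving replica; None means all replicas were lost.
    for node, data in replicas.items():
        if node not in failed:
            return data
    return None

print(recover(replicas, failed={"n1", "n2"}))
```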

In this paper, we propose an algorithm that combines all these solutions for good replication management. As a result, the main contribution is to improve provider gain over a wide range of cloud and SLA conditions without neglecting customer satisfaction.

This paper is organized as follows: Section 2 discusses related work. Section 3 explains the aspects of our strategy. The positioning of our strategy is presented in Section 4. The last section contains the conclusions and future work.


2. Related Work

Xie et al. (2017) set three threshold parameters for the dependency conditions among datasets, the access frequencies of datasets, and the storage capacity of data centers. Dataset dependency and access frequency are calculated as constraints for each dataset. They use a threshold value on storage space to limit data replication, avoiding overflow problems and guaranteeing full task completion in the corresponding area. They also classify data into three categories, fixed dataset, free-flexible dataset and constrained-flexible dataset, to build a mapping between datasets and each data center. By adopting this approach, they attempt to further reduce data movement and data transfer cost. However, we find their strategy somewhat expensive compared to ours, and it does not take the current state of the system into account at each decision.
