An Efficient Multi-Objective Model for Data Replication in Cloud Computing Environment

K. Sasikumar, B. Vijayakumar
Copyright © 2020 | Pages: 23
DOI: 10.4018/IJEIS.2020010104

Abstract

This article designs a multi-objective function for a replica management system using an oppositional gravitational search algorithm (OGSA), analyzing the factors that influence replication decisions: mean service time, mean file availability, energy consumption, load variance, and mean access latency. OGSA is a hybridization of oppositional-based learning (OBL) and the gravitational search algorithm (GSA); it perturbs existing solutions and adopts better ones according to the objective function. First, a set of files and data nodes is created and a population is generated by assigning each file to a data node at random; the fitness of each member is evaluated as the objective function to be minimized. Second, the population is regenerated with OGSA to produce an optimal or near-optimal population. Experimental results show that the proposed method outperforms other methods for the data replication problem.
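A minimal sketch can make this workflow concrete. The code below is not the authors' implementation: the load-variance cost stands in for the full multi-objective function, and all constants, names, and the simplified GSA update are illustrative assumptions.

```python
import random

# Sketch of the OGSA workflow described above: random file-to-node
# assignments, oppositional-based learning (OBL) at initialization, and a
# simplified gravitational search (GSA) update over the population.

NUM_FILES, NUM_NODES, POP_SIZE, ITERATIONS = 20, 8, 10, 50

def cost(assignment):
    """Hypothetical objective: load variance across data nodes (the paper
    combines this with service time, availability, energy, and latency)."""
    load = [0] * NUM_NODES
    for node in assignment:
        load[node] += 1
    mean = len(assignment) / NUM_NODES
    return sum((l - mean) ** 2 for l in load) / NUM_NODES

def opposite(assignment):
    """OBL: mirror each assignment within the node index range."""
    return [(NUM_NODES - 1) - node for node in assignment]

def obl_init():
    """Draw a random solution and keep the better of it and its opposite."""
    sol = [random.randrange(NUM_NODES) for _ in range(NUM_FILES)]
    return min(sol, opposite(sol), key=cost)

def gsa_step(population, velocities, g):
    """One simplified GSA iteration: fitter agents receive larger masses and
    pull the others toward them; positions are rounded back to node indices."""
    fits = [cost(p) for p in population]
    best, worst = min(fits), max(fits)
    masses = [(worst - f) / (worst - best + 1e-9) for f in fits]
    total = sum(masses) + 1e-9
    masses = [m / total for m in masses]
    snapshot = [list(p) for p in population]  # update all agents simultaneously
    for i, agent in enumerate(population):
        accel = [0.0] * NUM_FILES
        for j, other in enumerate(snapshot):
            if i == j:
                continue
            dist = sum(abs(a - b) for a, b in zip(snapshot[i], other)) + 1e-9
            for d in range(NUM_FILES):
                accel[d] += random.random() * g * masses[j] * (other[d] - snapshot[i][d]) / dist
        for d in range(NUM_FILES):
            velocities[i][d] = random.random() * velocities[i][d] + accel[d]
            agent[d] = min(NUM_NODES - 1, max(0, round(snapshot[i][d] + velocities[i][d])))

population = [obl_init() for _ in range(POP_SIZE)]
velocities = [[0.0] * NUM_FILES for _ in range(POP_SIZE)]
for t in range(ITERATIONS):
    gsa_step(population, velocities, g=1.0 * (1 - t / ITERATIONS))  # decaying G
print(min(map(cost, population)))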

1. Introduction

Clients store data in the cloud chiefly to protect it and to recover it whenever needed; a server failure should not result in data loss. Cloud applications include gaming, voice and video conferencing, online office suites, storage, backup, and social networking, and their performance depends largely on the availability of high-performance communication resources and on network efficiency (Sasikumar & Madiajagan, 2016; Kliazovich et al., 2016). Data replication is a commonly used strategy to improve data availability, and it requires a high-bandwidth data path. The cloud replicates data items and deliberately stores them on multiple servers at different geographic locations; replication is the creation of multiple copies of an existing item (Haider & Nazir, 2016). Several approaches in distributed systems are concerned with improving both reliability and availability. Replication is one such technique in cloud computing: it provides multiple copies of a client's service on different nodes to cut the client's waiting time, reduce bandwidth consumption in the cloud framework, and raise data availability (Nguyen et al., 2016). For instance, data storage systems such as Amazon S3 (Amazon, 2018), the Google File System (Ghemawat et al., 2003), and the Hadoop Distributed File System (Borthakur, 2007) all adopt a three-copy replication strategy by default, i.e., they store three replicas of each data item, for the sake of data reliability. Data replication algorithms fall into two categories: dynamic replication (Gopinath & Sherly, 2017) and static replication (Bhuvaneswari & Ravi, 2018).
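An illustrative contrast between the two categories follows; the function names, thresholds, and replica counts are hypothetical and not drawn from the cited algorithms.

```python
STATIC_REPLICA_COUNT = 3  # the three-copy default used by S3, GFS, and HDFS

def static_replica_count(file_id: str) -> int:
    """Static replication: the replica count is fixed when the file is
    created and never revisited, regardless of how the file is used."""
    return STATIC_REPLICA_COUNT

def dynamic_replica_count(access_count: int, base: int = 3,
                          hot_threshold: int = 1000) -> int:
    """Dynamic replication: grow the replica count as a file becomes
    popular, so hot files gain copies (and availability) at runtime."""
    return base + access_count // hot_threshold

print(static_replica_count("f1"))   # 3, whatever the workload
print(dynamic_replica_count(2500))  # 5, because the file is hot
```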

Replication is used to improve system availability (by redirecting traffic to a replica after a failure), to prevent data loss (by recovering lost data from a copy), and to enhance performance (by spreading load across multiple copies and by giving clients around the globe low-latency access). There are several approaches to replication. Synchronous replication guarantees that all copies are up to date, but can incur high latency on updates; availability may also suffer if synchronously replicated updates cannot complete while some replicas are offline. Asynchronous replication avoids high write latency (in particular, making it suitable for wide-area replication) but allows replicas to be stale; moreover, data may be lost if an update disappears in a failure before it can be propagated (Sann & Soe, 2017).
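The trade-off can be summarized in a short sketch; the Replica class and function names below are hypothetical stand-ins for real storage nodes, not any particular system's API.

```python
import queue

class Replica:
    def __init__(self):
        self.data = {}
    def apply(self, key, value):
        self.data[key] = value

def write_sync(replicas, key, value):
    """Synchronous: return only after every replica has applied the update.
    All copies stay current, but latency is set by the slowest replica, and
    the write blocks entirely if a replica is unreachable."""
    for replica in replicas:
        replica.apply(key, value)

def write_async(primary, log, key, value):
    """Asynchronous: apply locally and enqueue for later propagation.
    Writes are fast, but followers may serve stale data, and the update is
    lost if the primary fails before the log entry is shipped."""
    primary.apply(key, value)
    log.put((key, value))

def propagate(log, followers):
    """Background step that drains the log to the followers."""
    while not log.empty():
        key, value = log.get()
        for follower in followers:
            follower.apply(key, value)

primary, followers = Replica(), [Replica(), Replica()]
log = queue.Queue()
write_async(primary, log, "k", "v")  # returns immediately; followers stale
propagate(log, followers)            # followers now catch up
```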

High availability, high fault tolerance, and highly efficient access to cloud data centers, where failures are ordinary rather than exceptional, are critical issues because of the massive scale of the data maintained. Data replication reduces client waiting time and speeds up data access by increasing data availability: clients are given different copies of the same service, all of them in a consistent state (Yadav et al., 2016).
