Load Balancing to Increase the Consistency of Replicas in Data Grids

Ghalem Belalem, Naima Belayachi, Radjaa Behidji, Belabbes Yagoubi
Copyright: © 2010 | Pages: 16
DOI: 10.4018/jdst.2010100104

Abstract

Data grids are a current answer to the needs of large-scale systems: they provide a set of geographically distributed resources whose goal is to offer substantial parallel computing capacity, ensure effective and rapid data access, improve availability, and tolerate failures. In such systems, however, these advantages are possible only through replication, and the use of this technique raises the problem of maintaining consistency among replicas of the same data set. Guaranteeing the reliability of a replica set requires strong coherence, which in turn penalizes performance. In this paper, the authors propose to study the influence of load balancing on replica quality. To this end, they develop a hybrid consistency management service that combines the pessimistic and optimistic approaches, extended with a load balancing service to improve quality of service. The service is built on a hierarchical model with two levels.
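To make the two-level organization concrete, here is a minimal, illustrative Python sketch of one way a replica hierarchy with a load-balancing hook could be arranged. It assumes requests are routed to the least-loaded replica; the Grid, Cluster, and Replica names are hypothetical and do not reproduce the service described in the paper.

# Illustrative sketch only: a two-level replica hierarchy with a simple
# least-loaded routing policy. Not the authors' actual protocol.

class Replica:
    def __init__(self, name):
        self.name = name
        self.load = 0  # number of requests served so far

    def serve(self, request):
        self.load += 1
        return f"{self.name} handles {request}"

class Cluster:
    """Level 1: a group of replicas at one site."""
    def __init__(self, replicas):
        self.replicas = replicas

    def dispatch(self, request):
        # Load balancing inside the cluster: pick the least-loaded replica.
        return min(self.replicas, key=lambda r: r.load).serve(request)

class Grid:
    """Level 0: the set of clusters forming the data grid."""
    def __init__(self, clusters):
        self.clusters = clusters

    def dispatch(self, request):
        # Route to the cluster whose replicas are least loaded overall.
        lightest = min(self.clusters,
                       key=lambda c: sum(r.load for r in c.replicas))
        return lightest.dispatch(request)

grid = Grid([Cluster([Replica("A1"), Replica("A2")]),
             Cluster([Replica("B1")])])
for i in range(4):
    print(grid.dispatch(f"req-{i}"))  # requests spread across sites and replicas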
Article Preview

Consistency Management Approaches

Consistency is a relation that defines the degree of similarity between copies of a distributed entity. In the ideal case, this relation characterizes copies that behave identically; in real systems, where copies may evolve differently, consistency defines the threshold of dissimilarity allowed between them. A consistency protocol is expected to execute users' operations while keeping the copies mutually consistent according to the behavior defined by a consistency model; it presents an ideal view, as if there were only one user and a single copy of the data in the system. Replica consistency can be managed either synchronously, using so-called pessimistic algorithms, or asynchronously, using optimistic ones (Belalem & Slimani, 2007; Saito & Shapiro, 2005).

The fundamental trade-offs between the pessimistic and optimistic approaches concern scalability and safety. A pessimistic protocol ensures that any change to one replica is atomically propagated to all other replicas, so every replica holds the same data at all times; this makes the approach indispensable for mission-critical and sensitive applications such as distributed banking. The optimistic approach, on the other hand, is employed in settings such as large-scale systems, mobile environments, and weakly coupled systems, where rapid response time matters most. In short, the pessimistic approach favors consistency over availability, while the optimistic approach favors availability over consistency (Belalem & Slimani, 2007; Saito & Shapiro, 2005).
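The contrast between the two propagation styles can be sketched in a few lines of Python. This is a schematic, single-process illustration under the assumption that a pessimistic write blocks until every copy is updated, while an optimistic write returns after updating the local copy and pushes the change in the background; the Replica class and both write functions are hypothetical.

# Schematic contrast of pessimistic vs. optimistic update propagation.
import threading

class Replica:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.lock = threading.Lock()

    def apply(self, value):
        with self.lock:
            self.value = value

def pessimistic_write(replicas, value):
    """Synchronous: every replica is updated before the call returns,
    so readers never observe divergent copies (consistency first)."""
    for r in replicas:
        r.apply(value)  # in a real grid, a blocking remote call

def optimistic_write(local, others, value):
    """Asynchronous: the local replica is updated immediately and the
    change is pushed to the other copies in the background, so copies
    may diverge temporarily (availability first)."""
    local.apply(value)
    for r in others:
        threading.Thread(target=r.apply, args=(value,)).start()

replicas = [Replica(f"site-{i}") for i in range(3)]
pessimistic_write(replicas, "v1")                   # all copies hold "v1" here
optimistic_write(replicas[0], replicas[1:], "v2")   # copies converge to "v2" eventually

The sketch makes the trade-off visible: the pessimistic path pays the latency of reaching every copy on each write, while the optimistic path answers immediately and accepts a window of divergence.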
