Load Balancing to Increase the Consistency of Replicas in Data Grids

Ghalem Belalem, Naima Belayachi, Radjaa Behidji, Belabbes Yagoubi
DOI: 10.4018/978-1-4666-0906-8.ch003


Data grids are a current answer to the needs of large-scale systems: they provide a set of geographically distributed resources with the goals of offering substantial parallel computing capacity, ensuring effective and rapid data access, improving availability, and tolerating failures. In such systems, however, these advantages are obtainable only through replication. Replication in turn raises the problem of maintaining consistency among the replicas of a given data set. Guaranteeing the reliability of a replica set requires strong coherence, which penalizes performance. In this paper, the authors study the influence of load balancing on replica quality. To this end, they develop a hybrid consistency management service that combines the pessimistic and optimistic approaches, extended by a load balancing service to improve quality of service. The service is built on a two-level hierarchical model.
Chapter Preview

Consistency Management Approaches

Consistency is a relation that defines the degree of similarity between copies of a distributed entity. In the ideal case, this relation characterizes copies that behave identically; in real cases, where copies may evolve differently, consistency defines the threshold of dissimilarity permitted between them. A consistency protocol is expected to ensure both the execution of user operations and the mutual consistency of copies according to the behavior defined by a consistency model, presenting an ideal view in which there is a single user and a single copy of the data in the system. Replica consistency management can be achieved either synchronously, using so-called pessimistic algorithms, or asynchronously, using optimistic ones (Belalem & Slimani, 2007; Saito & Shapiro, 2005). The fundamental trade-offs between the pessimistic and optimistic approaches concern scalability and safety. A pessimistic protocol ensures that any change to one replica is atomically propagated to all other replicas; all replicas are therefore guaranteed to hold the same data at all times, which makes this approach indispensable for mission-critical and sensitive applications such as distributed banking. The optimistic approach, by contrast, suits applications (large-scale systems, mobile environments, and weakly coupled systems) that demand fast response times. In short, the pessimistic approach favors consistency over availability, while the optimistic approach favors availability over consistency (Belalem & Slimani, 2007; Saito & Shapiro, 2005).
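The contrast between the two approaches can be sketched in a few lines of code. The sketch below is illustrative only and is not the authors' implementation: the class names (`Replica`, `PessimisticGroup`, `OptimisticGroup`) are assumptions. A pessimistic write locks and updates every replica before returning, so all copies agree at all times; an optimistic write updates one replica immediately and propagates the change to the others in a background thread, trading consistency for availability.

```python
# Illustrative sketch of pessimistic vs. optimistic replica updates.
# All names here are hypothetical, not from the chapter.
import threading


class Replica:
    """A single copy of a data item."""
    def __init__(self):
        self.value = None
        self.lock = threading.Lock()


class PessimisticGroup:
    """Pessimistic (synchronous) consistency: a write returns only after
    every replica has applied it, so all copies always agree."""
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, value):
        # Lock every replica, then update all copies as one atomic step.
        for r in self.replicas:
            r.lock.acquire()
        try:
            for r in self.replicas:
                r.value = value
        finally:
            for r in self.replicas:
                r.lock.release()


class OptimisticGroup:
    """Optimistic (asynchronous) consistency: a write updates one replica
    immediately and propagates lazily, favoring availability."""
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, value):
        self.replicas[0].value = value  # fast local update, returns at once
        t = threading.Thread(target=self._propagate, args=(value,))
        t.start()
        return t  # caller may join() to wait for convergence

    def _propagate(self, value):
        # Background propagation: other replicas are stale until this runs.
        for r in self.replicas[1:]:
            with r.lock:
                r.value = value
```

In the pessimistic case a reader can never observe two replicas with different values; in the optimistic case a reader may see stale copies until propagation completes, which is the dissimilarity threshold that a consistency model bounds.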
