Introduction
Due to scientists' growing demands for very high computing power and storage capacity (Guerfel et al., 2017), data grids appear to be a solution to meet this demand. Indeed, these architectures make it possible to add hardware and software resources, offering virtually infinite storage and computation capacity. However, the design of distributed applications for data grids remains complex.
A Data Grid is a geographically distributed environment that deals with large-scale data-intensive applications (Elkhatib & Edwards, 2015). In a data grid, faults and machine disconnections are common and can lead to data loss. It is therefore necessary to take the dynamic nature of grids into account, since any piece of data may disappear at any time. To meet the needs for scalability and fast access, most Data Grids support data replication at points within the distributed storage architecture. The use of replicas gives multiple users faster data access while conserving bandwidth, since replicas can often be placed strategically at sites close to where users need them.
Good job scheduling (Nedaei, 2018) can reduce the amount of transferred data by placing a job where the needed data are present. The decision of where and when to execute a job is made by considering its requirements and the current status of the Grid's computational, storage and network resources. Conversely, replication offers grid jobs faster access to their required files and hence increases the performance of job execution (Huang et al., 2013).
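The idea of placing a job where its data already reside can be sketched in a few lines. The example below is a minimal, hypothetical illustration (site names, file names and sizes are invented, not taken from the paper): the scheduler picks the site that minimizes the volume of input data that would have to be transferred.

```python
# Hypothetical sketch of data-aware job scheduling; names and sizes
# are illustrative assumptions, not values from the paper.

def schedule_job(required_files, sites):
    """Pick the site that already holds the most of the job's input data,
    so that the least data has to be transferred over the network."""
    def missing_bytes(site):
        # Total size of required files NOT already replicated at the site.
        return sum(size for f, size in required_files.items()
                   if f not in site["replicas"])
    return min(sites, key=missing_bytes)

# A job that needs two input files (sizes in MB).
job_inputs = {"genome.dat": 500, "index.dat": 120}

sites = [
    {"name": "siteA", "replicas": {"genome.dat"}},               # missing 120 MB
    {"name": "siteB", "replicas": set()},                        # missing 620 MB
    {"name": "siteC", "replicas": {"genome.dat", "index.dat"}},  # missing 0 MB
]

best = schedule_job(job_inputs, sites)
print(best["name"])  # siteC: all inputs are local, nothing to transfer
```

Under this simple cost model, siteC wins because both input files are already replicated there; a real scheduler would also weigh queue length and network bandwidth.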
For a long time, data replication and job scheduling have been studied separately, and their combination in Data Grids has only recently received attention from researchers. Job scheduling has its own complications, since it deals with large amounts of input data in the dynamic environment of Grids.
In order to best exploit the available resources in a Data Grid, it seems necessary to design a strategy combining job scheduling and dynamic replica placement. The work presented in this paper is a solution to this problem. It combines the two concepts in a dynamic strategy, called ClusOptimizer, based on MapReduce-driven clustering. MapReduce-driven clustering can optimize the cost of data transfer and task execution while organizing job scheduling.
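As a rough intuition for MapReduce-driven clustering of jobs, one can imagine a map phase that emits a (file, job) pair for every file a job accesses, and a reduce phase that groups jobs sharing a file. This is only a hedged sketch of the general pattern, not the actual ClusOptimizer algorithm presented later in the paper.

```python
# Minimal MapReduce-style grouping of jobs by shared input files.
# Illustrative only; the real ClusOptimizer strategy is more involved.
from collections import defaultdict

def map_phase(jobs):
    # Emit one (file, job_id) pair per input file of each job.
    for job_id, files in jobs.items():
        for f in files:
            yield f, job_id

def reduce_phase(pairs):
    # Group job ids by shared file: jobs in the same cluster can be
    # scheduled near a single replica of that file.
    clusters = defaultdict(set)
    for f, job_id in pairs:
        clusters[f].add(job_id)
    return dict(clusters)

jobs = {"j1": ["genome.dat"],
        "j2": ["genome.dat", "index.dat"],
        "j3": ["index.dat"]}
clusters = reduce_phase(map_phase(jobs))
print(clusters)  # j1 and j2 cluster around genome.dat, j2 and j3 around index.dat
```

Because the map and reduce phases are independent per key, both can be parallelized across grid nodes, which is what makes the clustering step scale.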
The objective is to design and implement an optimal dynamic data replication and job scheduling strategy based on parallel clustering. The OptorSim simulator (Datagrid, 2014) has been chosen because it has proven its usefulness as a grid simulator, both through its current features and through its adaptability to new scheduling and replication strategies.
This paper focuses mainly on highlighting the main challenges of high-performance computing. A thorough study of related technologies and an analysis of the problem are presented. The paper proposes the ClusOptimizer concept, which consists of replica placement and job scheduling; its goal is to optimize the cost of data transfer and task execution. The efficiency of the proposed algorithms is detailed in this article. In addition, recent optimizers are used as a basis of comparison in the experiments, demonstrating the performance of the proposed optimizer.
The distribution of data, the large scale of the grid and the dynamic nature of sites cause problems of remote access and data availability in Bioinformatics. These factors are extremely important in the context of biological grids, where processing time must be very short and user access is frequent.
In order not to exceed the processing-time threshold, the data must be kept available at all times according to the frequency of requests. For this purpose, a set of replica placement strategies must be proposed for this type of architecture, in order to demonstrate the performance of the proposed replica placement strategy for Bioinformatics.
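Keeping data available "according to the frequency of requests" can be illustrated with a simple greedy policy: replicate the most frequently requested files first, until the site's storage budget is exhausted. All names, sizes and the capacity below are invented for illustration; this is an assumption-laden sketch, not the strategy evaluated in the paper.

```python
# Hedged sketch of frequency-driven replica placement at one site.
# File names, sizes (MB) and capacity are illustrative assumptions.

def place_replicas(request_counts, file_sizes, capacity):
    """Return the files to replicate at a site, most requested first,
    subject to the site's storage capacity."""
    placed, used = [], 0
    for f in sorted(request_counts, key=request_counts.get, reverse=True):
        if used + file_sizes[f] <= capacity:
            placed.append(f)
            used += file_sizes[f]
    return placed

requests = {"genome.dat": 120, "index.dat": 45, "raw.dat": 3}
sizes = {"genome.dat": 500, "index.dat": 120, "raw.dat": 900}

chosen = place_replicas(requests, sizes, capacity=700)
print(chosen)  # ['genome.dat', 'index.dat'] - the hottest files that fit
```

A dynamic strategy would re-run such a placement decision as request frequencies evolve, which is exactly why the replica set must adapt over time in a biological grid.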