Benchmarking Grid Applications for Performance and Scalability Predictions

Radu Prodan, Farrukh Nadeem, Thomas Fahringer
Copyright: © 2010 | Pages: 33
DOI: 10.4018/978-1-60566-661-7.ch005

Abstract

Application benchmarks can play a key role in analyzing and predicting the performance and scalability of Grid applications, serve as an evaluation of the fitness of a collection of Grid resources for running a specific application or class of applications (Tsouloupas & Dikaiakos, 2007), and help in implementing performance-aware resource allocation policies for real-time job schedulers. However, application benchmarks have been largely ignored due to diversified types of applications, multi-constrained executions, dynamic Grid behavior, and heavy computational costs. To remedy this, the authors present an approach taken by the ASKALON Grid environment that computes application benchmarks considering variations in the problem size of the application and the machine size of the Grid site. Their system dynamically controls the number of benchmarking experiments for individual applications and manages the execution of these experiments on different Grid sites. They present experimental results of their method for three real-world applications in the Austrian Grid environment.

Introduction

Grid infrastructures provide an opportunity for the scientific and business communities to exploit the power of heterogeneous resources in multiple administrative domains under a single umbrella (Foster & Kesselman, The Grid: Blueprint for a Future Computing Infrastructure, 2004). Proper characterization of Grid resources is of key importance for effective mapping and scheduling of jobs in order to minimize the execution time of complex workflows and utilize the maximum power of these resources.

Benchmarking has been used for many years to characterize a large variety of resources, ranging from CPU architectures to file systems, databases, parallel systems, Internet infrastructures, and middleware (Dikaiakos, 2007). There have always been issues regarding the optimized mapping of jobs to Grid resources on the basis of available benchmarks (Tirado-Ramos, Tsouloupas, Dikaiakos, & Sloot, 2005). Existing Grid benchmarks (or their combinations) do not suffice to measure or predict application performance and scalability, or to give a quantitative comparison of different Grid sites for individual applications while taking into account variations in the problem size. In addition, no integration mechanisms and common units are available for existing benchmarks to make meaningful inferences about the performance and scalability of individual Grid applications on different Grid sites.

Application benchmarking on the Grid can provide a basis for users and Grid middleware services (such as meta-schedulers (Berman, et al., 2005) and resource brokers (Raman, Livny, & Solomon, 1999)) to optimize the mapping of jobs to Grid resources by serving as an evaluation of fitness for comparing different computing resources in the Grid. The performance results obtained from real application benchmarking are much more useful for scheduling these applications on a highly distributed Grid infrastructure than the regular resource information provided by the standard Grid information services (Tirado-Ramos, Tsouloupas, Dikaiakos, & Sloot, 2005; Czajkowski, Fitzgerald, Foster, & Kesselman, 2001). Application benchmarks are also helpful in predicting the performance and scalability of Grid applications, studying the effects of variations in application performance for different problem sizes, and gaining insights into the properties of computing node architectures.

However, the complexity, heterogeneity, and the dynamic nature of Grids raise serious questions about the overall realization and applicability of application benchmarking. Moreover, diversified types of applications, multi-constrained executions, and heavy computational costs make the problem even harder. Above all, mechanizing the whole process of controlling and managing benchmarking experiments and making benchmarks available to users and Grid services in an easy and flexible fashion makes the problem more challenging.

To overcome this situation, we present a three-layered Grid application benchmarking system that produces benchmarks for Grid applications taking into account variations in the problem size and the machine size of the Grid sites. Our system provides the necessary support for conducting controlled and reproducible experiments, for computing performance benchmarks accurately, and for comparing and interpreting benchmarking results in the context of application performance and scalability predictions. It takes as input, in XML format, the specifications of the executables, the set of problem sizes, the pre-execution requirements, and the set of available Grid sites. These XML specifications, along with the available resources, are parsed to generate jobs to be submitted to different Grid sites. The system first completes pre-experiment requirements, such as determining the topological order of activities in a workflow, and then runs the experiments according to the experimental strategy. The benchmarks are computed from experimental results and archived in a repository for later use. Performance and scalability prediction and analysis from the benchmarks are available through a graphical user interface and Web Service Resource Framework (WSRF) (Banks, 2006) service interfaces. We do not require complex integration or analysis of measurements, or new metrics for the interpretation of benchmarking results.
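As a concrete illustration of this experiment-generation step, the following Python sketch expands an XML specification of an executable, problem sizes, and Grid sites into one benchmarking experiment per (problem size, Grid site) pair. It is a minimal sketch under assumed conventions: the XML element names, the example values, and the helper functions are hypothetical and do not reproduce the actual ASKALON interfaces.

```python
# Minimal sketch: expand an XML benchmarking specification into a set of
# experiments over (problem size x Grid site). The XML schema, element names,
# and example values are hypothetical illustrations, not ASKALON's interfaces.
import itertools
import xml.etree.ElementTree as ET

def parse_specification(xml_text):
    """Extract the executable, problem sizes, and Grid sites from the spec."""
    root = ET.fromstring(xml_text)
    executable = root.findtext("executable")
    problem_sizes = [int(e.text) for e in root.findall("problemSizes/size")]
    sites = [e.get("name") for e in root.findall("gridSites/site")]
    return executable, problem_sizes, sites

def generate_jobs(executable, problem_sizes, sites):
    """Yield one benchmarking experiment per (problem size, Grid site) pair."""
    for size, site in itertools.product(problem_sizes, sites):
        yield {"executable": executable, "problem_size": size, "site": site}

if __name__ == "__main__":
    # Hypothetical specification; a real one would be read from a file.
    spec = """
    <benchmarkSpec>
      <executable>exampleApp</executable>
      <problemSizes><size>100</size><size>200</size><size>400</size></problemSizes>
      <gridSites><site name="siteA"/><site name="siteB"/></gridSites>
    </benchmarkSpec>
    """
    for job in generate_jobs(*parse_specification(spec)):
        print("experiment:", job)  # each entry would be submitted to its Grid site
```

Each generated experiment record carries everything needed to run and later archive one measurement, which is what makes the resulting benchmarks comparable across problem sizes and Grid sites.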

Key Terms in this Chapter

Scalability: The ability of a system to handle growing amounts of work without losing processing speed.

Scheduling: The process of finding an appropriate execution resource for each atomic activity of a large application; scheduling is usually employed for parallel applications, bags of tasks, and workflows and is an NP-complete problem for certain objective functions such as execution time.

Grid: A geographically distributed hardware and software infrastructure that integrates high-end computers, networks, databases, and scientific instruments from multiple sources to form a virtual supercomputer on which users can work collaboratively within virtual organizations.

Scientific Workflow: A large-scale loosely coupled application consisting of a set of commodity off-the-shelf software components (also called tasks or activities) interconnected in a directed graph through control flow and data flow dependencies.

Benchmark: A measurement to be used as a reference value for future calculations such as performance predictions.

Experimental Design: The design of all information-gathering exercises where variation is present, whether under the full control of the experimenter or not.

Performance Prediction: Estimation of the execution time of an application for a certain problem size in a certain configuration (e.g. machine size) on the target computer architecture.
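To illustrate how archived benchmarks can drive such a prediction, the sketch below estimates the execution time for an unmeasured problem size on one Grid site by linear interpolation between the two nearest benchmarked problem sizes. The data layout, example values, and function name are hypothetical; this is not the chapter's actual prediction method.

```python
# Sketch: predict execution time by linear interpolation between the two
# nearest benchmarked problem sizes on a Grid site. Layout and values are
# hypothetical illustrations only.
from bisect import bisect_left

def predict_time(benchmarks, problem_size):
    """benchmarks: list of (problem_size, execution_time) pairs, sorted by size."""
    sizes = [s for s, _ in benchmarks]
    times = [t for _, t in benchmarks]
    i = bisect_left(sizes, problem_size)
    if i == 0:
        return times[0]           # below the measured range: use smallest benchmark
    if i == len(sizes):
        return times[-1]          # above the measured range: use largest benchmark
    s0, s1 = sizes[i - 1], sizes[i]
    t0, t1 = times[i - 1], times[i]
    return t0 + (t1 - t0) * (problem_size - s0) / (s1 - s0)

# Example: measured times for problem sizes 100, 200, 400 on one Grid site.
print(predict_time([(100, 12.0), (200, 21.5), (400, 39.0)], 300))  # ~30.25
```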
