
Data Intensive Computing with Clustered Chirp Servers

Copyright © 2012. 15 pages.
DOI: 10.4018/978-1-61520-971-2.ch006

Citation

Thain, D., Albrecht, M., Bui, H., Bui, P., Carmichael, R., Emrich, S., & Flynn, P. (2012). Data Intensive Computing with Clustered Chirp Servers. In T. Kosar (Ed.), Data Intensive Distributed Computing: Challenges and Solutions for Large-scale Information Management (pp. 140-154). Hershey, PA: Information Science Reference. doi:10.4018/978-1-61520-971-2.ch006


Abstract

Over the last few decades, computing performance, memory capacity, and disk storage have all increased by many orders of magnitude. However, I/O performance has not increased at nearly the same pace: a disk arm movement is still measured in milliseconds, and disk I/O throughput is still measured in megabytes per second. A computer system that can store and process petabytes of data must therefore have large numbers of disks, along with the I/O paths and memory capacity needed to support the desired data rate. A cost-efficient way to accomplish this is to cluster large numbers of commodity machines together. This chapter presents Chirp as a building block for clustered data intensive scientific computing. Chirp was originally designed as a lightweight file server for grid computing and was used as a "personal" file server. The authors explore building systems with very high I/O capacity from commodity storage devices by tying together multiple Chirp servers. Several real-life applications, such as the GRAND Data Analysis Grid, the Biometrics Research Grid, and the Biocompute facility, use Chirp as their fundamental building block but provide different services and interfaces appropriate to their target communities.
Chapter Preview

Introduction

It is not enough to have raw hardware for data intensive computing. System software is also needed to manage the system and make it accessible to users. If data is scattered all over the disks of a cluster, then it must be tracked, replicated, and periodically validated to ensure survival in the face of failures. Programs that execute on the system must be able to locate the relevant data and preferably execute on the same node where it is located. Users must have a reasonably simple interface for direct data access as well as for computation on the data itself.
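
To make this bookkeeping concrete, the following Python sketch illustrates the kind of system software described above: a catalog that tracks replicas, validates them against stored checksums, and prefers to run work on a node that already holds the data. It is an illustrative sketch only, not code from any of the systems in this chapter; the class and method names are hypothetical, and replicas are represented by locally visible paths for simplicity.

import hashlib
import os

class ReplicaCatalog:
    def __init__(self):
        # logical file name -> list of (node, path, expected checksum)
        self.replicas = {}

    def add_replica(self, name, node, path):
        # Record a new replica along with a checksum for later validation.
        digest = self._checksum(path)
        self.replicas.setdefault(name, []).append((node, path, digest))

    def validate(self, name):
        # Return only the replicas whose on-disk checksum still matches the catalog.
        good = []
        for node, path, digest in self.replicas.get(name, []):
            if os.path.exists(path) and self._checksum(path) == digest:
                good.append((node, path, digest))
        return good

    def pick_node(self, name, candidate_nodes):
        # Prefer a node that already holds a valid replica of the file.
        local = {node for node, _, _ in self.validate(name)}
        for node in candidate_nodes:
            if node in local:
                return node
        return candidate_nodes[0]  # fall back to any node; the data must then be fetched

    @staticmethod
    def _checksum(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()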

Over the last five years, we have gained experience in designing, building, and operating data intensive clusters at the University of Notre Dame. Working closely with domain experts in physics, biometrics, and bioinformatics, we have created several novel systems that collect data captured by digital instruments, make it easy to search and access, and provide high level facilities for processing the data in ways that were not previously possible. This chapter outlines our experience with each of these repositories.

Chirp (Thain, Moretti, & Hemmes, 2009) is our building block for clustered data intensive scientific computing. Chirp was originally designed as a lightweight file server for grid computing. It was first used as a “personal” file server that could be easily deployed on a user’s home machine or temporarily on a computing grid to allow a remotely executing application to access its data. However, we quickly discovered that the properties that made Chirp suitable for personal use – rapid deployment, flexible security, and resource management – also made it very effective for building storage clusters. By tying together multiple Chirp servers, we could easily build a system with very high I/O capacity using commodity storage devices.
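
The idea of tying servers together can be sketched in a few lines of Python: files are mapped deterministically onto a pool of Chirp servers, so that reads and writes spread across many independent disks. This is a sketch under stated assumptions, not the chapter's code; the server names are made up, and the upload() helper is a placeholder for an actual copy with a Chirp client tool or access through Parrot.

import hashlib

# Hypothetical Chirp servers (9094 is Chirp's default port).
SERVERS = ["node01.example.edu:9094",
           "node02.example.edu:9094",
           "node03.example.edu:9094"]

def server_for(logical_name):
    # Deterministically map a logical file name to one of the servers.
    digest = hashlib.md5(logical_name.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

def upload(logical_name, local_path):
    target = server_for(logical_name)
    # Placeholder: the file would be copied to the chosen server here,
    # e.g. with a Chirp command-line client. Only placement is shown.
    print(f"would store {local_path} as {logical_name} on {target}")

if __name__ == "__main__":
    for f in ["run001.dat", "run002.dat", "run003.dat"]:
        upload(f, "/tmp/" + f)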

Chirp is currently deployed on a testbed of over 300 commodity storage devices at the University of Notre Dame, and is in active use at other research institutions around the world. Using this testbed, we have constructed a number of scalable data storage and analysis systems. Each uses Chirp as its fundamental building block, but provides different services and interfaces appropriate to the target community:

  • The GRAND Data Analysis Grid provides a scalable archive for data produced by the GRAND cosmic ray experiment at Notre Dame. It provides a conventional filesystem interface for direct access, a shell-like capability for parallel processing, and a web portal for high level data exploration.

  • The Biometrics Research Grid archives all of the photographic and video data collected in the lab by the Computer Vision Research Lab at Notre Dame. It provides a combination database-filesystem interface for batch processing, several abstractions for high level experimental work, and a web portal for data exploration.

  • The Biocompute facility provides a web interface to large-scale parallel bioinformatics applications. Large input databases are obtained from national repositories and local experimental facilities and replicated across opportunistic storage devices. Users may run large queries with standard tools; the queries are decomposed and executed across the storage cluster (a minimal sketch of this decomposition follows the list).
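
As a rough illustration of the decomposition mentioned in the Biocompute item, the Python sketch below splits a large query into chunks and pairs each chunk with a storage node holding a replica of the reference database. It is hypothetical, not the Biocompute implementation; the node names are placeholders and the actual dispatch and result merging are omitted.

# Hypothetical replica map: which nodes hold a copy of the reference database.
DB_REPLICAS = ["storage03", "storage07", "storage11"]

def split_query(sequences, chunk_size):
    # Break a list of query sequences into fixed-size chunks.
    for i in range(0, len(sequences), chunk_size):
        yield sequences[i:i + chunk_size]

def plan_jobs(sequences, chunk_size=100):
    # Assign each query chunk, round-robin, to a node holding the database.
    jobs = []
    for i, chunk in enumerate(split_query(sequences, chunk_size)):
        node = DB_REPLICAS[i % len(DB_REPLICAS)]
        jobs.append({"node": node, "queries": chunk})
    return jobs

if __name__ == "__main__":
    queries = [f"seq{i:04d}" for i in range(350)]
    for job in plan_jobs(queries):
        print(job["node"], len(job["queries"]), "queries")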

In this chapter, we will explain the architecture of each of these systems, along with our experience in constructing, deploying, and using each one. We begin by describing the fundamental building block, the Chirp file server.


The Chirp File Server

Chirp is a practical global filesystem designed for cluster and grid computing. An overview of the main components of the filesystem is shown in Figure 1. The core component of the system is the Chirp server, a user-level process that runs as an unprivileged user and exports an existing local Unix filesystem. That is, the files exported by a Chirp server are stored on the underlying filesystem as regular files and directories. Although this limits the capacity and performance of the Chirp server to that of the underlying kernel-level filesystem, it has the advantage of allowing existing data to be exported without first importing or moving it into the Chirp storage namespace.
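
The essential idea of a user-level server exporting regular Unix files can be illustrated with a short Python sketch. This is a conceptual toy only; it does not implement the Chirp protocol. It runs as an ordinary user, exports an assumed local directory, and answers two made-up text requests, "stat <name>" and "get <name>", over TCP.

import os
import socketserver

EXPORT_ROOT = "/tmp/export"   # existing local directory being exported (assumption)

class ToyFileHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().decode().strip()
        try:
            op, name = line.split(" ", 1)
        except ValueError:
            self.wfile.write(b"error malformed request\n")
            return
        # Confine all accesses to the exported directory.
        path = os.path.realpath(os.path.join(EXPORT_ROOT, name))
        if not path.startswith(os.path.realpath(EXPORT_ROOT) + os.sep):
            self.wfile.write(b"error outside export root\n")
        elif op == "stat" and os.path.exists(path):
            self.wfile.write(f"ok size {os.path.getsize(path)}\n".encode())
        elif op == "get" and os.path.isfile(path):
            with open(path, "rb") as f:
                self.wfile.write(b"ok\n" + f.read())
        else:
            self.wfile.write(b"error no such file or operation\n")

if __name__ == "__main__":
    # No special privileges are needed; the server simply reads regular files.
    with socketserver.TCPServer(("0.0.0.0", 9095), ToyFileHandler) as srv:
        srv.serve_forever()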

Figure 1. Overview of the Chirp Filesystem


Complete Chapter List

Table of Contents

Preface (Tevfik Kosar)
Chapter 1: Data-Aware Distributed Computing (Esma Yildirim, Mehmet Balman, Tevfik Kosar)
Chapter 2: Towards Data Intensive Many-Task Computing (Ioan Raicu, Ian Foster, Yong Zhao, Alex Szalay, Philip Little, Christopher M. Moretti, Amitabh Chaudhary, Douglas Thain)
Chapter 3: Micro-Services: A Service-Oriented Paradigm for Scalable, Distributed Data Management (Arcot Rajasekar, Mike Wan, Reagan Moore, Wayne Schroeder)
Chapter 4: Distributed Storage Systems for Data Intensive Computing (Sudharshan S. Vazhkudai, Ali R. Butt, Xiaosong Ma)
Chapter 5: Metadata Management in PetaShare Distributed Storage Network (Ismail Akturk, Xinqi Wang, Tevfik Kosar)
Chapter 6: Data Intensive Computing with Clustered Chirp Servers (Douglas Thain, Michael Albrecht, Hoang Bui, Peter Bui, Rory Carmichael, Scott Emrich, Patrick Flynn)
Chapter 7: A Survey of Scheduling and Management Techniques for Data-Intensive Application Workflows (Suraj Pandey, Rajkumar Buyya)
Chapter 8: Data Management in Scientific Workflows (Ewa Deelman, Ann Chervenak)
Chapter 9: Replica Management in Data Intensive Distributed Science Applications (Ann Chervenak, Robert Schuler)
Chapter 10: Data Intensive Computing for Bioinformatics (Judy Qiu, Jaliya Ekanayake, Thilina Gunarathne, Jong Youl Choi, Seung-Hee Bae, Yang Ruan, Saliya Ekanayake, Stephen Wu, Scott Beason, Geoffrey Fox, Mina Rho, Haixu Tang)
Chapter 11: Visualization of Large-Scale Distributed Data (Jason Leigh, Andrew Johnson, Luc Renambot, Venkatram Vishwanath, Tom Peterka, Nicholas Schwarz)
Chapter 12: On-Demand Visualization on Scalable Shared Infrastructure (Huadong Liu, Jinzhu Gao, Jian Huang, Micah Beck, Terry Moore)