Cloud, Grid and High Performance Computing: Emerging Applications
Book Citation Index

Emmanuel Udoh (Sullivan University, USA)
Release Date: June 2011 | Copyright: © 2011 | Pages: 412
ISBN13: 9781609606039 | ISBN10: 1609606035 | EISBN13: 9781609606046 | DOI: 10.4018/978-1-60960-603-9


Continuing to stretch the boundaries of computing and the types of problems computers can solve, high performance, cloud, and grid computing have emerged to address increasingly advanced issues by combining resources.

Cloud, Grid and High Performance Computing: Emerging Applications offers new and established perspectives on architectures, services and the resulting impact of emerging computing technologies. Intended for professionals and researchers, this publication furthers investigation of practical and theoretical issues in the related fields of grid, cloud, and high performance computing.

Topics Covered

The many academic areas covered in this publication include, but are not limited to:

  • Computational grids
  • Grid-Based Medical Applications
  • Grid-Enabling Applications
  • Mobile Grids
  • Parameter Sweep Applications
  • Privacy-Enhancing Data Access
  • Secure Data Mining
  • Supercomputers in Grids
  • Wireless Grids

Cloud computing has emerged as the natural successor to the different strands of distributed systems: concurrent, parallel, distributed, and Grid computing. Like a killer application, cloud computing is leading governments and the enterprise world to embrace distributed systems with renewed interest. In evolutionary terms, clouds herald the third wave of information technology, in which virtualized resources (platform, infrastructure, software) are provided as services over the Internet. This economic dimension of cloud computing, whereby users are charged based on their usage of computational resources and storage, is driving its current adoption and creating opportunities for new service providers. As can be gleaned from press releases, the US government has registered strong interest in the overall development of cloud technology for the betterment of the economy.

The transformation enabled by cloud computing follows the utility pricing model (a subscription/metered approach), in which services are commoditized as practiced in the electricity, water, telephony, and gas industries. This approach follows a global vision in which users plug their computing devices into the Internet and tap into as much processing power as needed. Essentially, a customer (individual or organization) gets computing power and storage not from his or her own computer, but over the Internet on demand.
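The metered approach described above can be sketched in a few lines. This is a minimal illustration only: the function name, resource categories, and rates are hypothetical, not drawn from any real provider's price list.

```python
# Minimal sketch of the utility (metered) pricing model: charge purely by
# consumption, like an electricity or water meter. All rates are made up.

def metered_bill(cpu_hours, storage_gb_months,
                 cpu_rate=0.10, storage_rate=0.05):
    """Total charge for the billing period, based only on what was consumed."""
    return round(cpu_hours * cpu_rate + storage_gb_months * storage_rate, 2)

# A customer who used 120 CPU-hours and stored 50 GB for a month:
print(metered_bill(120, 50))  # 14.5
```

The key property is that an idle customer pays nothing, which is what distinguishes utility pricing from owning (and amortizing) the hardware.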

Cloud technology comes in different flavors: public, private, and hybrid clouds. Public clouds are provided remotely to users from third-party controlled data centers, whereas private clouds are essentially virtualization and service-oriented architecture hosted in traditional settings by corporations. The economies of scale of large data centers (run by vendors like Google) obviously give public clouds an economic edge over private clouds. However, security issues are a major source of concern about public clouds, as organizations will not distribute resources arbitrarily on the Internet, especially their prized databases, without a measure of certainty or safety assurance. In this vein, private clouds will persist until public clouds mature and garner corporate trust.

The embrace of cloud computing is affecting the adoption of Grid technology. The perceived usefulness of Grid computing is not in question, but other factors weigh heavily against its adoption, such as complexity and maintenance as well as competition from clouds. However, the Grid might not be totally relegated to the background, as it could complement research in the development of cloud middleware (Udoh, 2010). In that sense, this book foresees other distributed systems not necessarily standing alone as entities as before, but largely subordinate, providing research input to support and complement the increasingly appealing cloud technology.

The new advances in cloud computing will greatly impact IT services, resulting in improved computational and storage resources as well as service delivery. To keep educators, students, researchers, and professionals abreast of advances in cloud, Grid, and high performance computing, this book, Cloud, Grid and High Performance Computing: Emerging Applications, provides coverage of topical issues in the discipline. It sheds light on concepts, protocols, applications, methods, and tools in this emerging and disruptive technology. The book is organized in four distinct sections, covering wide-ranging topics: (1) Introduction, (2) Scheduling, (3) Security, and (4) Applications.

Section I, Introduction, provides an overview of supercomputing and the porting of applications to Grid and cloud environments. Cloud, Grid, and high performance computing depend firmly on the information and communication infrastructure. The different types of cloud computing (software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS)) and their data centers exploit commodity servers and supercomputers to serve the current needs of on-demand computing. The chapter Supercomputers in Grids, by Michael Resch and Edgar Gabriel, focuses on the integration and limitations of supercomputers in Grid and distributed environments. It emphasizes the understanding and interaction of supercomputers as well as their economic potential, as demonstrated in a public-private partnership project. Indeed, with the emergence of cloud computing, the need for supercomputers in data centers cannot be overstated. In a similar vein, Porting HPC Applications to Grids and Clouds, by Wolfgang Gentzsch, guides users through the important stages of porting applications to Grids and clouds, as well as the attendant challenges and solutions. Porting and running scientific grand challenge applications on the DEISA Grid demonstrated this approach. The chapter also gives an overview of future prospects for building sustainable Grid and cloud applications. In another chapter, Grid-Enabling Applications with JGRIM, researchers Cristian Mateos and Alejandro Zunino recognize the difficulties in building Grid applications. To simplify the development of Grid applications, the researchers developed JGRIM, which easily Gridifies Java applications by separating functional and Grid concerns in the application code. JGRIM simplifies the process of porting applications to the Grid and is competitive with similar tools in the market.

Section II focuses on Scheduling, a central component in the implementation of Grid and cloud technology. Efficient scheduling is a complex and attractive research area, as priorities and load balancing have to be managed. Sometimes, fitting jobs to a single site may not be feasible in Grid and cloud environments, requiring the scheduler to improve the allocation of parallel jobs for efficiency. In Moldable Job Allocation for Handling Resource Fragmentation in Computational Grid, Huang, Shih, and Chung exploit the moldable property of parallel jobs in formulating adaptive processor allocation policies for job scheduling in Grid environments. In a series of simulations, the authors demonstrate how the proposed policies significantly improve scheduling performance in a heterogeneous computational Grid. In another chapter, Speculative Scheduling of Parameter Sweep Applications Using Job Behavior Descriptions, Ulbert, Lorincz, Kozsik, and Horvath demonstrate how to estimate job completion times, which can ease decisions in job scheduling, data migration, and replication. The authors discuss three approaches to using complex job descriptions for single and multiple jobs. The new scheduling algorithms are more precise in estimating job completion times.
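The moldable property mentioned above means a parallel job can run, with some slowdown, on fewer processors than it prefers. The sketch below illustrates that idea in its simplest form; it is not the authors' policy, and the function and parameter names are invented for illustration.

```python
# Illustrative sketch of moldable job allocation under resource fragmentation:
# if no site has the job's preferred processor count free, shrink ("mold")
# the job onto the site with the most free processors, provided the job's
# minimum degree of parallelism is still met.

def allocate_moldable(job_pref, job_min, free_per_site):
    """Return (site_index, processors_granted), or None if no site qualifies."""
    # First try to satisfy the preferred size on some site.
    for i, free in enumerate(free_per_site):
        if free >= job_pref:
            return i, job_pref
    # Otherwise mold the job down onto the least-fragmented site.
    best = max(range(len(free_per_site)), key=lambda i: free_per_site[i])
    if free_per_site[best] >= job_min:
        return best, free_per_site[best]
    return None  # fragmentation: no site meets even the minimum size

print(allocate_moldable(job_pref=16, job_min=4, free_per_site=[8, 6, 12]))  # (2, 12)
```

Without molding, the 16-processor job above would have to wait even though 26 processors sit idle across the Grid; molding trades per-job speed for overall utilization.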

Furthermore, some applications with stringent security requirements pose major challenges in computational Grid and cloud environments. To address these requirements, in A Security Prioritized Computational Grid Scheduling Model: An Analysis, Rekha Kashyap and Deo Vidyarthi propose a security-aware computational scheduling model that modifies an existing Grid scheduling algorithm. The proposed Security Prioritized MinMin shows improved performance in terms of makespan and system utilization. Taking a completely different direction in scheduling, Zahid Raza and Deo Vidyarthi, in the chapter A Replica Based Co-Scheduler (RBS) for Fault Tolerant Computational Grid, develop a biological approach that incorporates a genetic algorithm (GA). This natural selection and evolution method optimizes scheduling in a computational Grid by minimizing turnaround time. The developed model, which compares favorably with existing models, was used to simulate and evaluate clusters to obtain the one with minimum turnaround time for job scheduling. As cloud environments expand into the corporate world, improvements in GA methods could find use in related search problems.
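For readers unfamiliar with the MinMin baseline that the security-prioritized variant builds on, a plain MinMin scheduler can be sketched as follows. The security extension itself is not reproduced here, and the expected-time matrix is invented for illustration.

```python
# Plain Min-Min scheduling: repeatedly pick the (task, machine) pair with the
# smallest completion time, assign it, and update that machine's ready time.
# ect[t][m] = expected execution time of task t on machine m (made-up values).

def min_min(ect):
    n_tasks, n_machines = len(ect), len(ect[0])
    ready = [0.0] * n_machines            # time at which each machine is free
    schedule, unassigned = [], set(range(n_tasks))
    while unassigned:
        t, m, finish = min(
            ((t, m, ready[m] + ect[t][m])
             for t in unassigned for m in range(n_machines)),
            key=lambda x: x[2])           # globally earliest completion
        ready[m] = finish
        schedule.append((t, m))
        unassigned.remove(t)
    return schedule, max(ready)           # assignments and resulting makespan

sched, makespan = min_min([[3, 5], [4, 2], [6, 6]])
print(makespan)  # 8.0
```

A security-prioritized variant would reorder or filter the candidate pairs by each machine's security level before taking the minimum, which is the kind of modification the chapter analyzes.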

Section III addresses Security, one of the major hurdles cloud technology must overcome before any widespread adoption by organizations. Cloud vendors must pass the transparency test and risk assessment in information security and recovery. Falling short of these requirements might leave cloud computing frozen in private clouds. Preserving user privacy and managing customer information, especially personally identifiable information, are central issues in the management of IT services. Wolfgang Hommel, in the chapter A Policy-Based Security Framework for Privacy-Enhancing Data Access and Usage Control, discusses how recent advances in privacy-enhancing technologies and federated identity management can be incorporated into Grid environments. The chapter demonstrates how existing policy-based privacy management architectures can be extended with Grid-specific functionality and integrated into existing infrastructures, as demonstrated in an XACML-based privacy management system.

In Adaptive Control of Redundant Task Execution for Dependable Volunteer Computing, Wang, Takizawa, and Kobayashi examine the security features that could enable Grid systems to exploit the massive computing power of volunteer computing systems. The authors propose the Cell processor as a platform that can use hardware security features. To test the performance of such a processor, a secure, parallelized K-Means clustering algorithm for the Cell was evaluated on a secure system simulator. The findings point to possible optimizations for secure data mining in Grid environments.
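To make the workload concrete, here is a minimal, non-secure, single-threaded K-Means sketch in one dimension. It only illustrates the clustering kernel that the chapter parallelizes and protects; the Cell-specific parallelization and hardware security features are not modeled.

```python
# Minimal 1-D K-Means: alternate between assigning each point to its nearest
# centroid and moving each centroid to the mean of its assigned points.

def k_means(points, centroids, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[i].append(p)
        # Update step: move each centroid to its cluster's mean
        # (an empty cluster keeps its previous centroid).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

print(k_means([1.0, 2.0, 10.0, 11.0], [0.0, 5.0]))  # [1.5, 10.5]
```

The assignment step is embarrassingly parallel across points, which is why K-Means is a natural candidate for the Cell's synergistic processing elements.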

To further provide security in Grid and cloud environments, Shreyas Cholia and Jefferson Porter discuss how to close the loopholes in the provisioning of resources and services in Publication and Protection of Sensitive Site Information in a Grid Infrastructure. The authors analyze the various vectors of information published from sites to Grid infrastructures, especially the Open Science Grid, including resource selection, monitoring, accounting, troubleshooting, logging, and site verification data. Best practices and recommendations are offered for protecting sensitive data that could be published in Grid infrastructures.

Authentication mechanisms are common security features in cloud and Grid environments, where programs interoperate across domain boundaries. Public key infrastructures (PKIs) provide the means to securely grant access to systems in distributed environments, but as PKIs grow, systems become overtaxed in discovering available resources, especially when the certification authority is foreign to the prevailing environment. Massimiliano Pala proposes, in Federated PKI Authentication in Computing Grids: Past, Present, and Future, a new authentication model that incorporates the PKI resource query protocol into the Grid security infrastructure and will also find utility in cloud environments. Mobile Grid systems and their security are a major source of concern, due to their distributed and open nature. Rosado, Fernandez-Medina, Lopez, and Piatini present a case study of the application of a secure methodology to a real mobile system in Identifying Secure Mobile Grid Use Cases.

Furthermore, Noordende, Olabarriaga, Koot, and Laat develop a trusted data storage infrastructure for Grid-based medical applications. In Trusted Data Management for Grid-Based Medical Applications, taking cognizance of privacy and security aspects, they redesign the implementation of common Grid middleware components, which could influence the implementation of cloud applications as well.

Section IV covers Applications, which are increasingly deployed in Grid and cloud environments. The architecture of Grid and cloud applications differs from conventional application models and thus requires a fundamental shift in implementation approaches. Cloud applications are even more distinctive, as they eliminate installation, maintenance, deployment, management, and support; such cloud applications are considered Software as a Service (SaaS) applications. Grid applications are forerunners to clouds and are still common in scientific computing. A biological application is introduced by Heinz Stockinger and co-workers in a chapter titled Large-Scale Co-Phylogenetic Analysis on the Grid. Phylogenetic data analysis is known to be compute-intensive and suitable for high performance computing. The authors improve upon the existing sequential and parallel AxParafit program, producing an efficient tool that facilitates large-scale data analysis. A free client tool is available for co-phylogenetic analysis.

In the chapter Persistence and Communication State Transfer in an Asynchronous Pipe Mechanism, by Philip Chan and David Abramson, the researchers describe a distributed algorithm for handling dynamic resource availability in an asynchronous pipe mechanism that couples workflow components. Here, fault-tolerant communication is made possible by persistence through adaptive caching of pipe segments while still providing direct data streaming. In another chapter, Self-Configuration and Administration of Wireless Grids, Ashish Agarwal describes the peculiarities of wireless Grids, such as the limited power of mobile devices, limited bandwidth, standards and protocols, quality of service, and the increasingly dynamic nature of the interactions involved. To address these peculiarities, the researcher proposes a Grid topology and naming service that self-configures and self-administers various possible wireless Grid layouts.

In the chapters Distributed Dynamic Load Balancing in P2P Grid Systems, by Yu, Huang, and Lai, and An Ontology-Based P2P Network for Semantic Search, by Gu, Zhang, and Pung, the researchers explore the potentials of and obstacles confronting P2P Grids. The first chapter describes the effective utilization of P2P Grids for efficient job scheduling by examining a P2P communication model; the model aids job migration across heterogeneous systems and improves the usage of distributed computing resources. Gu, Zhang, and Pung, on the other hand, dwell on facilitating efficient search for data in distributed systems using an ontology-based peer-to-peer network. Here, the researchers group data with the same semantics into a one-dimensional semantic ring space in the upper-tier network; in the lower-tier network, peers in each semantic cluster are organized as a Chord identifier space. The authors demonstrate the effectiveness of the proposed scheme through simulation experiments.
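The Chord identifier space mentioned above can be illustrated with a toy ring: nodes and keys are hashed onto the same circular identifier space, and each key is stored at its successor, the first node clockwise from it. This is only a sketch of the lookup rule; real Chord adds finger tables and dynamic membership, which are omitted here, and the peer and key names are invented.

```python
# Toy Chord-style identifier ring: hash nodes and keys into a small space,
# then route each key to its successor (first node at or after its id).

import hashlib

M = 2 ** 8  # an 8-bit identifier space, small enough to inspect by hand

def ring_id(name):
    """Map a name onto the circular identifier space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(key_id, node_ids):
    """The node responsible for key_id: first node clockwise from it."""
    nodes = sorted(node_ids)
    for n in nodes:
        if n >= key_id:
            return n
    return nodes[0]  # wrap around the ring

peers = [ring_id(f"peer{i}") for i in range(4)]
k = ring_id("genomics-dataset")
print(successor(k, peers) in peers)  # True
```

In the two-tier design described above, a ring like this would organize the peers inside each semantic cluster, while the upper tier routes queries to the right cluster first.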

In this final section, other chapters capture research trends in the realm of high performance computing. In computational Grid and cloud resource provisioning, memory may sometimes be overtaxed. Although it can be constrained at times, a RAM Grid provides remote memory for user nodes that are short of memory. Researchers Rui Chu, Nong Xiao, and Xicheng Lu, in the chapter Push-Based Prefetching in Remote Memory Sharing System, propose push-based prefetching to enable memory providers to push potentially useful pages to user nodes. With the help of sequential pattern mining techniques, useful memory pages can be located for prefetching. The authors verify the effectiveness of the proposed method through trace-driven simulations.
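The push-based idea can be sketched with the simplest possible sequential pattern: page-access bigrams. This is a hypothetical stand-in for the authors' miner, intended only to show the provider-side prediction step; the class and method names are invented.

```python
# Hypothetical sketch of push-based prefetching: the memory provider records
# which page tends to follow which (bigram counts) and pushes the page most
# often observed after the one just accessed.

from collections import Counter, defaultdict

class PrefetchingProvider:
    def __init__(self):
        self.follows = defaultdict(Counter)  # page -> Counter of next pages
        self.last = None

    def record_access(self, page):
        """Update the mined access pattern with one more observation."""
        if self.last is not None:
            self.follows[self.last][page] += 1
        self.last = page

    def page_to_push(self, page):
        """Predict the page to push after 'page', or None if no pattern yet."""
        nxt = self.follows.get(page)
        return nxt.most_common(1)[0][0] if nxt else None

p = PrefetchingProvider()
for pg in [1, 2, 3, 1, 2, 4, 1, 2, 3]:
    p.record_access(pg)
print(p.page_to_push(2))  # 3
```

Pushing (provider-initiated) rather than pulling (user-initiated) matters here because the provider sees the access stream and can overlap the network transfer with the user node's computation.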

In another high performance computing undertaking, researchers Djamel Tandjaoui, Messaoud Doudou, and Imed Romdhani propose a new hybrid MAC protocol, named H-MAC, for wireless mesh networks. The protocol exploits channel diversity and a medium access control method to ensure quality of service requirements. Using the ns-2 simulator, the researchers implemented H-MAC and compared it with other MAC protocols used in wireless networks, finding that H-MAC performs better than Z-MAC, IEEE 802.11, and LCM-MAC.

IP telephony has emerged as the most widely used peer-to-peer-based application. Although success has been recorded in decentralized communication, providing a scalable peer-to-peer-based distributed directory for searching user entries still poses a major challenge. In a chapter titled A Decentralized Directory Service for Peer-to-Peer-Based Telephony, researchers Fabian Stäber, Gerald Kunzmann, and Jörg Müller propose the Extended Prefix Hash Tree algorithm, which can be used to implement an indexing infrastructure supporting range queries on top of DHTs.
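The underlying trick, indexing by key prefix so that a range query only touches the DHT nodes whose prefixes overlap the range, can be shown with a toy model. This is a simplified stand-in for the Extended Prefix Hash Tree, with a plain dict playing the role of the DHT and leaves fixed at one depth; all names and parameters are illustrative.

```python
# Toy prefix index over a "DHT": each key is stored under the DHT node named
# by its binary prefix, so a range query visits only overlapping prefixes.

BITS = 4    # keys are 4-bit integers in this toy
DEPTH = 2   # leaves sit at a fixed prefix depth for simplicity
dht = {}    # prefix string -> set of stored keys (stand-in for DHT nodes)

def prefix_of(key):
    return format(key, f"0{BITS}b")[:DEPTH]

def insert(key):
    dht.setdefault(prefix_of(key), set()).add(key)

def range_query(lo, hi):
    """Collect keys in [lo, hi], contacting only prefixes that overlap it."""
    result = set()
    for prefix, keys in dht.items():
        base = int(prefix, 2) << (BITS - DEPTH)   # smallest key under prefix
        top = base + (1 << (BITS - DEPTH)) - 1    # largest key under prefix
        if base <= hi and top >= lo:              # prefix overlaps the range
            result |= {k for k in keys if lo <= k <= hi}
    return sorted(result)

for key in [1, 3, 7, 9, 12]:
    insert(key)
print(range_query(3, 9))  # [3, 7, 9]
```

A hash-based DHT alone destroys key ordering; routing by prefix restores enough locality to answer range queries, which is exactly what a user-directory lookup over name ranges needs.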

In conclusion, cloud technology is the latest iteration of information and communications technology driving global business competitiveness and economic growth. Although somewhat relegated to the background, research in Grid technology fuels and complements activities in cloud computing, especially in middleware technology. In that vein, this book is a contribution to the growth of cloud technology and the global economy, and indeed the information age.

Emmanuel Udoh
Indiana Institute of Technology, USA

Author(s)/Editor(s) Biography

Emmanuel Udoh is currently Dean and Professor, College of Information and Computer Technology, Sullivan University, USA. Prior to his current position, Dr. Udoh was the Chair/Director of the IT Department at National College and an Assistant Professor of Computer Science at Indiana University-Purdue University in Fort Wayne. Dr. Udoh holds two doctoral degrees, one in Information Technology from Capella University and one in Geology from Erlangen University in Germany. He also holds an MBA from Capella, an MS in Computer Science from Troy University in Alabama, an MS in Geology from Muenster University in Germany and a BS in Geology from the University of Ife (OAU) in Nigeria. Dr. Udoh is the author of six books and numerous peer-reviewed articles in IT. Dr. Udoh has been listed in American Marquis Who’s Who in the World (1993-1994).