Grid Computing Initiatives in India


Jyotsna Sharma
DOI: 10.4018/978-1-60566-184-1.ch029

Abstract

Efforts in Grid computing, both in academia and industry, continue to grow rapidly worldwide for research, scientific, and commercial purposes. Building a commanding position in Grid computing is crucial for India. The major Indian national Grid computing initiative is GARUDA; other major efforts include BIOGRID and VISHWA. Several Indian IT companies are also investing heavily in the research and development of grid computing technology. Though grid computing is presently at a fairly nascent stage, it is seen as a cutting-edge technology. This chapter presents the state of the art of grid computing technology and India's efforts in developing this emerging technology.
Chapter Preview

Background

The term ‘Grid Computing’ is relatively new and means different things to different people (Jennifer, 2003). The grid concepts and technologies were first expressed by Foster and Kesselman in 1998. Built on pervasive Internet standards, grid computing enables research-oriented organizations to solve problems that were previously infeasible due to computing and data-integration constraints. Grids also reduce costs through automation and improved IT resource utilization. Grids help optimize the infrastructure to balance workloads and provide extra capacity for high-demand applications (Chawla, 2007). Grid computing can increase an organization’s agility, enabling more efficient business processes and greater responsiveness to changing business and market demands.

Grid computing uses the resources of several computers connected by a network (usually the Internet) to solve large-scale computation problems. These computers need not be powerful supercomputers or mainframes; they can be ordinary personal computers running different operating systems on many hardware platforms. One study found that more than 90% of the computing power of a typical desktop remains unused most of the time (Chopra, 2007). Through CPU scavenging, this idle time on many thousands of computers throughout the world is harnessed to handle applications that would otherwise require the power of expensive supercomputers. In the SETI@home project and others like it, volunteers around the world allow their computers to be used for scientific research, which shows that some people are willing to share resources for no direct benefit to themselves (Anderson, 2002; SETI@Home, n.d.); people on the Internet can be motivated to contribute their idle resources (Abramson, 2000). This wide variety of geographically distributed resources is used as a single unified resource, known as the ‘computational grid’ (Baker, 2000).
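As a rough illustration of CPU scavenging, the sketch below shows a minimal volunteer worker in Python that runs donated work units only while the machine's load average suggests spare cycles. It is an assumption-laden sketch, not part of GARUDA or any real grid middleware: the fetch_task, run_task, and submit_result callbacks are hypothetical placeholders, and os.getloadavg() is available only on Unix-like systems.

```python
# Minimal CPU-scavenging sketch (illustrative only): a volunteer worker that
# pulls a work unit from a hypothetical project server only when the local
# machine appears idle. The fetch_task/run_task/submit_result callbacks are
# placeholders, not the API of any real grid middleware.
import os
import time

IDLE_THRESHOLD = 0.25   # treat the machine as idle below 25% of one core
POLL_SECONDS = 30       # how often to re-check the load

def machine_is_idle() -> bool:
    """Return True when the 1-minute load average suggests spare cycles."""
    one_minute_load, _, _ = os.getloadavg()   # Unix-only standard-library call
    return one_minute_load < IDLE_THRESHOLD

def scavenge_cycles(fetch_task, run_task, submit_result):
    """Run donated work units whenever the desktop would otherwise sit idle."""
    while True:
        if machine_is_idle():
            task = fetch_task()          # e.g. download a SETI@home-style work unit
            if task is not None:
                result = run_task(task)  # spend the idle cycles on the computation
                submit_result(result)    # report the result back to the project
        time.sleep(POLL_SECONDS)         # back off so the user regains the machine quickly
```

Production volunteer-computing platforms such as BOINC (which underlies SETI@home) and Condor follow the same principle but add much more careful idle detection, checkpointing, and result verification.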

Key Terms in this Chapter

Middleware: The software that manages activity on the Grid, for example enabling users to access computers distributed over the network and organizing/integrating the Grid's disparate computational resources into a coherent whole. Conceptually, middleware sits between the two other types of software: the operating system and the application software.

Metacomputing: A particular type of distributed computing that involved linking supercomputer centers with what were, at the time, high-speed networks.

Virtual Organization: A group of individuals or institutions that share the computing resources of a “grid” to pursue a common goal.

CPU-Scavenging/Cycle-Scavenging: A technique that makes use of instruction cycles on desktop computers that would otherwise be wasted at night, during lunch, or even in the scattered seconds throughout the day when the computer is waiting for user input or slow devices.

Grid Computing: A type of computing that relies on complete computers connected by a conventional network interface, allowing organizations to provision and scale resources as needs arise, thereby preventing the underutilization of resources (computers, networks, data archives, instruments).

High-Performance Technical Computing (HPTC): It refers to the engineering applications of cluster-based computing (such as computational fluid dynamics and the building and testing of virtual prototypes).

High-Performance Computing (HPC): The use of supercomputers and computer clusters, that is, computing systems (in or above the teraflop range) composed of multiple processors linked together in a single system with commercially available interconnects.

Distributed Computing: A computer processing method in which different parts of a program run simultaneously on two or more computers communicating with each other over a network.

Virtual Private Network (VPN): A private communications network often used by companies or organizations to communicate confidentially over a public network.
