Grids, Clouds, and Massive Simulations


Levente Hajdu (Brookhaven National Laboratory, USA), Jérôme Lauret (Brookhaven National Laboratory, USA) and Radomir A. Mihajlović (New York Institute of Technology, USA)
DOI: 10.4018/978-1-4666-5784-7.ch013

Abstract

In this chapter, the authors discuss issues surrounding High Performance Computing (HPC)-driven science, using the example of petascale Monte Carlo experiments conducted at the Brookhaven National Laboratory (BNL), one of the US Department of Energy (DOE) High Energy and Nuclear Physics (HENP) research sites. BNL, hosting the only remaining US-based HENP experiments and apparatus, seems an appropriate setting in which to study the nature of High-Throughput Computing (HTC)-hungry experiments and the short historical development of the HPC technology used in them. The development of parallel processors, multiprocessor systems, custom clusters, supercomputers, networked super systems, and hierarchical parallelism is presented in an evolutionary manner. Coarse-grained, rigid Grid system parallelism is contrasted with cloud computing, which is classified within this chapter as flexible and fine-grained soft system parallelism. In the process of evaluating various high performance computing options, a clear distinction is made between high availability-bound enterprise computing and high scalability-bound scientific computing. This distinction is used to further differentiate cloud computing from pre-cloud technologies and to better fit cloud computing into scientific HPC.

Introduction

Modern science is hard to imagine without massive computing support. Almost all recent significant discoveries were, in some way, the consequence of great computational effort. The era of wise, external Aristotelian observation and insightful scientific postulates based on minimal measurements is mostly a thing of the past. The scientific work and discoveries of Archimedes, Isaac Newton, or Galileo, based on a few manual computations, we may consider legacy science.

Throughout the later history of science, computing and computing machines gradually crept into the scientific process. For instance, the early sixteenth-century astronomer Nicolaus Copernicus, in search of heavenly body trajectories, and the nineteenth-century geneticist Gregor Mendel, who crossbred pea plants and described their inherited traits, used pen, paper, and the abacus to perform numerical processing of their measurement data. Mechanical calculators, more sophisticated than the abacus, started to emerge as scientific tools around the time of the European Renaissance. The seventeenth century brought mechanical computing devices such as John Napier's multiplying bones, the slide rule, the Pascaline, and the Leibniz stepped drum (Redin, 2012). Charles Xavier Thomas's arithmometer, J. H. Muller's difference engine, Charles Babbage's analytical engine, and Thomas Fowler's ternary calculator followed, with much higher calculating power and even rudimentary programmability. A century later, James D. Watson and Francis Crick discovered the structure of DNA, and in 1990 the Human Genome Project began identifying and mapping the three billion chemical base pairs that make up human DNA. DNA and genomic research would not have been conceivable without modern electronic computing technology.

Geneticists were fortunate inasmuch as the computing technology required for their field became available before the need for it emerged, or perhaps research necessity drove the introduction of adequate electronic computing. One may state that geneticists, with their marvelous discoveries, are standing on the shoulders not of the great biologists of the past, but on the shoulders of Babbage, Turing, von Neumann, and all the others who made modern electronic computing possible. This statement applies to almost all modern scientific disciplines: as a sort of meta-science, computing science serves as a driving force propelling modern science in general. Among all scientific fields, the most prominent one, the field whose necessities have prompted the greatest computing discoveries, is physics, or more precisely, high energy and nuclear physics.

Working on the famous Manhattan Project (Manhattan Project Hall of Fame Directory, 2005), John von Neumann (in whose honor the von Neumann computer architecture is named), Stanislaw Ulam, and others faced the problem of determining how far a neutron would travel through a material before colliding with an atomic nucleus. The geometry was known, along with all of the base data for the problem, yet no deterministic mathematical solution could be found. Consequently, a quite unusual statistical technique was envisioned by Stan Ulam (Metropolis & Ulam, 1949). Since it required huge computational support, available only on the electronic computers of that time, its implementation was delegated to von Neumann, one of the rare experts on electronic computing in the early 1940s and a key innovator in the field. In March 1945, at the Moore School of Electrical Engineering at the University of Pennsylvania, von Neumann, nicknamed Johnny and also known as the first computer hacker ever (Myhrvold, 1999), professor of mathematics at the Institute for Advanced Study and a consultant to the Los Alamos Nuclear Center, together with several associates, initiated on the ENIAC machine the first computer simulation project ever. The code name for the project was “Monte Carlo” (Metropolis, 1987). The method used in the project, also named “Monte Carlo,” is still used today in a wide range of fields, from nuclear research, particle physics, fluid-flow modeling, and complex systems engineering to finance and financial engineering.
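The statistical idea behind such neutron-transport calculations can be sketched in a few lines. The snippet below is an illustrative sketch, not code from the chapter or the Manhattan Project: the function name, the cross-section value, and the sample count are our own assumptions. It uses the standard inversion formula s = -ln(U)/Σt for the exponentially distributed distance a neutron travels before its next collision in a homogeneous material with macroscopic total cross-section Σt.

```python
import math
import random

def mean_free_path_mc(sigma_t, n_samples=100_000, seed=42):
    """Monte Carlo estimate of a neutron's mean free path.

    In a homogeneous material with macroscopic total cross-section
    sigma_t (collisions per cm), the distance to the next collision
    is exponentially distributed: s = -ln(U) / sigma_t, with U
    drawn uniformly from (0, 1). The analytic mean is 1 / sigma_t.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(n_samples):
        total += -math.log(rng.random()) / sigma_t
    return total / n_samples

# Hypothetical material with sigma_t = 0.5 collisions per cm:
estimate = mean_free_path_mc(0.5)
```

With Σt = 0.5 cm⁻¹ the estimate converges toward the analytic mean free path 1/Σt = 2 cm, illustrating how a large statistical sample stands in for a deterministic solution that could not be found, which is precisely why the method demanded the electronic computers of the day.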

This chapter is dedicated to the tight coupling between high performance computing (HPC), massive computing simulations of the Monte Carlo type, and high energy physics. By presenting an abbreviated historical background and some experience gained at Brookhaven National Laboratory, we illustrate the inseparable relationship between experimental physics and computing.

Key Terms in this Chapter

Power Wall: Describes the limitation on CPU clock rate, CPU design, and CPU performance improvements imposed by thermal and electrical power constraints, i.e., the technological inability to use higher clock rates, more transistor switching elements, or larger amounts of electrical energy while maintaining overall thermal stability.

Cluster: A preconfigured computing-service distribution system powered by computing and networking resources at a single physical site.

Multi-Core Processor: A single processing unit containing two or more units, known as cores, each of which services a single computing thread.

Petascale Science: Refers to computation-intensive scientific research requiring hardware performance in excess of one petaflops (i.e., one quadrillion floating point operations per second). An alternative definition requires that the hardware sustain petascale performance when executing the standard LINPACK benchmark.

Software Pressure (Sp) Factor: A market-driving phenomenon of modern computing technology, manifested as almost exponential market demand for higher-throughput, lower-cost systems running software of increased complexity and processing ever larger volumes of data.

Semiconductor Wall: A CPU capacity limitation factor representing the semiconductor technology problems in manufacturing switching transistors of ever smaller dimensions that can operate at lower signal voltage levels and higher signaling rates.

Grid Computing: A preconfigured, general-purpose, static computing-service distribution system powered by managed computing resources at multiple physical locations.

Von Neumann Bottleneck (VNB): The computing system throughput limitation caused by an inadequate rate of data transfer between memory and the CPU. The VNB causes the CPU to wait and idle for a certain amount of time while low-speed memory is being accessed. The VNB is named after John von Neumann, the computer scientist credited with the invention of the bus-based computer architecture. To allow faster memory access, various distributed-memory “non-von” systems have been proposed.

Cloud Computing: An on-demand, general-purpose, dynamic computing-service distribution system. This definition generalizes earlier definitions of cloud computing as service-based computing that is scalable and elastic, shared, metered by use, and delivered using Internet technologies.

High Performance Computing (HPC): A computing environment capable of delivering large processing capacity with low-latency, large-scale data storage, in the form of a supercomputer, computer cluster, grid, or cloud computing system. HPC also refers to a supercomputing environment in the teraflops (trillion floating point operations per second) processing range and the petabyte (1024 terabytes) storage range.
