Refers to computation-intensive scientific research requiring hardware performance in excess of one petaflops (i.e., one quadrillion floating-point operations per second). An alternative definition ties the petascale threshold to the performance the hardware can sustain on the standard LINPACK benchmark rather than to its theoretical peak.
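A rough, hypothetical illustration of the first definition's threshold, written in Python for this entry (the function name and the sample figures are assumptions, not taken from the chapter): it simply checks whether a sustained performance measurement reaches one petaflops, i.e. 10^15 floating-point operations per second.

# Minimal sketch: does a measured sustained rate qualify as petascale?
PETAFLOPS = 1.0e15  # one quadrillion floating-point operations per second

def is_petascale(sustained_flops: float) -> bool:
    # True if the sustained rate reaches or exceeds one petaflops.
    return sustained_flops >= PETAFLOPS

print(is_petascale(1.7e15))  # hypothetical machine sustaining 1.7 PFLOPS -> True
print(is_petascale(3.6e14))  # hypothetical machine sustaining 0.36 PFLOPS -> False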
Published in Chapter:
Grids, Clouds, and Massive Simulations
Levente Hajdu (Brookhaven National Laboratory, USA), Jérôme Lauret (Brookhaven National Laboratory, USA), and Radomir A. Mihajlović (New York Institute of Technology, USA)
Copyright: © 2014
Pages: 33
DOI: 10.4018/978-1-4666-5784-7.ch013
Abstract
In this chapter, the authors discuss issues surrounding High Performance Computing (HPC)-driven science using the example of petascale Monte Carlo experiments conducted at Brookhaven National Laboratory (BNL), one of the US Department of Energy (DOE) High Energy and Nuclear Physics (HENP) research sites. BNL, hosting the only remaining US-based HENP experiments and apparatus, seems an appropriate place to study the nature of High-Throughput Computing (HTC)-hungry experiments and the short historical development of the HPC technology used in such experiments. The development of parallel processors, multiprocessor systems, custom clusters, supercomputers, networked super systems, and hierarchical parallelism is presented in an evolutionary manner. Coarse-grained, rigid Grid system parallelism is contrasted with cloud computing, which is classified within this chapter as flexible, fine-grained soft system parallelism. In evaluating various high performance computing options, a clear distinction is made between high availability-bound enterprise computing and high scalability-bound scientific computing. This distinction is used to further differentiate cloud computing from pre-cloud technologies and to fit cloud computing better into scientific HPC.