High Performance and Grid Computing Developments and Applications in Condensed Matter Physics

DOI: 10.4018/978-1-4666-5784-7.ch009

Abstract

This chapter introduces applications of High Performance Computing (HPC), Grid computing, and the development of electronic infrastructures in Serbia, in the South Eastern Europe region, and in Europe as a whole. Grid computing represents one of the key enablers of scientific progress in many areas of research. The main HPC and Grid infrastructures, initiatives, projects, and programs in Europe are presented, including the Partnership for Advanced Computing in Europe (PRACE) and European Grid Initiative (EGI) associations, as well as the Academic and Educational Grid Initiative of Serbia (AEGIS). Further, the chapter describes some of the applications related to condensed matter physics developed at the Scientific Computing Laboratory of the Institute of Physics, University of Belgrade.
Chapter Preview

High Performance Computing: Supercomputing

A supercomputer (Hoffman & Traub, 1989) is a computer at the frontline of current data processing capacity, particularly in terms of calculation speed. Supercomputers were introduced in the 1960s and were initially designed primarily by Seymour Cray at Control Data Corporation (CDC), and later at Cray Research. While the supercomputers of the 1970s used only a few processors, machines with thousands of processors began to appear in the 1990s, and by the end of the 20th century massively parallel supercomputers with tens of thousands of “off-the-shelf” processors were the norm.

Systems with a massive number of processors generally take one of two paths. In one approach, e.g. in Grid computing (Foster & Kesselman, 2004), the processing power of a large number of computers in distributed, diverse administrative domains is used opportunistically whenever a computer is available. In the other approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. The use of multi-core processors combined with centralization is an emerging direction. At the time of writing, Japan's K computer was the fastest in the world.

Supercomputers are used for compute-intensive tasks such as problems in quantum physics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).

Approaches to supercomputer architecture (Hill, Jouppi, & Sohi, 2000) have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.

Key Terms in this Chapter

Parallel Programming: The serial programming paradigm involves a single processor that executes a program, a set of instructions defined by a programmer, in a serial fashion, one by one. The parallel programming paradigm is developed for multi-processor computers (either a multi-core single physical processor, or a massively parallel system with many processors), and assumes that a given problem can be divided into sub-tasks, which can then be executed in parallel, concurrently, with possible exchange of data during the execution. The programming that enables this type of parallel execution of instructions is usually called parallel programming.
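As a minimal sketch of this paradigm (the chapter does not prescribe any particular language or library; Python's standard multiprocessing module is assumed here purely for illustration), the following program divides a sum into sub-tasks, executes them concurrently, and combines the partial results:

```python
# Minimal sketch of the parallel programming paradigm described above,
# using Python's standard multiprocessing module (an illustrative assumption).
from multiprocessing import Pool

def partial_sum(chunk):
    """Sub-task: sum one chunk of the data independently of the others."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Divide the problem into sub-tasks (chunks) ...
    n_workers = 4
    chunk_size = len(data) // n_workers
    chunks = [data[i * chunk_size:(i + 1) * chunk_size] for i in range(n_workers)]
    # ... and execute them concurrently on separate processes.
    with Pool(n_workers) as pool:
        results = pool.map(partial_sum, chunks)
    # Combine the partial results (the "exchange of data" step).
    print(sum(results))
```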

Grid Computing: Is a distributed computing system, comprising many geographically scattered computer resources, which are connected by a high-speed network and logically organized into a single system by a software layer, usually designated as middleware. Typical examples of Grids include large networks of computer clusters distributed over many institutions contributing computer resources. Most notable are the Grids operated by EGI (European Grid Infrastructure) and OSG (Open Science Grid in the USA), used for scientific computing by researchers (primarily for particle physics applications, although the number of fields of science and user groups relying on these Grids has been increasing significantly over the years).

High Performance Computing: Is the use of large-scale computer clusters and supercomputers for numerical simulations that require significant computer resources for execution. Typically, this involves massively parallel numerical simulations running on thousands of processors, with large amounts of memory available in a shared fashion (typical for mainframe supercomputers) or distributed among the computing nodes (typical for computer clusters).
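A hedged sketch of the distributed-memory style of parallelism typical for computer clusters is given below; it assumes the mpi4py package and an MPI runtime are available, which is an assumption not made by the chapter. Each MPI process owns its memory and exchanges only the partial results:

```python
# Hedged sketch of distributed-memory parallelism on an HPC cluster,
# assuming mpi4py and an MPI runtime (run e.g. with: mpirun -n 4 python pi_mpi.py).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # identity of this process (e.g. one core on one node)
size = comm.Get_size()   # total number of processes

# Each process integrates its own interleaved slice of [0, 1] to estimate pi.
n = 10_000_000
h = 1.0 / n
i = np.arange(rank, n, size)
x = h * (i + 0.5)
local = h * np.sum(4.0 / (1.0 + x * x))

# Data exchange between processes: sum the partial integrals on rank 0.
pi = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~ {pi}")
```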

Cloud Computing: Is a term that represents the use of computer resources, including computing and data storage services, which are served over a real-time network, usually in a virtualized environment. The cloud provider manages a large number of computers, and services are offered through virtualization of hardware, i.e. users deploy virtual machines on the provider's hardware, thus instantiating the desired services. This model is known as Infrastructure as a Service (IaaS). In the Platform as a Service (PaaS) model, the provider offers a computing platform (operating system, compilers, databases, web server). Also widely used in business applications is Software as a Service (SaaS), where the emphasis is on providing specific software and databases. Network as a Service (NaaS) is also offered for users requiring network connectivity services (including inter-cloud connectivity).

Supercomputer: Is a computer system capable of executing a large number of operations, comprising a massive number of processors and large shared or distributed memory. The processors run in parallel and are able to exchange data, making it possible to achieve high computing power, which is usually measured as the number of floating-point operations executed per second. The computing power required for a system to be designated as a supercomputer is time-dependent, and is usually defined through a list of the most powerful systems in the world, such as Top500 (top500.org), where only a limited number of systems (e.g. the first 500) at any given time are considered to be supercomputers. As of 2013, petaflops-capable systems are classified as supercomputers, and in the coming years this will shift towards exaflops-capable systems.
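To illustrate the floating-point operations per second metric mentioned above, the following back-of-the-envelope estimate computes a theoretical peak performance; all hardware figures are illustrative assumptions, not data from the chapter:

```python
# Back-of-the-envelope estimate of theoretical peak performance.
# All figures below are illustrative assumptions, not data from the chapter.
nodes = 1024            # compute nodes in the cluster
cores_per_node = 16     # CPU cores per node
clock_ghz = 2.0         # clock frequency in GHz
flops_per_cycle = 8     # e.g. 256-bit SIMD: 4 doubles x 2 (fused multiply-add)

peak_flops = nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e15:.2f} PFLOPS")
```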

Condensed Matter Physics: Is a branch of physics studying many-body systems in the condensed phase of matter, i.e. liquids (including quantum liquids) and solids (including crystallography and magnetism). The methods used include experiment, theory, and numerical simulations. Condensed matter physics applies ideas of quantum mechanics, quantum field theory, and statistical mechanics, and widely overlaps with materials science, nanotechnology, and chemistry.

Numerical Simulation: The detailed description of various systems (physical, social, etc.) is usually achieved by developing sophisticated models, which are then implemented through computer algorithms and (serial or parallel) programs. The execution of such programs on computers, with the aim of simulating real systems, is usually designated as numerical simulation.
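As a minimal, self-contained example of such a simulation (the model and the numerical scheme are chosen here only for illustration and are not taken from the chapter), a damped harmonic oscillator can be discretized and advanced in time step by step:

```python
# Minimal sketch of a numerical simulation: a physical model (a damped
# harmonic oscillator, used here only as an illustration) is discretized
# and advanced in time by a computer program.
import numpy as np

def simulate(omega=2.0, gamma=0.1, x0=1.0, v0=0.0, dt=1e-3, steps=10_000):
    """Integrate x'' + 2*gamma*x' + omega^2*x = 0 with the explicit Euler scheme."""
    x, v = x0, v0
    trajectory = np.empty(steps)
    for n in range(steps):
        a = -2.0 * gamma * v - omega**2 * x    # acceleration from the model
        x, v = x + dt * v, v + dt * a          # advance one time step
        trajectory[n] = x
    return trajectory

traj = simulate()
print(f"final displacement after {len(traj)} steps: {traj[-1]:.4f}")
```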
