Hardware Trends and Implications for Programming Models

Gabriele Jost (The University of Texas at Austin, USA) and Alice E. Koniges (Lawrence Berkeley National Laboratory, USA)
DOI: 10.4018/978-1-61350-116-0.ch001
Abstract

The upcoming years bring new challenges in high-performance computing (HPC) technology. Fundamental changes in the building blocks of HPC hardware are forcing corresponding changes in the programming models needed to use these new architectures effectively. The changes in store for HPC will rival the vector-to-massively-parallel transition that scientific and engineering codes and methodologies endured several years ago. We describe some of the upcoming trends in hardware design and suggest ways in which software and programming models will advance accordingly.
Background

Exascale computation (i.e., at a rate exceeding 10^18 operations per second) by 2018 has been identified as a challenging but attainable goal for the future of scientific and engineering computation, one that will lead to numerous advances in fundamental science. Quoting from the “Report on Exascale Computing” (ASCAC, 2010): “‘Going to the exascale’ will mean a radical change in computing architecture – basically, vastly increasing the levels of parallelism to the point of millions of processors working in tandem – which will force radical changes in how hardware is designed (at a minimum, driven by economic limitations on power consumption), in how we go about solving problems (e.g., the application codes), and in how we marry application codes to the underlying hardware (e.g., the compilers, I/O, middleware, and related software tools).” On the brink of the exascale era, we are already seeing new hardware trends and corresponding advances in programming models. In this chapter we describe some of these recent trends and the programming models that are evolving to fit the new hardware.

Key Terms in this Chapter

MPI: Message Passing Interface

NPB-MZ: NAS Parallel Benchmarks Multizone Version

NPB: NAS Parallel Benchmarks

API: Application Programming Interface

UPC: Unified Parallel C

TBB: Threading Building Blocks

ccNUMA: Cache-coherent non-uniform memory access

SMP node: One or more sockets interconnected such that all cores on all sockets have access to each other's memory modules

Socket: Receptacle on a motherboard that holds a processor chip with one or more cores

GPGPU: General Purpose Graphical Processing Unit

HPF: High Performance Fortran

PGAS: Partitioned Global Address Space

NUMA: Non-uniform memory access

HPC: High Performance Computing

FPGA: Field Programmable Gate Array

FFT: Fast Fourier Transform

SPMD: Single Program Multiple Data

COTS: Commodity-off-the-shelf

GAS: Global Address Space

CAF: Co-Array Fortran

Core: Processing unit on a chip

SMP: Shared memory processor (formerly used for symmetric multiprocessor)

MPP: Massively parallel processing

CFD: Computational Fluid Dynamics
