The Future of High-Performance Computing (HPC)

DOI: 10.4018/978-1-5225-7598-6.ch052

Abstract

For decades, HPC has established itself as an essential tool for discoveries, innovations, and new insights in science, research and development, engineering, and business across a wide range of application areas in academia and industry. Today, high-performance computing is also widely recognized to be of strategic and economic value – HPC matters and is transforming industries. This chapter discusses new emerging technologies being developed for all areas of HPC (compute/processing, memory and storage, interconnect fabric, I/O, and software) to address ongoing challenges in HPC such as balanced architecture, energy-efficient high performance, density, reliability, sustainability, and, last but not least, ease of use. Of specific interest are the challenges and opportunities for the next frontier in HPC envisioned around the 2020 timeframe: ExaFlops computing. The authors also outline the new and emerging area of high-performance data analytics (big data analytics using HPC) and discuss an emerging delivery mechanism for HPC: HPC in the cloud.

Introduction

High-Performance Computing (HPC) is used to address and solve the world’s most complex computational problems. For decades, HPC has established itself as an essential tool for discoveries, innovations, and new insights in science, research and development, engineering, and business across a wide range of application areas in academia and industry. It has become an integral part of the scientific method – the third leg alongside theory and experiment.

Today, High-Performance Computing is also well recognized to be of strategic and economic value – HPC matters and is transforming industries (Osseyran & Giles, 2015).

High-Performance Computing enables scientists and engineers to solve complex, large-scale science, engineering, and business problems using advanced algorithms and applications that require very high compute capability, fast memory and storage, high-bandwidth and low-latency communication, high-fidelity visualization, and enhanced networking.

Today, the IT industry is being transformed by cloud, big data, social media, artificial intelligence, and “Internet of Things” technologies and business models. All of these trends require advanced computational simulation models and powerful, highly scalable systems. Hence, sophisticated HPC capabilities are critical for organizations and companies that want to establish and enhance leadership positions in their respective areas.

Some industry verticals and application areas where HPC is used are as follows:

  • Manufacturing, Computer Aided Engineering (CAE)

  • Automotive Industry

  • Aerospace Industry

  • Weather Forecast and Climate Research

  • Energy, Oil & Gas Industry, Geophysics

  • Life-Science and Bio-Informatics (Genomics)

  • Government Research Laboratories

  • Universities (Academia)

  • Machine Intelligence, Machine/Deep Learning, Artificial Intelligence (AI)

  • Astrophysics, High-Energy Physics, Computational Chemistry, Material Science

  • Financial Services Industry (FSI)

  • Digital Content Creation (DCC)

  • Defense

  • Security and Intelligence

Background

After its initial years of proprietary computer systems in the 1970s and 1980s, HPC evolved toward industry standards that democratized supercomputing, making advanced computing available to more users and a wider range of application segments.

Today’s HPC solutions use high-performance server compute nodes connected by high-performance fabrics to high-performance storage systems, mainly deployed as distributed cluster architectures running Linux-based operating systems with up to tens of thousands of processors. For specific workloads and applications that need a large coherent shared-memory capacity (terabytes of data), more specialized solutions and systems are used, based on ccNUMA (cache-coherent non-uniform memory access) architectures. For example, the SGI UV system supports up to 256 CPU sockets and up to 64 TB of cache-coherent shared memory in a single system.
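
To make the distributed-memory cluster model described above concrete, the following minimal MPI sketch shows how each process (rank) works on data held in its own node's memory and how partial results are combined explicitly over the interconnect fabric. This is an illustrative example added for clarity, not code from the chapter.

#include <mpi.h>
#include <stdio.h>

/* Minimal sketch of the distributed-memory model used on Linux clusters:
 * every MPI rank owns its own private memory and communicates explicitly
 * over the interconnect (here, a single reduction). Illustrative only. */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    double local_sum = (double)rank;        /* work on locally owned data */
    double global_sum = 0.0;

    /* Combine the partial results across the fabric onto rank 0. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over %d ranks = %.1f\n", size, global_sum);

    MPI_Finalize();
    return 0;
}

On a cluster, such a program is typically launched across nodes with a command like mpirun -np <N> ./program. By contrast, on a ccNUMA shared-memory system such as the SGI UV, threads can address the entire coherent memory directly without explicit message passing.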

While chip designs used to be limited by die area and the number of transistors available, power consumption is now becoming the main constraint for high-performance computing. Several new emerging technologies will offer multiple opportunities to address some of the ongoing challenges in HPC, such as balanced architectures, energy efficiency, density, reliability, resiliency, sustainability, and, last but not least, ease of use.
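
To illustrate why power has become the dominant design constraint on the road to ExaFlops systems, a rough back-of-the-envelope calculation helps. The ~20 MW power envelope used below is the widely cited target for early exascale systems; it is an assumption for illustration, not a figure taken from this chapter:

\[
\frac{10^{18}\ \text{FLOPS}}{20\ \text{MW}}
  = \frac{10^{18}\ \text{FLOPS}}{2 \times 10^{7}\ \text{W}}
  = 5 \times 10^{10}\ \text{FLOPS/W}
  = 50\ \text{GFLOPS/W}
\]

In other words, sustaining one ExaFlops within roughly 20 MW requires delivering on the order of 50 GFLOPS per watt, which is why energy efficiency is listed among the central HPC challenges above.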
