The Future of Supercomputers and High-Performance Computing

Domen Verber
DOI: 10.4018/978-1-4666-7377-9.ch010

Abstract

The state of the art and a possible future of High-Performance Computing (HPC) are discussed. Steady advances in hardware have resulted in increasingly powerful computers. Some HPC applications that years ago were only in the domain of supercomputers can nowadays be executed on desktop and mobile computers. Furthermore, the future of computing lies in the “Internet-of-Things” and cyber-physical systems, where computers are embedded into devices such as cars, household appliances, production lines, our clothing, etc. These computers are interconnected and may cooperate with each other. Based on that, new kinds of applications emerge that require HPC architectures and development techniques. The primary focus of the chapter is on different hardware architectures for HPC and on some particularities of HPC programming. Some alternatives to traditional computational models are given. At the end, possible replacements for the semiconductor technology of modern computers are debated.
Chapter Preview

Introduction

High-performance computing (HPC) is the use of computers and parallel-processing techniques for solving complex computational problems. HPC is used in a wide variety of fields, from engineering (e.g., complex crash simulations in the automotive industry) and bioinformatics (e.g., protein folding) to ecology (e.g., complex Earth-system modeling). Another important application of HPC today is so-called Big Data, which in general refers to problems where enormous amounts of data must be processed and analyzed in a short time.
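
To make the idea of parallel processing concrete, here is a minimal sketch, assuming C with OpenMP, of how a large aggregation of the kind found in Big Data analyses can be split across the cores of a single machine. The array contents and its size are illustrative placeholders, not data from any of the applications mentioned above.

/* Minimal parallel-processing sketch: summing a large array with OpenMP.
   Compile with, e.g., gcc -fopenmp -O2 sum.c (assumes an OpenMP-capable compiler). */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const long n = 10000000L;                  /* 10 million values: a stand-in for a large data set */
    double *data = malloc(n * sizeof(double));
    if (data == NULL)
        return 1;
    for (long i = 0; i < n; i++)
        data[i] = (double)(i % 1000);          /* synthetic values */

    double sum = 0.0;
    /* Each thread sums its own chunk of the array; OpenMP combines the partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += data[i];

    printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
    free(data);
    return 0;
}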

High-end HPC is performed on supercomputers. These are computers built as clusters of thousands or even millions of processing elements. They are specially designed to solve specific problems in which the same basic computation is usually repeated on millions of pieces of data. There, a speedup is achieved by executing the same instructions on multiple data at the same time. This approach is referred to as single instruction, multiple data (SIMD), in contrast to the single instruction, single data (SISD) approach used by general-purpose central processing units (CPUs); the sketch below illustrates the difference. Supercomputers are enormous, expensive, and have a huge power consumption; therefore, only the biggest corporations can afford them. The official web site Top500.org (Top500, 2014) maintains a list of the five hundred top-ranked supercomputers, updated every six months. Table 1 presents the top 10 computers from this list. The Rmax factor represents the speed of a supercomputer: it is the highest score measured using the LINPACK benchmark suite, expressed in Pflops (10^15 floating-point operations per second). The column beside it gives the power consumption in megawatts (MW).
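
The SISD/SIMD contrast can likewise be sketched in C. The scalar loop below performs one addition per instruction, while the SSE version adds four floats with a single instruction; this assumes an x86 CPU and a compiler providing the <xmmintrin.h> intrinsics, and it only illustrates the principle that supercomputers apply on a vastly larger scale.

/* SISD vs. SIMD on the same task: element-wise addition of two float arrays.
   The SIMD variant uses 128-bit SSE registers, each holding four floats. */
#include <xmmintrin.h>   /* SSE intrinsics (x86) */
#include <stdio.h>

#define N 8              /* kept divisible by 4 for the SIMD loop */

/* SISD: one addition per instruction. */
static void add_scalar(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* SIMD: a single instruction adds four floats at a time. */
static void add_simd(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
    }
}

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[N];
    add_scalar(a, b, c, N);    /* both variants produce the same result */
    add_simd(a, b, c, N);
    for (int i = 0; i < N; i++)
        printf("%.0f ", c[i]); /* prints: 9 9 9 9 9 9 9 9 */
    printf("\n");
    return 0;
}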

Table 1. Top 10 supercomputers, as of June 2014 (from Top500, 2014)

Name | Manufacturer | Country | Processor | Accelerator | Total Cores | Accel. Cores | Rmax (Pflops) | Power (MW)
Tianhe-2 (MilkyWay-2) | NUDT | China | Intel Xeon E5-2692v2 | Intel Xeon Phi 31S1P | 3,120,000 | 2,736,000 | 33.86 | 18
Titan | Cray Inc. | United States | Opteron 6274 | NVIDIA K20x | 560,640 | 261,632 | 17.59 | 8
Sequoia | IBM | United States | Power BQC | None | 1,572,864 | 0 | 17.17 | 8
K computer | Fujitsu | Japan | SPARC64 VIIIfx | None | 705,024 | 0 | 10.51 | 13
Mira | IBM | United States | Power BQC | None | 786,432 | 0 | 8.58 | 4
Piz Daint | Cray Inc. | Switzerland | Xeon E5-2670 | NVIDIA K20x | 115,984 | 73,808 | 6.27 | 2
Stampede | Dell | United States | Xeon E5-2680 | Intel Xeon Phi SE10P | 462,462 | 366,366 | 5.17 | 5
JUQUEEN | IBM | Germany | Power BQC | None | 458,752 | 0 | 5.01 | 2
Vulcan | IBM | United States | Power BQC | None | 393,216 | 0 | 4.29 | 2
- | Cray Inc. | United States | Intel Xeon E5-2697v2 | None | 225,984 | 0 | 3.14 | -

Key Terms in this Chapter

Cognitive Computing: A form of computing that mimics the cognitive capabilities of the human brain.

Multi-Processor Architectures: Computer architectures that contain several CPUs, which are interconnected and may cooperate with each other.

Internet-Of-Things: A conceptual model in which objects, animals, or people possess some sort of embedded computer that has the ability to communicate and cooperate with others over a network.

CPU: Central Processing Unit. The part of a traditional computer architecture that controls and performs the execution of programs.

DNA Computing: A form of computing based on the biochemistry and molecular biology of living cells. The DNA plays the role of a computer program, which is executed by the organelles in the cell.

GPU: Graphics Processing Unit. A specialized processing unit used for fast rendering of images on a computer display. A GPU usually consists of several hundred processing elements. GPUs can also be utilized for general-purpose computing.

Multi-Core Processors: An implementation of multi-processor architectures in which several CPUs are placed on the same silicon die.

High Performance Computing: A set of technologies in computer science that delivers much higher performance than a typical computer can provide, in order to solve large problems in science, engineering, or business.

Quantum Computing: A computing paradigm based on the laws of quantum physics. Quantum computers will be able to solve certain problems much more quickly than ordinary digital computers.

Embedded Computer System: A computer system in which the computer is encapsulated into the device it controls. It is usually dedicated to a specific task.
