Using High Performance Scientific Computing to Accelerate the Discovery and Design of Nuclear Power Applications

Liviu Popa-Simil
Copyright: © 2015 | Pages: 30
DOI: 10.4018/978-1-4666-7461-5.ch005

Abstract

Present High Performance Scientific Computing (HPSC) systems face strong limitations when full integration from nano-materials to operational systems is desired. HPSC must be upgraded from the exa-scale machines currently being designed, probably available after 2015, to far greater computing power and storage capacity, reaching the yotta-scale, in order to simulate systems from the nano-scale up to the macro-scale and thereby greatly improve the safety and performance of future advanced nuclear power structures. The road from today's peta-scale systems to yotta-scale computers, which would barely be sufficient for current calculation needs, is difficult and requires revolutionary new ideas in HPSC, and probably the large-scale use of Quantum Supercomputers (QSC), which are now in the development stage.
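The gap between the scales named above is easy to understate in prose. A minimal sketch, using the standard SI prefixes (the machine examples in comments are drawn from this chapter and are approximate):

```python
# SI prefixes for floating-point operations per second (FLOPS).
PREFIXES = {
    "peta": 1e15,   # e.g. Titan, ~20 petaflops (2012)
    "exa": 1e18,    # exa-scale machines anticipated after 2015
    "zetta": 1e21,
    "yotta": 1e24,  # the scale argued for in this chapter
}

def scale_factor(from_prefix: str, to_prefix: str) -> float:
    """How many times more FLOPS a to_prefix machine has than a from_prefix one."""
    return PREFIXES[to_prefix] / PREFIXES[from_prefix]

# Moving from peta-scale to yotta-scale means nine orders of magnitude:
print(scale_factor("peta", "yotta"))
```

The factor of a billion between peta-scale and yotta-scale is why the abstract frames the transition as requiring revolutionary ideas rather than incremental scaling.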
Chapter Preview

Available Architectures And Solution Approaches

In this chapter we will discuss and compare current supercomputer architectures and resource-distribution solutions in order to classify them with respect to different performance-evaluation parameters. We will also study the potential and suitability of existing approaches to coexist with the classical infrastructure and to make greater use of future distributed computing resources; more specifically, social networks on the cloud and the cloud over social networks.

Developments in quantum computing and information teleportation will bring a new generation of supercomputers, two orders of magnitude faster and more compact, built on complex quantum processors; these will open new horizons and will require changes in operating systems.

The 20-petaflops Titan supercomputer (olcf 2013) at Oak Ridge National Laboratory (Anthony, 2012) was the world's fastest supercomputer during 2012. Cray's XC30 architecture is expected to allow the creation of supercomputers faster than 100 petaflops (100 quadrillion floating-point operations per second).

China is preparing the 100-petaflops Tianhe-2 (Anthony, 2013) for deployment by 2015; in November 2013 it became No. 1 at only 33 petaflops. It is the successor to Tianhe-1A, a supercomputer that briefly held the title of world's fastest back in 2010 (a first for China).
