Applications and Current Challenges of Supercomputing across Multiple Domains of Computational Sciences

Neha Gupta
Copyright © 2015 | Pages: 26
DOI: 10.4018/978-1-4666-7461-5.ch003

Abstract

Supercomputing is a contemporary solution for performing calculations at extremely high speeds, on the order of nanoseconds. Presently, there are different forms of supercomputing, such as Cloud Computing, High Performance Computing, and Grid Computing, offered by companies like Amazon (Amazon Web Services), Microsoft (Azure), and Google (Google Cloud Platform). Supercomputers play an important role in the field of Computer Science and are used for a wide range of computationally intensive tasks across domains like Bioinformatics, Computational Earth and Atmospheric Sciences, Computational Materials Sciences and Engineering, Computational Chemistry, Computational Fluid Dynamics, Computational Physics, Computational and Data-Enabled Social Sciences, Aerospace, Manufacturing, Industrial Applications, Computational Medicine, and Biomedical Engineering. However, many issues must still be solved to develop next-generation supercomputers. In this chapter, the potential applications and current challenges of supercomputing across these domains are explained in detail. The current status of supercomputing and its limitations are discussed, which forms the basis for future work in these areas. Future ideas that can be applied efficiently with the availability of good computing resources are explained coherently in this chapter.
Chapter Preview

Introduction

Due to remarkable advances in computer technology, scientists are looking for better digital supercomputing tools to deal with the complexity of their datasets. Supercomputers are the fastest computers we know of. They are characterized by very high computational speed and an immense number of processors (Chinta, 2013), and they typically fill a large room at a corporation, research center, or government facility. The speed of a supercomputer is measured in FLOPS, floating point operations per second (Chinta, 2013). Simply put, floating point operations are computations involving very large decimal numbers, sometimes hundreds of digits in a single number. The ten fastest supercomputers in the world are Titan, Sequoia, K Computer, Mira, JUQUEEN, SuperMUC, Stampede, Tianhe-1A, Fermi, and the DARPA Trial Subset (Chinta, 2013).
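As a rough illustration of how FLOPS is estimated in practice, the following sketch times a dense matrix multiplication and divides its nominal operation count by the elapsed time. The matrix size and the use of NumPy are illustrative assumptions on the editor's part, not details from the chapter.

    import time
    import numpy as np

    n = 2048  # illustrative matrix size (assumption)
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    # A dense n-by-n matrix multiply performs roughly 2*n**3 floating
    # point operations (n**3 multiplications plus n**3 additions).
    flops = 2 * n**3 / elapsed
    print(f"Approximate throughput: {flops / 1e9:.1f} GFLOPS")

Benchmarks such as LINPACK, which ranks the machines named above, apply the same idea at vastly larger scale.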

The first supercomputer, designed by Seymour Cray, appeared in the 1960s. Supercomputers have a wide range of applications, such as constructing weather maps, designing nuclear weapons, finding oil, and predicting earthquakes. They are also used in space exploration, environmental simulations of global warming effects, mathematics, physics, and medicine. The contemporary supercomputer is a high performance cluster with a tightly coupled, high-speed interconnect that runs parallel applications. Supercomputing is currently in the middle of large technological, architectural, and application changes that greatly affect the way programmers think about the system. Computational methods have become very important in many scientific and engineering areas where calculation is the limiting factor. Supercomputers can help address these problems, provided they are built with sound functional architectures.

One alternative in supercomputing is the GPU. GPUs perform well and currently dominate the accelerator market, yet they are not a clear-cut option (Fielden, 2013). Working with GPUs can be trickier than working with CPUs: existing software must be ported to the GPU, which demands additional time and money (Fielden, 2013). In data-intensive industries like life sciences, manufacturing, earth sciences, and materials sciences, the volume and speed of streaming data that must be analyzed are pushing the boundaries of hardware capabilities. It is essential to bring the power of cutting-edge supercomputing technologies to the toughest data challenges these industries face every day.
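To give a flavor of what a GPU port involves, the sketch below contrasts a CPU computation in NumPy with the same computation moved to a GPU using CuPy. CuPy is chosen here only as one illustrative GPU array library; it is not discussed in the chapter, and the example assumes a CUDA-capable GPU is available.

    import numpy as np

    # CPU version: element-wise work on a large array.
    x = np.random.rand(10_000_000)
    y_cpu = np.sqrt(x) * 2.0 + 1.0

    # GPU version. The arithmetic looks nearly identical, but data must
    # be explicitly moved to and from the device, and real applications
    # usually need deeper restructuring than this trivial case suggests.
    import cupy as cp
    x_gpu = cp.asarray(x)           # host -> device transfer
    y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0
    y = cp.asnumpy(y_gpu)           # device -> host transfer

The hidden cost lies in those transfers and in rewriting the parts of a legacy code base that do not map cleanly onto the GPU's execution model, which is why porting consumes the time and money noted above.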

Most supercomputers are clusters of MIMD multiprocessors, each processor of which is SIMD. A SIMD processor executes the same instruction on more than one set of data at the same time, while MIMD achieves parallelism by using a number of processors that function asynchronously and independently. Currently, data is growing at a very rapid rate, but most of it is merely stored and never used to extract meaningful information. There is therefore a pressing need to develop proper mechanisms for processing these large datasets and extracting useful knowledge for better decision making.
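A minimal sketch of the two parallelism styles, using Python only as a stand-in for real supercomputer programming models (vector units for SIMD, MPI-style processes for MIMD): the vectorized NumPy expression applies one instruction across many data elements, while the process pool runs independent instruction streams on different data.

    import numpy as np
    from multiprocessing import Pool

    # SIMD-style parallelism: one instruction applied to many data
    # elements at once (NumPy dispatches to vectorized hardware ops).
    data = np.arange(1_000_000, dtype=np.float64)
    simd_result = data * 2.0 + 1.0

    def task(chunk_id):
        # Each worker runs its own instruction stream asynchronously on
        # its own data, which is the essence of the MIMD model.
        lo, hi = chunk_id * 1000, (chunk_id + 1) * 1000
        return sum(i * i for i in range(lo, hi))

    # MIMD-style parallelism: separate processes, separate data.
    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            mimd_results = pool.map(task, range(8))
        print(simd_result[:3], sum(mimd_results))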

In recent years, supercomputers have become essential tools for scientists and engineers who have to manipulate large amounts of data quickly. Next to supercomputers in speed and size are mini-supercomputers. Apart from mainframes and supercomputers, IBM is researching a new stream called quantum computing, which is believed to be faster than supercomputing. Quantum computers operate at scales so small that they work directly with atoms and molecules (Mainframes and Supercomputers, 2012). A quantum computer would be capable of performing millions of calculations at once, and would be able to crack any computer code on Earth (Mainframes and Supercomputers, 2012).
