
V.E. Malyshkin (Russian Academy of Sciences, Russia)

DOI: 10.4018/978-1-60566-661-7.ch013

Chapter Preview

Parallel implementation of realistic numerical models, based on direct numerical simulation of a physical phenomenon from a description of its behaviour in a local area, usually requires high-performance computation. However, even the algorithms of these models that are based on regular data structures (such as a rectangular mesh) are remarkable for irregularity, and even dynamically changing irregularity, of the data structure (adaptive meshes, variable time steps, particles, etc.). In the PIC method, for example, the test particles are the source of such irregularity. These models are therefore very difficult to parallelize effectively and to implement with high performance using conventional programming languages and systems.

The Assembly Technology (AT) (Kraeva & Malyshkin, 1997; Kraeva & Malyshkin, 1999; Valkovskii & Malyshkin, 1988) was created specifically to support the development of fragmented parallel programs for multicomputers. Fragmentation and dynamic load balancing are the key features of programming and program execution under AT. The application of AT to the implementation of large-scale numerical models is demonstrated here by a parallel implementation of the PIC method (Berezin & Vshivkov, 1980; Hockney & Eastwood, 1981; Kraeva & Malyshkin, 2001) applied to the problem of energy exchange in a plasma cloud.

In essence, AT integrates the well-known techniques of modular programming and domain decomposition in order to provide a suitable technology for developing parallel programs that implement large-scale numerical models. AT supports precisely the process of assembling the whole program out of atomic fragments of computation.
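To make the assembly idea concrete, here is a minimal sketch, not part of AT itself and with all names illustrative: a 1D mesh is decomposed into strips, each strip is processed as an independent atomic fragment (a Jacobi-style relaxation step that reads only old values, so fragments do not interact within a step), and the strip results are assembled back into the whole mesh.

```python
# Illustrative sketch only: "atomic fragments" are strips of a 1D mesh,
# each processed independently (Jacobi-style, reading only old values)
# and then assembled into the whole result. On a multicomputer the
# fragments would run on different nodes; a thread pool stands in here.
from concurrent.futures import ThreadPoolExecutor

def fragment_step(task):
    # One atomic fragment: relax a strip of cells using halo values
    # copied from the previous (old) state of the mesh.
    left, cells, right = task
    padded = [left] + cells + [right]
    return [(padded[i - 1] + padded[i + 1]) / 2.0
            for i in range(1, len(cells) + 1)]

def assembled_step(mesh, nfrags):
    # Decompose the mesh into strips, attach halo cells, run every
    # fragment independently, then assemble the strips back together.
    n, size = len(mesh), len(mesh) // nfrags
    bounds = [(i * size, (i + 1) * size if i < nfrags - 1 else n)
              for i in range(nfrags)]
    tasks = [(mesh[lo - 1] if lo > 0 else mesh[0],
              mesh[lo:hi],
              mesh[hi] if hi < n else mesh[-1])
             for lo, hi in bounds]
    with ThreadPoolExecutor() as pool:
        parts = pool.map(fragment_step, tasks)
    return [c for part in parts for c in part]
```

Because each fragment reads only the old mesh, the result is independent of the number of fragments: decomposing into more fragments changes where the work runs, not what is computed, which is what makes fragments freely redistributable for dynamic load balancing.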

The process of extracting new knowledge has traditionally consisted of two major components. First, a new fact is found in real physical (chemical, …) experiments. Then a theory is constructed that should explain the new fact and predict unknown ones. The theory serves science until some new fact is found that it cannot explain. This is a long and resource-consuming process: real experiments are often very expensive, and the special equipment for them takes a long time to prepare. Now a third component has been added to the scientific process. Numerical simulation of natural phenomena on supercomputers is used to test a developed theory in numerical experiments rather than in real physical ones. Such numerical experiments also often help to design a new real experiment when one is necessary. Sometimes the parameters of a physical system cannot be measured at all, for example, the processes in plasma or inside the sun. In these cases only numerical simulation can provide arguments to support or reject the theory.

In comparison with real experiments, numerical experiments consume far fewer resources and can be organized very quickly. The investigation of a phenomenon can therefore proceed more quickly, and the phenomenon can be studied more thoroughly in numerous experiments. It is no wonder that modern supercomputers are mostly loaded with large-scale numerical simulation (Kedrinskii, Vshivkov, Dudnikov, Shokin & Lazareva, 2004; Kuksheva, Malyshkin, Nikitin, Snytnikov, Snytnikov & Vshivkov, 2005).

Unfortunately, the development of parallel programs is a very difficult problem. Earlier, sequential programming languages and systems allowed numerical mathematicians to program their numerical models reasonably well without any assistance from professional programmers; it was their “private technology” (Malyshkin, 2006) of programming. The situation is different now. The development of parallel programs is far more difficult and labor-consuming work. In addition, parallel programs are very sensitive to errors, to suboptimal design decisions, and to inefficiencies in programming. As a result, numerical mathematicians are now unable to develop parallel programs implementing their numerical models without the assistance of professional programmers.

The technology of assembling parallel numerical-modeling programs out of ready-made atomic fragments is suggested as such a “private technology” of programming for numerical mathematicians, who often work with a restricted set of numerical methods and algorithms. AT is demonstrated here on a PIC implementation (parallelization/fragmentation of the algorithms and construction of the program).

Cluster: A multicomputer whose communication network has a tree structure.

Multicomputer: A set of computers connected by a communication network and able, with the use of special system software, to jointly solve the same application problem. Well-known topologies of a multicomputer's communication network are the rectangular mesh, tree, torus, and hypercube.

Parallel Programming: The development of programs that can be executed on multicomputers.

Dynamic Load Balancing: Equalizing the workload of a multicomputer's processor elements in the course of program execution in order to achieve better multicomputer performance.

Assembly Technology: A technology of parallel program development for large-scale numerical simulation, based on assembling the whole computation out of atomic fragments of computation. The technology integrates the well-known techniques of modular programming and domain decomposition and is supported by system software.

Dynamic Tunability of a Program to All the Available Resources: The ability of a program to use all the available resources of a multicomputer.

Particle-in-Cell Method: A widely used numerical method for direct simulation of natural phenomena in which matter is represented by a huge number of test particles. Instead of solving a system of partial differential equations in the 6D space of coordinates and velocities, the dynamics of the simulated phenomenon is determined by integrating the equations of motion of every particle over a series of discrete time steps. The method became practical only with the use of supercomputers.
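As an illustration of the definition above, the following minimal 1D sketch shows one discrete time step of the particle push; the function name, the unit charge-to-mass ratio, the periodic domain, and the simple explicit update are assumptions for the example, not the chapter's actual code.

```python
def pic_step(xs, vs, field, dx, dt, length):
    # One discrete time step of a 1D particle-in-cell push:
    # each particle reads the field of the cell it occupies,
    # updates its velocity, then its position (periodic domain).
    new_xs, new_vs = [], []
    for x, v in zip(xs, vs):
        cell = int(x / dx) % len(field)  # the "cell" of particle-in-cell
        v = v + field[cell] * dt         # kick: dv/dt = E (unit q/m)
        x = (x + v * dt) % length        # drift: dx/dt = v
        new_xs.append(x)
        new_vs.append(v)
    return new_xs, new_vs
```

Note that particles migrate between cells as they drift, which is exactly the dynamically changing irregularity mentioned earlier: the number of particles per cell, and hence the workload of each fragment, changes at every time step.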

