Parallelization and Load Balancing Techniques for HPC

Siddhartha Khaitan, James D. McCalley
Copyright: © 2014 | Pages: 7
DOI: 10.4018/978-1-4666-5202-6.ch159

Introduction

As multicore systems become ubiquitous in desktop, mobile, and embedded systems, interest in high performance computing (HPC) techniques has increased. Further, several computation-intensive tasks demand the use of high performance computing resources (Raju et al., 2009; Pande et al., 2009; Varré et al., 2011; Gupta et al., 2008), since sequential computing platforms are proving incapable of fulfilling the computational demands in these domains. Hence, researchers are turning to parallelization techniques. However, parallelization also brings the need to achieve load balancing, since an unbalanced load distribution is likely to leave processors idle and increase the total completion time.

In this chapter, we discuss three scheduling techniques used for achieving load balancing: static scheduling, master-slave scheduling, and work-stealing. As a concrete example of a parallelization approach, we present multi-threading in Java (Arnold et al., 2000), as shown in the sketch below, and discuss the relative advantages and disadvantages of a multi-threaded implementation in Java.
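
As a minimal, illustrative sketch of Java multi-threading (not taken from the chapter; class and variable names are ours), the following program statically partitions an array across worker threads and joins them to collect partial results:

public class ParallelSum {
    public static void main(String[] args) throws InterruptedException {
        final int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);

        final int numThreads = Runtime.getRuntime().availableProcessors();
        final long[] partialSums = new long[numThreads];
        Thread[] workers = new Thread[numThreads];

        for (int t = 0; t < numThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                // Each worker sums a fixed, contiguous slice of the array.
                int chunk = data.length / numThreads;
                int start = id * chunk;
                int end = (id == numThreads - 1) ? data.length : start + chunk;
                long sum = 0;
                for (int i = start; i < end; i++) {
                    sum += data[i];
                }
                partialSums[id] = sum;
            });
            workers[t].start();
        }

        long total = 0;
        for (int t = 0; t < numThreads; t++) {
            workers[t].join();          // wait for each worker to finish
            total += partialSums[t];
        }
        System.out.println("Total = " + total);
    }
}

Because the partitioning is fixed before execution, this is essentially static scheduling; it balances the load well only when every slice requires roughly the same amount of work.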

Key Terms in this Chapter

Master-Slave Scheduling (MSS): MSS refers to the dynamic scheduling technique where one processor, called the master, is used to schedule tasks on the remaining processors, called slaves.
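
A minimal sketch of the master-slave pattern, assuming Java's standard ExecutorService (class and task names are ours): the master thread submits tasks, and whichever slave thread becomes free picks up the next one.

import java.util.concurrent.*;

public class MasterSlaveDemo {
    public static void main(String[] args) throws Exception {
        // The master (main thread) hands tasks to a fixed pool of slave threads.
        ExecutorService slaves = Executors.newFixedThreadPool(4);
        java.util.List<Future<Integer>> results = new java.util.ArrayList<>();

        for (int task = 0; task < 16; task++) {
            final int id = task;
            // The next free slave executes this task.
            results.add(slaves.submit(() -> id * id));
        }

        int sum = 0;
        for (Future<Integer> f : results) {
            sum += f.get();   // master collects each slave's result
        }
        slaves.shutdown();
        System.out.println("Sum of squares = " + sum);
    }
}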

Time-Slicing: Time-slicing refers to the time-multiplexed execution of different processes by a processor.

Work-Stealing Scheduling (WSS): WSS refers to the dynamic scheduling technique where a processor that has finished its own tasks is allowed to steal a task from another processor with excess tasks. It is also known as task-stealing scheduling.
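
Java's ForkJoinPool is a standard library scheduler based on work-stealing; the following sketch (class and method names are ours) recursively splits a summation so that idle workers can steal queued subtasks.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Recursively splits a range; idle pool threads steal pending subtasks.
class RangeSum extends RecursiveTask<Long> {
    private final long lo, hi;
    RangeSum(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= 10_000) {               // small enough: compute directly
            long sum = 0;
            for (long i = lo; i < hi; i++) sum += i;
            return sum;
        }
        long mid = (lo + hi) / 2;
        RangeSum left = new RangeSum(lo, mid);
        RangeSum right = new RangeSum(mid, hi);
        left.fork();                           // queued; may be stolen by an idle worker
        return right.compute() + left.join();
    }
}

public class WorkStealingDemo {
    public static void main(String[] args) {
        long total = new ForkJoinPool().invoke(new RangeSum(0, 1_000_000));
        System.out.println("Sum = " + total);
    }
}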

Race Condition: A race condition occurs when multiple parallel processes access and modify a shared resource in such a way that the result depends on the unpredictable ordering of their operations and may be unexpected.
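
A small illustrative Java example (names ours): two threads increment a shared counter without synchronization, so the final value is typically less than the expected 200000 because updates are lost.

public class RaceDemo {
    private static int counter = 0;   // shared, unprotected resource

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;            // read-modify-write is not atomic
            }
        };
        Thread a = new Thread(increment);
        Thread b = new Thread(increment);
        a.start(); b.start();
        a.join(); b.join();
        // Expected 200000, but lost updates usually make it smaller.
        System.out.println("counter = " + counter);
    }
}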

High Performance Computing (HPC): Use of parallelization techniques for achieving high performance.

Amdahl’s Law: Amdahl's law states that the maximum speedup achievable through parallelization of a program is limited by the reciprocal of the fraction of the program that must be executed serially.
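
For example, if 10% of a program must run serially (s = 0.10), the speedup on N processors is bounded by 1 / (s + (1 − s)/N), which approaches 1/s = 10 as N grows. A small illustrative computation (values are ours):

public class AmdahlDemo {
    public static void main(String[] args) {
        double serialFraction = 0.10;          // assumed serial portion of the program
        for (int n : new int[] {2, 4, 16, 1024}) {
            double speedup = 1.0 / (serialFraction + (1.0 - serialFraction) / n);
            System.out.printf("N = %4d  speedup <= %.2f%n", n, speedup);
        }
        // As N grows, the bound approaches 1 / serialFraction = 10.
    }
}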

Parallel Programming: Programming with multiple computational processes which run in parallel and may interact with each other.

Load-Balancing: Assignment of workload to different processes such that each process receives an approximately equal computational load.

Synchronization Primitives: Synchronization primitives refer to simple software mechanisms provided by the operating system for the purpose of supporting process synchronization and avoiding deadlocks.
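
A minimal sketch using one such primitive, Java's ReentrantLock, to protect the shared counter from the race condition described above (names ours):

import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private static int counter = 0;
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();          // mutual exclusion around the shared counter
                try {
                    counter++;
                } finally {
                    lock.unlock();    // always release, even on exceptions
                }
            }
        };
        Thread a = new Thread(increment);
        Thread b = new Thread(increment);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("counter = " + counter);   // reliably 200000
    }
}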
