A Load Balancing Strategy with Migration Cost for Independent Batch of Tasks (BoT) on Heterogeneous Multiprocessor Interconnection Networks


Mahfooz Alam (Department of Computer Science, Al-Barkaat College of Graduate Studies, Aligarh, India) and Mohammad Shahid (Department of Commerce, Aligarh Muslim University, Aligarh, India)
Copyright: © 2017 |Pages: 19
DOI: 10.4018/IJAEC.2017070104

Abstract

In high performance computing, heterogeneous Multiprocessor Interconnection Networks (MINs) are used to process compute-intensive applications. These applications are distributed over the heterogeneous computational processors of a MIN arranged in a specific geometrical shape. MINs are also used to transfer tasks between processors in a heterogeneous multistage network for better load balancing. A load balancing algorithm plays a vital role in an interconnection network by minimizing the load imbalance across processors. In this paper, a Load Balancing Strategy with Migration cost (LBSM) is proposed to execute an independent batch of tasks on various heterogeneous MINs, viz. MetaCube, X-Torus and Folded Crossed Cube, with the objective of minimizing the load imbalance on processors. In the simulation study, LBSM is compared with the earlier DLBS algorithm and shows superior performance on the parameters under study. Further, the performance of LBSM has been analyzed on MetaCube, X-Torus and Folded Crossed Cube, and the results are reported accordingly.
Article Preview

1. Introduction

Parallel computing (PC) is the simultaneous execution of the same task on multiple processors in order to obtain results faster. The idea is based on the fact that the process of solving a problem can usually be divided into smaller tasks, which may be carried out simultaneously with some coordination. Parallel computing is a term usually used in the area of High Performance Computing (Liang et al., 2009; Bhadoria et al., 2016). It exclusively refers to performing computations or simulations using multiple processors. Supercomputers are designed to perform parallel computation. The main aim of PC is to maximize the speed of computation.

The Multiprocessor System (MTS) uses two or more CPUs within a single computer system. The main purpose of an MTS is to speed up applications by exploiting parallelism among various processing elements. The term also refers to the ability of a system to support more than one processor, or the ability to allocate tasks between them (Kettner et al., 2011). A multiprocessor system can be homogeneous or heterogeneous. In a Homogeneous Multiprocessor System (HoMTS), all processors are identical in terms of speed, cache size and all other functionality. A Heterogeneous Multiprocessor System (HeMTS) consists of dissimilar processors that differ in capability and functionality but are able to perform different types of tasks. Such systems achieve performance or energy efficiency not just by adding the same type of processors, but by adding unlike coprocessors, usually incorporating specialized processing capabilities to handle particular tasks (Shan et al., 2006).

An MTS has multiple processing elements, multiple I/O units, and multiple memory modules. Each processor can access any of the memory modules and I/O units. The connectivity between these is provided by a multiprocessor interconnection network. Thus, an interconnection network is used to exchange data between two processors in a multistage network. The performance of an MTS depends on how competently the concurrent processes are managed on the system. Multiprocessing is commonly understood as the use of multiple independent processors within a single system. Since PC is largely based upon multiprocessor interconnection networks (MINs), the MIN has usually been accepted as the most practical model of parallel computing (Schroeder et al., 2010). Interconnection networks are also called networks, communication subnets or subsystems; the interconnection of multiple networks is called internetworking. If more than one processor needs to access memory, MINs are needed to route data from one processing element to another and from processor to memory (Jamshed et al., 2013). MINs can be broadly categorized as either direct or indirect. A Direct Interconnection Network (DIN) consists of point-to-point communication links among processing nodes that do not change once created. In other words, a DIN forms all connections when the system is designed rather than when a connection is needed, so messages must be routed along established links (Alam et al., 2015). Examples of DINs are the Hypercube, Crossed Cube, Folded Crossed Cube (FCC), MetaCube (MC), X-Torus and many more. An Indirect Interconnection Network (IIN) is built using switches, with communication links connected via switches to establish paths among processing nodes and memory modules. In other words, an IIN establishes connections between two or more nodes on the fly as messages are routed along the links (Kulasinghe et al., 1995). Examples of IINs are the Crossbar, Multistage, Multilevel and many more.
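To make the notion of a direct network's fixed, point-to-point link structure concrete, the short Python sketch below models the hypercube, the simplest of the DINs listed above. The function names are illustrative (not from the paper): in a d-dimensional hypercube, each node's neighbors are obtained by flipping one bit of its binary address, and the minimum routing distance between two nodes is the Hamming distance of their addresses.

```python
def hypercube_neighbors(node: int, dim: int) -> list[int]:
    # In a dim-dimensional hypercube, a node is linked to the
    # nodes whose addresses differ from it in exactly one bit.
    return [node ^ (1 << i) for i in range(dim)]

def routing_hops(src: int, dst: int) -> int:
    # Minimum hops between two hypercube nodes = Hamming distance
    # of their binary addresses (number of differing bits).
    return bin(src ^ dst).count("1")

# Example: node 0 in a 3-dimensional hypercube (8 processors)
print(sorted(hypercube_neighbors(0, 3)))  # [1, 2, 4]
print(routing_hops(0b000, 0b111))         # 3
```

Because these links are fixed at design time, a message from node 000 to node 111 must traverse three established links; an indirect network would instead set up a path through switches on demand.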
