1. Introduction
Parallel computing (PC) is the simultaneous execution of the same task on multiple processors in order to obtain results faster. The idea rests on the fact that the process of solving a problem can usually be divided into smaller tasks, which may be carried out simultaneously with some coordination. Parallel computing is a term usually used in the area of High Performance Computing (Liang et al., 2009; Bhadoria et al., 2016), where it refers exclusively to performing computations or simulations using multiple processors. Supercomputers are designed to perform parallel computation. The main aim of PC is to maximize the speed of computation.
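The divide-and-coordinate idea described above can be sketched in a few lines of Python. This is a minimal illustration, not taken from the cited works: the function names, the chunking scheme, and the worker count are our own illustrative choices.

```python
# Sketch of task decomposition for parallel computing: a large summation
# is divided into smaller subtasks (chunks), each summed by a separate
# worker process, and the partial results are combined (the coordination
# step). All names here are illustrative, not from the cited literature.
from multiprocessing import Pool

def partial_sum(chunk):
    """Subtask: sum one slice of the data."""
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    """Divide the problem, solve the subtasks in parallel, combine results."""
    size = (len(data) + n_workers - 1) // n_workers  # ceiling division
    pieces = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(n_workers) as pool:
        # pool.map runs partial_sum on each piece concurrently;
        # the outer sum is the sequential coordination step.
        return sum(pool.map(partial_sum, pieces))

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

The speedup of such a scheme is bounded by the sequential coordination work, which is why managing concurrent processes efficiently matters in practice.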
A Multiprocessor System (MTS) uses two or more CPUs within a single computer system. The main purpose of an MTS is to speed up applications by exploiting parallelism among its processing elements. The term also refers to the ability of a system to support more than one processor and to allocate tasks between them (Kettner et al., 2011). A multiprocessor system can be homogeneous or heterogeneous. In a Homogeneous Multiprocessor System (HoMTS), all processors are identical in their speed, cache size, and every other aspect of functionality. A Heterogeneous Multiprocessor System (HeMTS) consists of dissimilar processors that differ in capability and functionality and can perform different types of tasks; such systems achieve performance or energy efficiency not merely by adding more processors of the same type, but by adding unlike coprocessors, usually incorporating specialized processing capabilities to handle particular tasks (Shan et al., 2006).
An MTS has multiple processing elements, multiple I/O units, and multiple memory modules. Each processor can access any of the memory modules and I/O units; this connectivity is provided by a multiprocessor interconnection network, which is used for exchanging data between processors. The performance of an MTS depends on how efficiently the concurrent processes are managed on the system. Multiprocessing is commonly understood as the use of multiple independent processors within a single system, and PC is largely built upon multiprocessor interconnection networks (MINs), which have been widely accepted as the most practical model of parallel computing (Schroeder et al., 2010). Interconnection networks are also called networks, communication subnets, or subsystems; the interconnection of multiple networks is called internetworking. When more than one processor needs to access memory, MINs are required to route data from one processing element to another and from processors to memory (Jamshed et al., 2013). MINs can be broadly categorized as either direct or indirect. A Direct Interconnection Network (DIN) consists of point-to-point communication links among processing nodes that do not change once created; in other words, a DIN forms all connections when the system is designed rather than when a connection is needed, so messages must be routed along the established links (Alam et al., 2015). Examples of DINs include the Hypercube, Crossed Cube, Folded Crossed Cube (FCC), MetaCube (MC), X-Torus, and many more. An Indirect Interconnection Network (IIN) is built using switches, with communication links connected via switches to establish paths among processing nodes and memory modules; in other words, an IIN establishes connections between two or more nodes on the fly as messages are routed along the links (Kulasinghe et al., 1995). Examples of IINs include the Crossbar, Multistage, and Multilevel networks, among others.
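To make the direct-network idea concrete, consider the Hypercube, the first DIN example above. In an n-dimensional hypercube the 2^n nodes are labeled with n-bit numbers, each node links to the n nodes whose labels differ from its own in exactly one bit, and a message can be routed by correcting one differing bit per hop (dimension-order, or e-cube, routing). The following sketch is ours, assuming the standard hypercube definition; the function names are illustrative.

```python
# Sketch of a direct interconnection network: the n-dimensional hypercube.
# Nodes are labeled 0 .. 2^n - 1; two nodes are linked iff their labels
# differ in exactly one bit. Routing fixes differing bits one dimension
# at a time, so a path never exceeds n hops (the Hamming distance).
# Names are illustrative, not from the cited works.

def neighbors(node, n):
    """Nodes directly linked to `node` in an n-dimensional hypercube."""
    return [node ^ (1 << d) for d in range(n)]

def route(src, dst, n):
    """Hop-by-hop path from src to dst along fixed point-to-point links,
    correcting the lowest-numbered differing bit at each step."""
    path, cur = [src], src
    for d in range(n):
        if (cur ^ dst) & (1 << d):   # bit d still differs from dst
            cur ^= 1 << d            # traverse the link in dimension d
            path.append(cur)
    return path
```

For example, in a 3-dimensional hypercube, node 0 (binary 000) has neighbors 1, 2, and 4, and routing from 0 to 7 (binary 111) takes three hops, one per differing bit. An indirect network such as a crossbar would instead establish such a path through switches on demand rather than along fixed links.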