Designing of High Performance Multicore Processor with Improved Cache Configuration and Interconnect

Ram Prasad Mohanty (National University of Singapore, Singapore), Ashok Kumar Turuk (National Institute of Technology, India) and Bibhudatta Sahoo (National Institute of Technology, India)
DOI: 10.4018/978-1-4666-8853-7.ch009

Abstract

The growing number of cores increases the demand for a powerful memory subsystem, which leads to larger caches in multicore processors. Caches give processing elements faster, higher-bandwidth local memory to work with. In this chapter, an attempt has been made to analyze the impact of cache size on the performance of multicore processors by varying the L1 and L2 cache sizes in a multicore processor with internal network (MPIN), referenced from the Niagara architecture. As the number of cores increases, traditional on-chip interconnects such as the bus and crossbar prove to be inefficient and suffer from poor scalability. To overcome the scalability and efficiency issues of these conventional interconnects, a ring-based design has been proposed. The effect of the interconnect on the performance of multicore processors has been analyzed, and a novel scalable on-chip interconnection mechanism (INoC) for multicore processors has been proposed. Benchmark results are presented using a full-system simulator. Results show that, compared with the MPIN, the proposed INoC significantly reduces execution time.

Introduction

A multicore processor is capable of executing multi-threaded applications faster than a multiprocessor system consisting of multiple single-core processors. The reason is that the cores can communicate faster within a multicore processor because of the short distance between them. It is also cheaper to have a multicore processor than multiple coupled single-core processors (Aggarwal et al., 2007). This development has led to the concept of network-on-chip (NoC). Before this concept, system-on-chip (SoC) designs relied on complex traditional interconnects, such as bus structures, to connect the cores to memory and I/O. Traditional bus structures were improved for use as interconnects in multicore processors. However, as the number of cores grew, the bus proved inefficient as an interconnect because of its increasing complexity. Moreover, the bus does not scale well with an increase in the number of cores. To address the scalability problem in multicore processors, NoC is used (Akhter & Roberts, 2006) and (Balakrishnan et al., 2005). The growing number of cores increases the demand for a powerful memory subsystem, so the importance of caches in multicore processors has grown. Caches give processing elements faster, higher-bandwidth local memory to work with. This becomes crucial when many cores are trying to access relatively slower, lower-bandwidth off-chip memory (Beavers, 2009), and it has led to proposals of improved memory subsystems and cache configurations for multicore processors. In this chapter, an attempt has been made to analyze the impact of cache size on the performance of multicore processors by varying the L1 and L2 cache sizes. A novel cache configuration for enhanced performance has also been proposed, based on a multicore processor with an internal network architecture.
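As a rough illustration of why L1/L2 sizing matters, the toy model below (a hypothetical direct-mapped cache simulator written for this discussion, not the chapter's MPIN configuration) estimates the hit rate for a synthetic strided address trace as the cache grows past the working-set size:

```python
# Toy direct-mapped cache model (illustrative only): estimates hit rate
# for a synthetic trace as cache size varies relative to the working set.

def hit_rate(trace, cache_size, line_size=64):
    num_lines = cache_size // line_size
    tags = [None] * num_lines          # one tag per cache line
    hits = 0
    for addr in trace:
        block = addr // line_size      # line-aligned block number
        index = block % num_lines      # direct-mapped index
        if tags[index] == block:
            hits += 1
        else:
            tags[index] = block        # miss: fill the line
    return hits / len(trace)

# Synthetic trace: repeatedly sweep a 32 KiB working set, line by line.
trace = [i * 64 % (32 * 1024) for i in range(4096)] * 4

for size_kib in (8, 16, 32, 64):
    rate = hit_rate(trace, size_kib * 1024)
    print(f"{size_kib:>3} KiB cache: hit rate {rate:.2f}")
```

Once the cache covers the 32 KiB working set, almost every access after the first sweep hits; below that size, the sequential sweep evicts each line before reuse, so the hit rate collapses. This is the first-order effect the chapter measures by varying L1 and L2 sizes.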
Here, we analyze the effect of interconnection on multicore processors and propose a novel, highly scalable, on-chip interconnection mechanism for multicore processors. A major goal of any processor design is to reduce cache access time, which strongly impacts the performance of a processor. Researchers have examined cache organization in the design of high-performance multicore processors (Creeger, 2005) and (Beavers, 2009). Current advancements in cache memory subsystems include additional levels of cache and larger cache sizes in multicore processors to enhance performance. The on-chip interconnect is the communication subsystem that links the various components of a multicore processor. Recent research shows that on-chip components such as cache memory have a significant impact on the performance of multicore systems (Bhowmik & Govindaraju, 2012). Interconnection among the cores of a multicore processor has posed a great challenge. With an increase in the number of cores, traditional on-chip interconnects such as the bus and crossbar have proved to be less efficient; moreover, they suffer from poor scalability. With the steady growth in the number of cores, even the ring-based interconnect becomes infeasible. This forces the designer to look for a novel way of interconnecting the cores without degrading efficiency and scalability. Two major issues addressed in this chapter are:

  • The effect of cache size on the performance of multicore processors, analyzed using different performance parameters, together with a suggested cache configuration for performance enhancement in multicore processors.

  • A scalable interconnection mechanism for multicore processors, compared with an existing multicore architecture.
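The scalability contrast behind the second issue can be made concrete with a first-order model. The sketch below (illustrative assumptions only; `topology_stats` is hypothetical and unrelated to the chapter's INoC) counts the links and the worst-case hop distance for the common on-chip topologies discussed above:

```python
import math

# First-order scaling model (illustrative, not the chapter's INoC):
# link count and worst-case hop distance for common on-chip topologies.

def topology_stats(n):
    """Return {topology: (links, diameter_in_hops)} for n cores.

    Assumes n is a perfect square so the 2D mesh is well defined.
    """
    side = math.isqrt(n)
    assert side * side == n, "n must be a perfect square for the mesh model"
    return {
        "bus":      (1, 0),          # one shared medium; contention, not hops, grows with n
        "crossbar": (n * n, 1),      # n^2 crosspoints, always a single hop
        "ring":     (n, n // 2),     # diameter grows linearly with core count
        "mesh":     (2 * side * (side - 1), 2 * (side - 1)),  # diameter grows as O(sqrt(n))
    }

for n in (4, 16, 64):
    print(n, "cores:", topology_stats(n))
```

At 64 cores, the crossbar already needs 4,096 crosspoints while the ring's worst-case path is 32 hops, which illustrates why neither extreme scales gracefully and why a different interconnection mechanism is worth pursuing.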

In this chapter, the impact of cache size, associativity, and interconnection mechanism on the performance of multicore processors has been analyzed. An optimized cache configuration (Bienia et al., 2008) for performance optimization in general-purpose multicore processors and multicore mobile processors has been proposed. An attempt has been made to handle the communication delay and scalability issues of multicore processors through a novel interconnection mechanism termed the Interconnection Network-on-Chip (INoC). This chapter discusses the design of high-performance multicore processors with improved cache configurations and interconnects.
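The associativity effect mentioned above can also be sketched with a small model. The toy set-associative cache below (a hypothetical LRU model written for illustration, not the chapter's configuration) shows how adding ways absorbs conflict misses that ruin a direct-mapped cache on a ping-ponging access pattern:

```python
from collections import OrderedDict

# Toy set-associative cache with LRU replacement (illustrative only):
# shows associativity absorbing conflict misses between two hot blocks.

def hit_rate(trace, num_sets, ways, line_size=64):
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for addr in trace:
        block = addr // line_size
        s = sets[block % num_sets]     # set selected by block index
        if block in s:
            s.move_to_end(block)       # refresh LRU position
            hits += 1
        else:
            if len(s) >= ways:
                s.popitem(last=False)  # evict least recently used way
            s[block] = True
    return hits / len(trace)

# Two blocks that collide in the same set of any power-of-two indexed cache.
trace = [0, 64 * 1024] * 1000

print("1-way (direct-mapped):", hit_rate(trace, num_sets=64, ways=1))  # ~0.0
print("2-way set-associative:", hit_rate(trace, num_sets=64, ways=2))  # ~1.0
```

With one way, the two blocks evict each other on every access; with two ways, both reside in the set and nearly every access hits. This is why the chapter treats associativity, alongside size, as a first-class cache design parameter.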
