A Survey of FPGA Dynamic Reconfiguration Design Methodology and Applications

Ming Liu (Justus-Liebig-Universität Giessen, Germany and Royal Institute of Technology, Sweden), Zhonghai Lu (Royal Institute of Technology, Sweden), Wolfgang Kuehn (Justus-Liebig-Universität Giessen, Germany) and Axel Jantsch (Royal Institute of Technology, Sweden)
DOI: 10.4018/jertcs.2012040102

Abstract

FPGA Dynamic Partial Reconfiguration (DPR or PR) technology has emerged and gradually matured in recent years. It brings Time-Division Multiplexing (TDM) to the utilization of on-chip resources and yields significant benefits compared with conventional static designs. However, the partially reconfigurable design process imposes additional complexity and technical requirements on FPGA developers. Hence, PR design approaches are being widely explored and investigated to systematize the development methodology and ease the designers' burden. In this paper, the authors collect several research and engineering projects in this area and present a survey of the design methodology and applications of PR. Research aspects are discussed across the various hardware/software layers.

1. Introduction

Programmable Logic Devices (PLD), especially Field-Programmable Gate Arrays (FPGA), were originally used as glue logic in their early years. Due to the capacity and clock-frequency constraints of the time, they typically bridged Application-Specific Integrated Circuit (ASIC) chips by adapting signal formats or performing simple logic operations. Today, however, modern FPGAs have gained enormous capacity and many advanced computation/communication features from semiconductor process development; they can accommodate complete computer systems consisting of hardcore or softcore microprocessors (e.g., hard PowerPC and ARM cores, soft MicroBlaze and NIOS cores), memory controllers, customized hardware accelerators, and peripherals. For instance, the recently launched Xilinx Virtex-7 series features up to 1,995K logic cells, 68 Mb of on-chip Block RAM, 3,600 DSP slices, 96 transceivers running at 28.05 Gbps, and 1,200 General-Purpose I/Os (GPIO). Taking advantage of design IPs and interconnection architectures, it has become practical to implement a system-on-an-FPGA or a Multi-Processor System-on-Chip (MP-SoC) with ease. With the continuous growth in capacity and operating frequency, FPGAs are playing an increasingly important role in embedded system design. The FPGA market reached about 3 and 4 billion US dollars in 2009 and 2010 respectively, and was expected by Xilinx CEO Moshe Gavrielov to grow steadily to 4.5 billion by the end of 2012 and 6 billion by the end of 2015 (Dillien, 2009).

Reconfigurability denotes the capability of coarse-grained arrays (Plessl & Platzner, 2005) or fine-grained FPGAs to change customized designs by loading different configware (Morra, 2006). A more advanced reconfiguration technique, so-called Partial Reconfiguration (PR), enables dynamically reconfiguring a particular section of an FPGA design while the remaining part continues operating. This vendor-dependent technology offers benefits such as adapting hardware modules at system run-time, sharing hardware resources to reduce device count and power consumption, and shortening reconfiguration time (Kao, 2005; Mcdonald, 2008; Choi & Lee, 2006). Typically, partial reconfiguration is achieved by loading the partial bitstream of a new design into the FPGA configuration memory and overwriting the current one; the reconfigurable portion then changes its behavior according to the newly loaded configuration.
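The frame-overwrite mechanism described above can be illustrated with a minimal Python model. This is a sketch, not a vendor API: the names `ConfigMemory` and `load_partial`, the frame addresses, and the module labels are all invented for illustration. The point is simply that a partial bitstream carries configuration data for the frames of one reconfigurable region only, so loading it leaves the static part of the design untouched.

```python
# Illustrative model of FPGA partial reconfiguration (hypothetical names,
# not a real vendor API). Configuration memory is modeled as addressable
# frames; a partial bitstream overwrites only the frames belonging to one
# reconfigurable region while all other frames keep their configuration.

class ConfigMemory:
    def __init__(self, num_frames):
        # All frames start with the initial (static) configuration.
        self.frames = {addr: "static" for addr in range(num_frames)}

    def load_partial(self, bitstream):
        # A partial bitstream is a sequence of (frame_address, frame_data)
        # pairs covering a single reconfigurable region.
        for addr, data in bitstream:
            self.frames[addr] = data

# Device with 8 frames; frames 2-4 form the reconfigurable region.
cfg = ConfigMemory(8)
partial_bitstream = [(2, "filter_v2"), (3, "filter_v2"), (4, "filter_v2")]
cfg.load_partial(partial_bitstream)

assert cfg.frames[3] == "filter_v2"   # region reconfigured
assert cfg.frames[0] == "static"      # static logic unaffected
```

On real devices the overwrite happens through a configuration port (e.g., an internal access port), but the invariant is the same: only the addressed frames change while the rest of the fabric keeps running.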

Adaptive computing tailors functional modules to ambient conditions at system run-time. It intelligently manages on-chip computing resources and improves their utilization efficiency (Master, 2002). The self-awareness of adaptive computing distinguishes it from existing computational models, which are mostly procedural and simply collections of static functional components. Typically, an adaptive system stays aware of its context and changes its processing behavior in response to trigger events such as workload variations, shifts of computational interest, or environmental conditions. One major prerequisite of adaptive computing is the reprogrammability of the computer system: on a General-Purpose microprocessor (GPCPU), different computation tasks can easily be accomplished by conditionally branching to different instructions. Hardware circuits, by contrast, are not straightforwardly adaptable, since they lack the intrinsic time-sharing of CPU cores that software computation enjoys. Conventionally, the various tasks are implemented as individual hardware modules, statically mapped onto the FPGA and instantiated for the entire system lifetime, even though some of them may operate only occasionally or never simultaneously. PR technology now provides much convenience in realizing adaptive computing scenarios: it maintains the basic system functions while specific algorithms or algorithm steps are freely adjusted. It is PR that first introduced the concept of Time-Division Multiplexing (TDM) into FPGA resource management, with the consequent benefits.
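The event-triggered behavior sketched above can be made concrete with a small, hypothetical reconfiguration manager: a table maps trigger events to partial bitstreams, so several hardware tasks time-share one reconfigurable region. All event names, bitstream file names, and the `AdaptiveManager` class are invented for illustration; on a real system the assignment to `loaded` would stand in for an actual partial bitstream download.

```python
# Hedged sketch of an adaptive reconfiguration manager. One region is
# time-shared (TDM) by several hardware tasks; a trigger event selects
# which partial bitstream the region should hold next. Names below are
# illustrative, not from any vendor toolflow.

MODULE_TABLE = {
    "video_stream": "h264_decoder.bit",   # heavy video workload
    "crypto_burst": "aes_core.bit",       # burst of encryption requests
    "idle":         "low_power_stub.bit", # nothing to do: park the region
}

class AdaptiveManager:
    def __init__(self):
        self.loaded = None          # bitstream currently in the region
        self.reconfig_count = 0     # how many reconfigurations occurred

    def on_event(self, event):
        target = MODULE_TABLE[event]
        if target != self.loaded:   # reconfigure only when the module changes
            self.loaded = target    # stands in for a partial bitstream load
            self.reconfig_count += 1

mgr = AdaptiveManager()
for ev in ["video_stream", "video_stream", "crypto_burst", "idle"]:
    mgr.on_event(ev)

assert mgr.loaded == "low_power_stub.bit"
assert mgr.reconfig_count == 3   # the repeated event triggered no reload
```

Guarding the load with a change check matters in practice because partial reconfiguration, while faster than full reconfiguration, still costs time proportional to the bitstream size.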
