FPGA Memory Optimization in High-Level Synthesis

Mingjie Lin, Juan Escobedo
Copyright © 2020 | Pages: 31
DOI: 10.4018/978-1-5225-9806-0.ch003

Abstract

High-level synthesis (HLS) with FPGAs can achieve significant performance improvements through effective memory partitioning and meticulous data reuse. In this chapter, the authors first explore techniques adopted directly from systems that possess a fixed memory subsystem, such as CPUs and GPUs (Section 2). Section 3 focuses on techniques developed specifically for reconfigurable architectures, which generate custom memory subsystems to take advantage of the peculiarities of a family of affine codes called stencil codes. Section 3.1 covers techniques that exploit memory banking to allow parallel, conflict-free memory accesses, and Section 3.2 covers techniques that generate an optimal memory micro-architecture for data reuse. Finally, Section 4 explores techniques for handling code that still belongs to the affine family but in which the relative distance between the accessed addresses is not fixed.
Chapter Preview

Introduction

Despite all the attention processing power receives, the fact is that it can be wasted if the processor does not have enough data to crunch. If the memory system cannot deliver data fast enough to keep the processing cores busy, those extremely powerful units will sit idle most of the time, which translates into underutilized resources and wasted energy.

For that reason, memory subsystems have been the focus of extensive research aimed at finding smarter ways to exploit the available bandwidth and to avoid problems related to access latency, which are the most common causes of bottlenecks in modern computer systems. The roofline model (Williams et al., 2009) seeks to describe the behavior of a system with a particular memory bandwidth under a given computational load.

Figure 1. Roofline model. O1 is a computation bounded by the available bandwidth, while O2 is bounded by processing power.

The model, shown in Figure 1, is a graph whose X axis measures operational intensity in operations per byte: that is, how many arithmetic operations (usually floating-point operations, or FLOPs) are performed per byte of data moved to or from memory. The Y axis shows attainable performance, measured in operations per second. We say the system is memory bound when increasing the available bandwidth would increase the number of arithmetic operations completed per second. On the other hand, more complex operations such as square root require several clock cycles to complete after the data they need has been fetched. In that case, we have an operation that is compute bound, because an increase in bandwidth does not change the number of arithmetic operations we can complete per second. This is the desired point of operation, since processing power is orders of magnitude faster than memory access. A modern Intel Core i7 processor can demand up to 409.6 GB/s, while even the most modern DDR4-based systems can only provide roughly 9% of the maximum bandwidth that the processor demands (Gaur et al., 2017).
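
To make the bound concrete: attainable performance is the minimum of the compute roof and the product of bandwidth and operational intensity. The following C++ sketch, using made-up peak and bandwidth figures rather than numbers from the chapter, classifies a kernel as memory or compute bound in the spirit of O1 and O2 in Figure 1:

    #include <algorithm>
    #include <cstdio>
    #include <initializer_list>

    // Roofline bound: performance is capped either by the compute roof or by
    // the bandwidth-limited slope, whichever is lower.
    double attainable_gflops(double peak_gflops, double bw_gbytes_per_s,
                             double ops_per_byte) {
        return std::min(peak_gflops, bw_gbytes_per_s * ops_per_byte);
    }

    int main() {
        const double peak = 100.0; // assumed compute roof, GFLOP/s
        const double bw = 10.0;    // assumed memory bandwidth, GB/s
        // 0.5 ops/byte behaves like O1 (memory bound);
        // 25 ops/byte behaves like O2 (compute bound).
        for (double oi : {0.5, 25.0}) {
            double perf = attainable_gflops(peak, bw, oi);
            std::printf("OI = %5.1f ops/byte -> %6.1f GFLOP/s (%s bound)\n",
                        oi, perf, perf < peak ? "memory" : "compute");
        }
        return 0;
    }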

Adapting the roofline model to reconfigurable architectures and high-level synthesis, to take advantage of its expressiveness in determining the source of a bottleneck, has proven difficult because, unlike in traditional architectures, one can instantiate more processing elements, effectively increasing the computational power of the device for the specific task. One proposed model therefore also measures how many resources additional processing elements would utilize out of the total available and incorporates that into the computational performance (da Silva et al., 2013).
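
As a rough sketch of that idea, and not the exact formulation of da Silva et al. (2013), one can let the compute roof scale with the number of processing elements (PEs) that the device's resource budget allows:

    #include <algorithm>

    // Hypothetical FPGA roofline: the compute roof is not fixed but grows
    // with the number of PEs that fit in the available resources.
    double fpga_attainable_gflops(double gflops_per_pe, double resources_per_pe,
                                  double total_resources,
                                  double bw_gbytes_per_s, double ops_per_byte) {
        double max_pes = total_resources / resources_per_pe; // area-limited PE count
        double compute_roof = gflops_per_pe * max_pes;       // roof scales with area
        return std::min(compute_roof, bw_gbytes_per_s * ops_per_byte);
    }

Here the roof itself becomes a design parameter: spending more area on PEs raises the attainable compute performance until the kernel becomes memory bound again.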

Solutions to the memory bandwidth problem usually involve exploiting the size-speed trade-off of different types of memory. One of the most widely implemented techniques is known as caching. Caching uses a type of memory called a cache and takes advantage of the fact that most code reuses data that either has been used recently or is "free" to access because it is already in the cache, having been brought in with a recent batch. Once data is brought into the cache, not only does it sit physically closer to the processing unit, but the memory itself is constructed in such a way that accessing it is orders of magnitude faster than accessing RAM. Computing units or cores can have several levels of this type of memory, each level slower than the one above it but with more capacity. To fully take advantage of this architecture, prediction algorithms are implemented to determine which data is best evicted from the cache to make room for newer data as the code executes. Modern prediction schemes can determine the best eviction policy for a given program with effectiveness of over 90% (Hennessy et al., 2012). But even with this, the bandwidth gap is too wide to solve the problem at hand. Some researchers have even considered exploiting the comparatively smaller, but available, bandwidth of RAM to create a compound channel that uses all the available bandwidth instead of relying on the fast cache memory alone (Gaur et al., 2017).
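
To make the eviction idea concrete, the following is a minimal sketch of the classic least-recently-used (LRU) policy; the class and its capacity are illustrative and not taken from the chapter:

    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <unordered_map>

    // Minimal LRU cache model: on a miss at full capacity, the line that was
    // touched least recently (the back of the recency list) is evicted.
    class LruCache {
        std::size_t capacity_;
        std::list<std::uint64_t> recency_; // front = most recently used
        std::unordered_map<std::uint64_t,
                           std::list<std::uint64_t>::iterator> lines_;
    public:
        explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

        // Returns true on a hit. On a miss the address is inserted, evicting
        // the least recently used line if the cache is full.
        bool access(std::uint64_t addr) {
            auto it = lines_.find(addr);
            if (it != lines_.end()) {
                // Hit: move the line to the front of the recency list.
                recency_.splice(recency_.begin(), recency_, it->second);
                return true;
            }
            if (lines_.size() == capacity_) {
                lines_.erase(recency_.back()); // evict the LRU line
                recency_.pop_back();
            }
            recency_.push_front(addr);
            lines_[addr] = recency_.begin();
            return false;
        }
    };

Hardware replacement policies approximate heuristics like this one; the prediction schemes cited above refine them further.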
