Optimizing Techniques for OpenCL Programs on Heterogeneous Platforms

Slo-Li Chu, Chih-Chieh Hsiao
Copyright © 2012 | Pages: 15
DOI: 10.4018/jghpc.2012070103

Abstract

Heterogeneous platforms consisting of a CPU and add-on streaming processors are widely used in modern computer systems. These add-on processors provide substantially more computation capability and memory bandwidth than conventional multi-core platforms, and general-purpose computations can also be offloaded onto them. To exploit their potential performance, however, programming these streaming processors is challenging because of their diverse underlying architectural characteristics. Several optimization techniques are applied on OpenCL-compatible heterogeneous platforms to achieve thread-level, data-level, and instruction-level parallelism. The architectural implications of these techniques and the resulting optimization principles are discussed. Finally, a case study of the MRI-Q benchmark illustrates the capabilities of these optimization techniques. The experimental results reveal that the speedup from the non-optimized to the optimized kernel varies from 8 to 63 on different target platforms.

1. Introduction

The continuous growth of semiconductor technology allows multiple cores to be integrated into a single chip to advance processor performance. Moreover, new types of accelerators, e.g., Graphics Processing Units (GPUs) and the IBM Cell processor, provide several times the computing power of state-of-the-art multicore CPUs. These accelerators are diverse and heterogeneous, consisting of VLIW, multithreaded, and SIMD processing cores, complex memory hierarchies, and varied memory-access mechanisms. To utilize the high performance potential of these accelerators, special programming techniques are required to overcome the parallelization challenges of these heterogeneous architectures.

In the past, writing parallel programs for these high-performance heterogeneous computer systems required familiarity with graphics APIs or vendor-specific APIs. These APIs and programming paradigms are extremely difficult to work with. The most popular parallel programming paradigms, such as OpenMP and MPI, are unsuitable for these heterogeneous platforms. Also, the vendor-provided GPU paradigms such as CUDA and CAL cannot be migrated to different platforms and compilation environments. Accordingly, the OpenCL standard (Khronos, 2009) has been established. Programs written in OpenCL can migrate easily between diverse architectures without modification.

Although OpenCL is computationally powerful and compatible with different platforms, fully utilizing OpenCL devices requires careful tuning of computing kernels. Figure 1 compares speedups between optimized and naïvely non-optimized OpenCL MRI-Q (IMPACT, 2007) kernels on five different GPUs. The significant speedups from the non-optimized to the optimized versions indicate the importance of exploring the optimization space of OpenCL programs on GPUs and accelerators.

Figure 1. The performance difference between optimized and non-optimized OpenCL "MRI-Q" kernels

Accordingly, this study presents several workloads in OpenCL and discusses the architectural implications of the underlying hardware. Several optimization techniques for OpenCL devices, including Massive Multithreading, Vectorization, Tiling, and Privatization, are discussed. The contributions of this paper are: i) an analysis of these optimizations across platforms with respect to their architectural characteristics, ii) an optimization guideline for programmers dealing with different hardware devices, and iii) two different approaches to vectorization that increase the utilization of hardware resources.
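As a brief, purely illustrative sketch of the vectorization idea (not necessarily either of the two approaches developed later in the paper), an OpenCL kernel can be rewritten to use the built-in float4 vector type so that each work-item processes four elements, which maps more naturally onto the VLIW/SIMD lanes of the stream cores; the kernel names below are hypothetical:

// Illustrative sketch only: a scalar kernel and an explicitly vectorized
// variant using OpenCL's built-in float4 type.
__kernel void vadd_scalar(__global const float *a,
                          __global const float *b,
                          __global float *c)
{
    int i = get_global_id(0);   // one element per work-item
    c[i] = a[i] + b[i];
}

__kernel void vadd_float4(__global const float4 *a,
                          __global const float4 *b,
                          __global float4 *c)
{
    int i = get_global_id(0);   // four elements per work-item
    c[i] = a[i] + b[i];         // component-wise float4 addition
}

With the float4 variant, the NDRange is launched with one quarter as many work-items, and each work-item issues wider operations that the compiler can pack into the VLIW or SIMD lanes of the underlying hardware.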

This paper is organized as follows. Section 2 presents the OpenCL parallel programming paradigm. Section 3 discusses the architectures of OpenCL-compatible computing devices. Section 4 describes optimization techniques for overcoming architectural limitations, along with optimization guidelines. A case study applying various optimizations to MRI-Q is presented in Section 5. Finally, related work and conclusions are given in Sections 6 and 7, respectively.


2. OpenCL Programming Paradigms

OpenCL is an open standard for cross-platform parallel programming on modern heterogeneous platforms. The purpose of OpenCL is to provide compatible code across different devices, architectures, and applications. Therefore, CPUs, GPUs, and other processors such as DSPs can be used to accelerate computationally intensive or data-parallel applications. The host, which may be a personal computer, embedded system, or supercomputer, provides the OpenCL runtime that offloads computations onto the computing devices. A computing device may be a CPU, GPU, DSP, or an accelerator such as the SPE in the IBM Cell processor, and it executes OpenCL kernels written in the C99-based OpenCL language. Each device contains several compute units, each consisting of multiple processing elements, and has its own device memory (global/constant memory) to store input data. In GPUs, each SIMD engine can be treated as a compute unit: each of the eighteen SIMD engines in the ATi Radeon HD5850 is a compute unit in OpenCL, as is each SM in the nVidia GTX285. Each compute unit consists of multiple processing elements, which are the stream cores in ATi GPUs and the streaming processors in nVidia GPUs.
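To make the host/device/kernel mapping concrete, the following minimal host-side sketch uses the standard OpenCL 1.x C API to select one GPU device, build a C99-based kernel from source, and launch it over an NDRange. It is a sketch under stated assumptions, not code from the paper: the kernel name vadd, the problem size N = 1024, the work-group size 64, and all variable names are illustrative, and error handling and resource cleanup are omitted.

#include <CL/cl.h>
#include <stdio.h>

// C99-based OpenCL kernel: each work-item adds one pair of elements.
static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };                       /* illustrative problem size */
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id platform;                 /* OpenCL runtime on the host */
    cl_device_id device;                     /* e.g., a GPU compute device */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the kernel from OpenCL C source at run time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    /* Device (global) memory buffers hold the input and output data. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    /* One work-item per element; work-groups are scheduled onto compute units. */
    size_t global = N, local = 64;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, &local, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[1] = %f\n", c[1]);             /* expected 3.0 */
    return 0;
}

In this sketch the host owns platform and device discovery, memory allocation, and kernel launch, while the device-side kernel only describes the work of a single work-item; the runtime maps work-groups onto compute units and work-items onto processing elements as described above.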
