Using Inter-Block Synchronization to Improve the Knapsack Problem on GPUs

Xue Sun, Chao-Chin Wu, Liang-Rui Chen, Jian-You Lin
Copyright: © 2018 | Pages: 16
DOI: 10.4018/IJGHPC.2018100105

Abstract

As one of the most popular parallel processors, the general-purpose graphics processing unit (GPU) has been widely adopted to accelerate various time-consuming algorithms. Dynamic programming (DP) is a popular method for solving a particular class of complex problems. This article focuses on mapping serial-monadic DP problems onto NVIDIA GPUs. The 0/1 knapsack problem is one of the most representative problems in this category and arises in many other fields of application, so it is adopted as the case study. Previous work proposed a compression method to reduce the amount of data transferred, but data in shared memory could not be reused. This article demonstrates how to apply a more condensed data structure and inter-block synchronization to efficiently map serial-monadic DP onto GPUs. Computational experiments reveal that the best performance improvement of the proposed approach is about 100% compared with the previous work.

Introduction

Graphics Processing Units (GPUs) have become a powerful tool for high-performance computing in recent years. As specialized processors, GPUs can manipulate large amounts of data efficiently through their highly parallel multi-core architecture. To provide low-level control of GPUs, NVIDIA introduced CUDA (Compute Unified Device Architecture) to ease programming for various types of parallel applications (NVIDIA GPU, 2015). The CUDA platform, which supports programs written in C, C++, FORTRAN, OpenCL and other languages, gives direct access to the virtual instruction set and parallel computational elements of GPUs through the execution of compute kernels (Abi-Chahla, 2015). Using CUDA GPUs to solve large-scale or complex optimization problems remains a major challenge and a hot research topic.

Dynamic programming (DP), also known as dynamic optimization, is a popular method for solving a particular class of complex problems. The DP method breaks a problem up into a collection of smaller sub-problems, solves each of them only once, and saves the results for future reference so as to avoid recomputation. DP formulations can be classified into four categories: serial-monadic, non-serial-monadic, serial-polyadic and non-serial-polyadic (Grama, Gupta, Karypis, and Kumar, 2003). In our previous research, we studied how to use CUDA GPUs to accelerate the solution of non-serial-polyadic DP optimization problems, taking the Optimal Matrix Parenthesization (OMP) problem as an example (Wu, Ke, Lin, and Feng, 2011). In this work, we focus on mapping serial-monadic DP problems onto NVIDIA GPUs. Because the 0/1 knapsack problem is one of the most representative problems in this category and often arises in many other fields of application, we adopt it for this study.
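
To make the serial-monadic structure concrete, the recurrence below is an illustrative sketch (our notation, not reproduced from the article): each stage i of the 0/1 knapsack DP reads only the single previous stage, where f_i(d) denotes the best profit achievable with the first i items under capacity d, and w_i and p_i are the weight and profit of item i.

```latex
% Illustrative 0/1 knapsack recurrence; symbols f_i, w_i, p_i are ours.
% Stage i reads only stage i-1, which is what makes the DP serial-monadic.
f_0(d) = 0, \qquad
f_i(d) =
\begin{cases}
  f_{i-1}(d), & 0 \le d < w_i,\\[2pt]
  \max\bigl(f_{i-1}(d),\, f_{i-1}(d - w_i) + p_i\bigr), & w_i \le d \le c.
\end{cases}
```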

The 0/1 knapsack problem is known to be NP-hard. Because of its high time complexity, speeding up its solution has attracted the attention of researchers (Boyer, El Baz, and Elkihel, 2012; Goldman and Trystram, 2004; El Baz and Elkihel, 2005; El Baz and M. E. D., 2006; Li and Liu, 2015; Hajarian, Shahbahrami, and Hoseini, 2016). To parallelize the DP approach for the 0/1 knapsack problem, Boyer and his colleagues recently proposed a compression mechanism and implemented it on CUDA GPUs (Boyer et al., 2012). Their method mainly uses the item selection table, stored as a two-dimensional (2D) array, to record whether an item is selected or not for each capacity. Assuming the maximum capacity of the knapsack is c and there are n items that may be put into the knapsack, the dimension of the 2D array is n × c. If n and c are large integers, the 2D array requires a large amount of global memory space. To address this problem, they used a data compression technique, called the row-compression method, to significantly reduce memory occupancy and limit the communication between the CPU and the GPU.
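
For reference, the following is a minimal sequential sketch of this DP with an explicit item selection table; it is not the authors' GPU code, and the names (solve_knapsack, w, p, sel) are ours. It illustrates why the selection table grows as n × c while the profit values themselves need only one row of length c + 1, which is what makes the table the natural target for compression.

```cuda
#include <stdlib.h>

/* Host-side sketch of the classic 0/1 knapsack DP (illustrative only).
 * f[d] holds the best profit for capacity d after processing the current
 * item; sel is the n-by-(c+1) item selection table described above
 * (entry (i, d) is 1 if item i is taken at capacity d). The caller is
 * assumed to pass sel zero-initialized. */
long solve_knapsack(int n, int c, const int *w, const long *p,
                    unsigned char *sel /* n * (c + 1) entries, zeroed */)
{
    long *f = (long *)calloc((size_t)c + 1, sizeof *f);  /* stage 0: all zero */
    for (int i = 0; i < n; ++i) {
        /* Walk capacities downward so f[] still holds stage i-1 values. */
        for (int d = c; d >= w[i]; --d) {
            long take = f[d - w[i]] + p[i];
            if (take > f[d]) {
                f[d] = take;
                sel[(size_t)i * (c + 1) + d] = 1;  /* record the decision */
            }
        }
    }
    long best = f[c];
    free(f);
    return best;
}
```

After the table is filled, the chosen items are recovered by scanning the rows backwards from capacity c, which is why the whole n × c table, and not just the final profit row, has to be kept or transferred back to the CPU.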

However, we observed that Boyer's method has two disadvantages. First, although the compression method effectively reduces the amount of data transmission and the memory occupancy, the data could be reduced even further if the item selection table were stored as a one-dimensional (1D) array instead of a 2D array. Second, Boyer's method invokes the same kernel once for each stage of the DP problem, so data in shared memory cannot be reused across stages. To address these problems, we apply a more condensed data structure and introduce inter-block synchronization to improve the degree of parallelism.
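
To illustrate the inter-block synchronization idea, the sketch below shows one common atomic-counter style software global barrier used inside a single persistent kernel, so that all DP stages are processed by one kernel launch instead of one launch per stage. This is only a hedged sketch under stated assumptions, not the implementation described in this article: the names (global_barrier, knapsack_persistent, f0, f1) are ours, the shared-memory reuse that motivates the technique is omitted for brevity, f0 is assumed to be zero-initialized before launch, and the barrier assumes every block of the grid is resident on the GPU at the same time (otherwise it deadlocks).

```cuda
__device__ unsigned int barrier_count = 0;   // global arrival counter

// Software barrier across all thread blocks of the grid (generation-counted,
// so the counter never needs to be reset between stages).
__device__ void global_barrier(unsigned int num_blocks, unsigned int generation)
{
    __threadfence();                     // make this block's global writes visible
    __syncthreads();                     // every thread of the block arrives first
    if (threadIdx.x == 0) {
        atomicAdd(&barrier_count, 1u);   // announce this block's arrival
        // spin until all blocks of this generation have arrived
        while (atomicAdd(&barrier_count, 0u) < generation * num_blocks) { }
    }
    __syncthreads();                     // release the whole block together
}

// One persistent kernel iterates over all DP stages; the barrier replaces the
// per-stage kernel relaunch of the baseline approach.
__global__ void knapsack_persistent(int n, int c, const int *w, const long *p,
                                    long *f0, long *f1)
{
    int d = blockIdx.x * blockDim.x + threadIdx.x;   // one capacity per thread
    for (int i = 0; i < n; ++i) {
        const long *fin  = (i % 2 == 0) ? f0 : f1;   // ping-pong the two DP rows
        long       *fout = (i % 2 == 0) ? f1 : f0;
        if (d <= c) {
            long best = fin[d];                      // do not take item i
            if (d >= w[i]) {
                long take = fin[d - w[i]] + p[i];    // take item i
                if (take > best) best = take;
            }
            fout[d] = best;
        }
        // All blocks must finish stage i before any block starts stage i + 1.
        global_barrier(gridDim.x, (unsigned int)(i + 1));
    }
}
```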
