Introduction
Graphics Processing Units (GPUs) have become a powerful tool for high-performance computing in recent years. As specialized processors, GPUs can manipulate large amounts of data efficiently through a highly parallel multi-core architecture. To provide low-level control of GPUs, NVIDIA introduced CUDA (Compute Unified Device Architecture), which eases the programming of various types of parallel applications (NVIDIA GPU, 2015). The CUDA platform, which supports programs written in C, C++, FORTRAN, OpenCL and other languages, gives direct access to the GPU's virtual instruction set and parallel computational elements through the execution of compute kernels (Abi-Chahla, 2015). Using CUDA GPUs to solve large-scale or complex optimization problems remains a significant challenge and an active research topic.
Dynamic programming (DP), also known as dynamic optimization, is a popular method for solving a particular class of complex problems. DP breaks a problem into a collection of smaller sub-problems, solves each of them only once, and saves the results for future reference so as to avoid recomputation. DP formulations can be classified into four categories: serial-monadic, non-serial-monadic, serial-polyadic and non-serial-polyadic (Grama, Gupta, Karypis, and Kumar, 2003). In our previous research, we studied how to use CUDA GPUs to accelerate the solution of non-serial-polyadic DP optimization problems, taking the Optimal Matrix Parenthesization (OMP) problem as a case study (Wu, Ke, Lin, and Feng, 2011). In this work, we focus on mapping serial-monadic DP problems onto NVIDIA GPUs. Since the 0/1 knapsack problem is one of the most representative problems in this category and often arises in many other application fields, we adopt it for this study.
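For concreteness, the serial-monadic DP formulation of the 0/1 knapsack problem can be stated with the standard recurrence below, where f(j, w) denotes the best total value achievable using the first j items with capacity w, and v_j and w_j are the value and weight of item j (the notation here is ours, chosen for illustration):

$$
f(j, w) =
\begin{cases}
f(j-1, w), & \text{if } w_j > w,\\
\max\bigl(f(j-1, w),\; f(j-1, w - w_j) + v_j\bigr), & \text{otherwise,}
\end{cases}
\qquad f(0, w) = 0 .
$$

Each stage j depends only on the immediately preceding stage, which is what makes the formulation serial-monadic and allows all capacity entries of a stage to be computed in parallel.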
The 0/1 knapsack problem is known to be NP-hard. Owing to its high time complexity, speeding up the solution of this problem has attracted the attention of many researchers (Boyer, El Baz, and Elkihel, 2012; Goldman and Trystram, 2004; El Baz and Elkihel, 2005; El Baz and M. E. D., 2006; Li and Liu, 2015; Hajarian, Shahbahrami, and Hoseini, 2016). To parallelize the DP approach for the 0/1 knapsack problem, Boyer and his colleagues recently proposed a compression mechanism and implemented it on CUDA GPUs (Boyer et al., 2012). Their method mainly uses an item selection table, saved as a 2-dimensional (2D) array, to record whether or not an item is selected for each capacity. Assuming the maximum capacity of the knapsack is c and there are n items to be put into the knapsack, the dimension of the 2D array is n × c. If n and c are large integers, the 2D array requires a large amount of global memory space. To address this problem, they used a data compression technique, called the row-compression method, to significantly reduce memory occupancy and limit the communication between the CPU and the GPU.
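As a point of reference, a straightforward sequential implementation that records such a selection table might look like the following sketch (the variable names and the compact one-byte entry per decision are our own choices; capacities are indexed from 0 to c here, so the table is stored with c + 1 columns):

```cuda
#include <cstdint>
#include <cstddef>
#include <vector>
#include <algorithm>

// Sequential 0/1 knapsack DP that records an n x (c+1) selection table.
// sel[j * (c + 1) + w] == 1 means item j is taken when the available
// capacity is w. For large n and c this table dominates memory usage,
// which is what the row-compression method is designed to mitigate.
long knapsack(int n, int c,
              const std::vector<int>& weight,
              const std::vector<long>& value,
              std::vector<std::int8_t>& sel)
{
    std::vector<long> prev(c + 1, 0), curr(c + 1, 0);   // stages j-1 and j
    sel.assign(std::size_t(n) * (c + 1), 0);

    for (int j = 0; j < n; ++j) {
        for (int w = 0; w <= c; ++w) {
            long keep = prev[w];
            long take = (weight[j] <= w) ? prev[w - weight[j]] + value[j] : -1;
            curr[w] = std::max(keep, take);
            sel[std::size_t(j) * (c + 1) + w] = (take > keep);
        }
        std::swap(prev, curr);                           // stage j becomes stage j-1
    }
    return prev[c];                                      // optimal value
}
```

Note that while the value table only ever needs two rows at a time, one selection row per item must be kept in order to reconstruct the chosen item set, which is why the selection table, rather than the value table, is the memory bottleneck.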
However, we observed that Boyer's method has two disadvantages. First, although the compression method effectively reduces the amount of data transferred and the memory occupancy, the data can be reduced even further if the item table is stored as a one-dimensional (1D) array instead of a 2D one. Second, Boyer's method invokes the same kernel once for each stage of the DP problem, so data held in shared memory cannot be reused across stages. To address these problems, we apply a more condensed data structure and employ inter-block synchronization to improve the degree of parallelism.
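To make the second point concrete, a per-stage kernel structure of the kind criticized above has roughly the following shape (a minimal sketch of our own, not Boyer et al.'s actual code): one thread handles one capacity, and the host launches the kernel once per item, which implicitly acts as a global barrier between stages but also discards whatever a block had staged in shared memory.

```cuda
// One DP stage: thread w computes the best value for capacity w using the
// results of the previous stage. Because each stage is a separate kernel
// launch, nothing computed for stage j-1 survives on-chip for stage j.
__global__ void knapsack_stage(const long* prev, long* curr,
                               unsigned char* sel_row,   // 1 = item taken
                               int item_weight, long item_value, int c)
{
    int w = blockIdx.x * blockDim.x + threadIdx.x;
    if (w > c) return;

    long keep = prev[w];
    long take = (item_weight <= w) ? prev[w - item_weight] + item_value : -1;
    curr[w]    = (take > keep) ? take : keep;
    sel_row[w] = (take > keep);
}

// Host-side driver (illustrative): one launch per item, swapping the two
// device rows between launches.
//
//   for (int j = 0; j < n; ++j) {
//       int threads = 256, blocks = (c + threads) / threads;
//       knapsack_stage<<<blocks, threads>>>(d_prev, d_curr,
//                                           d_sel + (std::size_t)j * (c + 1),
//                                           h_weight[j], h_value[j], c);
//       std::swap(d_prev, d_curr);
//   }
```

Avoiding one launch per stage requires a device-wide barrier inside a single kernel, which is the role of the inter-block synchronization mentioned above.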