A High Performance Parallel Ranking SVM with OpenCL on Multi-core and Many-core Platforms

Huming Zhu, Pei Li, Peng Zhang, Zheng Luo
Copyright © 2019 | Pages: 12
DOI: 10.4018/IJGHPC.2019010102

Abstract

A ranking support vector machine (RSVM) is a typical pairwise method of learning to rank, and it is effective in ranking problems. However, the training speed of RSVMs is not satisfactory, especially when solving large-scale ranking problems. In recent years, many-core processors (the graphics processing unit (GPU) and the Many Integrated Core (MIC) architecture) and multi-core processors have exhibited great strength in the parallel computing domain. With this hardware support, parallel programming has developed rapidly; Open Computing Language (OpenCL) and Open Multi-Processing (OpenMP) are two popular parallel programming interfaces. The authors present two high-performance parallel implementations of RSVM: an OpenCL version implemented on multi-core and many-core platforms, and an OpenMP version implemented on multi-core platforms. The experimental results show that the OpenCL parallel RSVM achieves considerable speedup on the Intel MIC 7110P, the NVIDIA Tesla K20M, and the Intel Xeon E5-2692v2, and it also shows good portability.

Introduction

Learning to rank (LTR) is a kind of supervised learning method that has proved effective in ranking problems. LTR has been widely applied in information retrieval (IR), data mining, text retrieval, anti-spam, keyword extraction, and search engines (McFee & Lanckriet, 2010; Pan & Luo, 2011; Li & Zhou, 2013). LTR has developed into a series of mature methods, which can be divided into three categories: 1) point-wise approaches (Li & Burges, 2007), 2) pair-wise approaches (Freund & Iyer, 2003; Matveeva & Burges, 2006), and 3) list-wise approaches (Cao & Qin, 2007; Xu & Huang, 2007). Ranking SVM (RSVM) is a typical pair-wise algorithm. Its main idea is to transform the ranking of target data points into a binary classification problem over ordered pairs of points, and to solve that problem with a support vector machine (SVM) (Joachims, 2002; Cao & Xu, 2006), as formalized below.
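For concreteness, the standard pairwise formulation (following Joachims, 2002) can be sketched as follows, where P denotes the set of preference pairs in which x_i should rank above x_j, w is the weight vector, and the ξ are slack variables:

```latex
\min_{\mathbf{w},\,\boldsymbol{\xi}} \;
\frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{(i,j) \in P} \xi_{ij}
\quad \text{s.t.} \quad
\mathbf{w}^{\top}(\mathbf{x}_{i} - \mathbf{x}_{j}) \ge 1 - \xi_{ij},
\qquad \xi_{ij} \ge 0 \;\; \text{for all } (i,j) \in P
```

Each difference vector x_i - x_j thus plays the role of a positive training example in an ordinary soft-margin SVM, which is exactly the binary classification view described above.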

RSVM has unique advantages in handling small-sample, nonlinear, and high-dimensional ranking problems, but, like most learning-to-rank methods, it suffers from a computational bottleneck in training, especially when solving large-scale ranking problems. Accelerating the training process of RSVM is therefore valuable work. As is well known, the most time-consuming part of SVM training is solving the quadratic programming (QP) problem. The sequential minimal optimization (SMO) algorithm is a popular and efficient QP solver that reduces the training time of SVM (Platt & Scholkopf, 1999). Using parallel programming to accelerate the SMO algorithm is thus a promising way to reduce the training time of RSVM; a sketch of its core step follows.
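To illustrate why SMO is attractive, each iteration optimizes a single pair of Lagrange multipliers in closed form. The following minimal C sketch of that analytic update is our own illustration, not the authors' code; the function name smo_update_pair and all variable names are hypothetical:

```c
#include <math.h>

/* One SMO pair update (illustrative sketch, not the article's implementation).
   ai, aj: the two Lagrange multipliers; yi, yj: labels in {-1, +1};
   Ei, Ej: cached prediction errors; Kii, Kjj, Kij: kernel values;
   C: the soft-margin regularization constant. */
void smo_update_pair(double *ai, double *aj, int yi, int yj,
                     double Ei, double Ej,
                     double Kii, double Kjj, double Kij, double C)
{
    double eta = Kii + Kjj - 2.0 * Kij;  /* curvature along the update direction */
    if (eta <= 0.0)
        return;                          /* skip degenerate pairs in this sketch */

    /* Box constraints [L, H] depend on whether the two labels agree. */
    double L, H;
    if (yi != yj) { L = fmax(0.0, *aj - *ai);     H = fmin(C, C + *aj - *ai); }
    else          { L = fmax(0.0, *ai + *aj - C); H = fmin(C, *ai + *aj);     }
    if (L >= H)
        return;

    double aj_new = *aj + yj * (Ei - Ej) / eta;  /* unconstrained optimum */
    aj_new = fmin(fmax(aj_new, L), H);           /* clip to the feasible box */

    *ai += yi * yj * (*aj - aj_new);  /* preserve the equality constraint */
    *aj  = aj_new;
}
```

Because everything outside this tiny update reduces to dense vector operations over all training pairs, the surrounding work is the natural target for parallelization.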

In recent years, the rapid development of multi-core and many-core platforms has set off an upsurge in parallel programming, which in turn has produced many parallel programming models (CUDA, OpenCL, OpenMP, etc.). Under these circumstances, parallel programming has become a routine tool for accelerating algorithms in a variety of domains, including image processing, scientific computing, and data mining (Andrade & Trabasso, 2017; Schmiescheka & Shamardin, 2017; Yu & Liu, 2015). Accelerating RSVM with parallel programming is therefore a practical approach (Ksiaâ & Ben Rejab, 2017). Several studies on accelerating SVM have been conducted in recent years; previous research focuses either on designing efficient SVM tools for multi-core CPUs and many-core coprocessors (MIC), or on developing parallel strategies for SVM on GPUs (Yan & Ren, 2015; You & Song, 2014; Kim & Kim, 2016). All of these efforts achieved considerable speedups compared with the serial algorithm. However, none of them has delivered a portable implementation of SVM across different platforms, or compared the performance of RSVM under different parallel programming models across a variety of computing platforms (Pennycook & Hammond, 2013; Zhang & Sinclair, 2013).
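To make the parallelization target concrete: after each SMO step, every optimality indicator f[k] can be refreshed independently, which maps naturally onto one OpenCL work-item per training example. The kernel below is a minimal sketch of this standard pattern, not necessarily the authors' implementation; the name update_f and its arguments are our assumptions. Here d_ai and d_aj denote the signed changes y_i * delta(alpha_i) and y_j * delta(alpha_j), and Ki, Kj are the precomputed kernel-matrix rows for the two updated examples:

```c
/* OpenCL C sketch: data-parallel refresh of the optimality indicators
   after one SMO pair update. Each work-item updates one entry of f. */
__kernel void update_f(__global float *f,
                       __global const float *Ki,
                       __global const float *Kj,
                       const float d_ai, const float d_aj,
                       const int n)
{
    int k = get_global_id(0);
    if (k < n)
        f[k] += d_ai * Ki[k] + d_aj * Kj[k];
}
```

Because OpenCL compiles kernel source at run time for the selected device, the same kernel source can run on a GPU, a MIC coprocessor, or a multi-core CPU under their respective OpenCL runtimes, which is exactly the portability property this article evaluates.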
