GPGPU as a Service: Providing GPU-Acceleration Services to Federated Cloud Systems

Javier Prades, Fernando Campos, Carlos Reaño, Federico Silla
Copyright: © 2016 | Pages: 33
DOI: 10.4018/978-1-5225-0153-4.ch010

Abstract

Current data centers leverage virtual machines (VMs) in order to efficiently use hardware resources. VMs allow reducing equipment acquisition costs as well as decreasing overall energy consumption. However, although VMs have noticeably evolved to make a smart use of the underlying hardware, the use of GPUs (Graphics Processing Units) for General Purpose computing (GPGPU) is still not efficiently supported. This concern might be addressed by remote GPU virtualization solutions, which may provide VMs with GPUs located in a remote node, detached from the host where the VMs are being executed. This chapter presents an in-depth analysis about how to provide GPU access to applications running inside VMs. This analysis is complemented with experimental results which show that the use of remote GPU virtualization is an effective mechanism to provide GPU access to applications with negligible overheads. Finally, the approach is presented in the context of cloud federations for providing GPGPU as a Service.

Introduction

Virtual machines (VMs) are a well-known and established technology, commonly used in today's data centers because of their demonstrated ability to provide economic savings. These savings stem from the fact that several VMs can be executed concurrently on the same host computer, enabling what is commonly known as server consolidation. In this way, different VMs share the hardware resources of that computer, increasing their overall utilization and thus amortizing more quickly the initial investment made in data center equipment. Furthermore, initial acquisition costs are also reduced, given that fewer computers are required to handle the same workload. Moreover, the use of VMs reduces the operating costs of data centers: the smaller equipment footprint consumes less energy, which in turn lowers cooling requirements. The resulting smaller electricity bill means the break-even point is reached earlier, helping to ensure the viability of the company that runs the data center. All these features have made VMs the building block of cloud systems, where VMs are dynamically created and destroyed on customer demand, so that computing resources are allocated only during actual utilization time. This allows the cloud platform provider to present VMs to its customers as dedicated resources while dramatically reducing the amount of real resources needed.

Given the many benefits of VMs, several VMMs (Virtual Machine Monitors) are currently available, such as VirtualBox (Oracle, 2015), VMware (VMware, 2015), KVM (KVM, 2015), and Xen (Xen, 2015). Indeed, the advantages provided by these virtualization technologies have motivated the main processor manufacturers, such as Intel and AMD, to include virtualization support in their designs (Semnanian, 2011), making VMs an even more efficient way to achieve server consolidation. Thus, although in the past VMs reduced application performance with respect to executions in the native (or real) domain, current VMMs, in conjunction with modern hardware, can host VMs in which applications execute without significant performance losses, as shown in (Felter, 2014). Furthermore, in addition to CPUs, other computer components have also been enhanced to efficiently support the use of VMs. This is the case, for example, of network adapters. In this regard, not only manufacturers at the highest performance end, such as Mellanox Technologies with its InfiniBand network cards, but also vendors of more mainstream technologies, such as Intel with its Ethernet NICs (Network Interface Controllers), have incorporated hardware-supported virtualization mechanisms into their designs. These mechanisms, known as Single Root I/O Virtualization (SR-IOV), essentially replicate the network card at the logical level (or, more accurately, create virtual instances of the card, named virtual functions), so that each of the virtual functions can be assigned to one of the virtual machines running on the host computer.
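On Linux, for instance, SR-IOV virtual functions are typically created through the kernel's sysfs interface exposed for each PCI device. The following sketch illustrates the idea; the PCI address used is a hypothetical example, and the actual address and supported VF count depend on the host's hardware.

```shell
# Sketch of creating SR-IOV virtual functions via the Linux sysfs interface.
# 0000:01:00.0 is a hypothetical PCI address for an SR-IOV-capable NIC;
# replace it with the address of a real device on the host (see lspci).
DEV=0000:01:00.0

# Query how many virtual functions the device supports.
cat /sys/bus/pci/devices/$DEV/sriov_totalvfs

# Create 4 virtual functions (requires root). Each VF then appears as a
# separate PCI device that can be passed through to a virtual machine.
echo 4 > /sys/bus/pci/devices/$DEV/sriov_numvfs

# List the newly created virtual functions associated with the device.
ls /sys/bus/pci/devices/$DEV/ | grep virtfn
```

Once created, each virtual function can be handed to a VM by the hypervisor (for example, via PCI passthrough in KVM or Xen), giving the guest near-native access to the network hardware.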
