Performance Evaluation of Xen, KVM, and Proxmox Hypervisors

Sultan Abdullah Algarni (Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia), Mohammad Rafi Ikbal (King Abdulaziz University, Jeddah, Saudi Arabia), Roobaea Alroobaea (College of Computers and Information Technology, Taif University, Ta'if, Saudi Arabia), Ahmed S. Ghiduk (College of Computers and Information Technology, Taif University, Ta'if, Saudi Arabia) and Farrukh Nadeem (Department of Information Systems, King Abdulaziz University, Jeddah, Saudi Arabia)
Copyright: © 2018 | Pages: 16
DOI: 10.4018/IJOSSP.2018040103

Abstract

Hardware virtualization plays a major role in IT infrastructure optimization in private data centers and on public cloud platforms. Despite recent advances in CPU architectures and hypervisors, overhead still exists because a virtualization layer sits between the guest operating system and the physical hardware. This is particularly true when multiple virtual guests compete for resources on the same physical host. Understanding the performance of the virtualization layer is crucial, as it has a major impact on the entire IT infrastructure. This article presents an extensive comparative study of the performance of three hypervisors: KVM, Xen, and Proxmox VE. The experiments showed that KVM delivers the best performance on most of the selected parameters, while Xen excels in file system and application performance. Proxmox delivered the best performance only in the CPU throughput sub-category. Based on these results, the article suggests the best-suited hypervisor for each targeted application.

1. Introduction

The advent of hardware virtualization technology has laid the foundation for many advanced technologies, such as cloud computing, IT infrastructure optimization and consolidation, disaster recovery, high availability and green computing. Hardware virtualization allows many guest operating systems to share the same hardware, as shown in Figure 1. This is done by installing a hypervisor on physical hardware and then installing guests on top of the hypervisor. The hypervisor manages all the physical resources of the host system, like CPU, memory, network and storage.

Hypervisors not only allow multiple guest operating systems to share the same physical hardware, but also abstract that hardware such that each guest operating system assumes it is running directly on physical hardware. This abstraction has many advantages: hypervisors simplify resource management, speed up deployment, use resources more efficiently and offer better control over infrastructure.

Hypervisors can create pseudo hardware resources for idle guests and reassign those resources to guests that are loaded and in need of them. This also enables many advanced features such as thin provisioning, virtual machine migration and high availability. However, the overhead introduced by this abstraction layer prevents virtualization from being used in some mission-critical and performance-demanding applications, such as high-performance computing.

Figure 1.

Typical virtualization architecture


In cloud computing, both private and public clouds leverage hypervisor features to deliver infrastructure as a service (IaaS), meeting end-user demands such as instant operating system deployment, storage allocation, and network management and configuration. These cloud infrastructures are easily scalable and flexible, as virtual servers can be created and customized in almost no time. IT infrastructure consolidation is another area where virtualization is applied: a single physical server hosts multiple diverse operating systems as per end-user requirements, maximizing resource utilization and reducing operating costs for power, cooling and rack space.

The success of a virtualized IT infrastructure depends on both the physical hardware and the hypervisor. Chip technology is under continuous development, so server hardware keeps improving over time. This paper performs a comprehensive performance evaluation and benchmarking of three main open-source hardware-assisted hypervisors, XenServer (2017), KVM (Kernel-based Virtual Machine) (Linux, 2017) and Proxmox VE (2017), across several areas: response efficiency, CPU cache and throughput, and memory, disk and application performance. The aim of this research is to enable researchers and end users to determine, based on hands-on experiments, which hypervisor delivers the best performance for their targeted applications.
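To illustrate the kind of CPU-throughput measurement such an evaluation builds on, the sketch below times a fixed amount of integer work and reports an operations-per-second figure. This is a minimal illustrative micro-benchmark only, not the benchmark suite used in this study (detailed in Section 4); running the same script inside guests on different hypervisors would give only a rough relative throughput comparison.

```python
import time


def cpu_throughput(iterations: int = 5_000_000) -> float:
    """Return loop iterations per second for a tight integer-arithmetic loop.

    Illustrative only: real hypervisor evaluations rely on established
    benchmark suites run inside each guest under controlled conditions.
    """
    acc = 0
    start = time.perf_counter()
    for i in range(iterations):
        acc += i * 3 % 7  # simple integer work so the loop is not empty
    elapsed = time.perf_counter() - start
    return iterations / elapsed


if __name__ == "__main__":
    rate = cpu_throughput()
    print(f"~{rate:,.0f} loop iterations/sec")
```

Repeating such a measurement several times and comparing medians across guests reduces the influence of transient load on the host, which matters when multiple guests share the same physical hardware.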

The remainder of this paper is structured as follows. The next section presents related work. Section 3 gives a brief overview of the selected hypervisors: Xen, KVM and Proxmox. Section 4 details the parameters and benchmarks selected for the experiment, the experimental setup, and the results of the evaluation of the three hypervisors. Section 5 presents the conclusion and directions for future work.

2. Related Work

There are numerous studies evaluating the performance of hypervisors from various perspectives. Many authors analyze complete hypervisor-based cloud computing platforms, such as Nadeem and Qaiser (2015), Paradowski, Liu, and Yuan (2014), Al-mukhtar (2014), Younge et al. (2011), and Graniszewski and Arciszewski (2016). Paradowski et al. (2014) compared cloud platforms running a common hypervisor under defined criteria, measuring the performance of resources such as CPU, RAM and hard disk size. Their results indicated that OpenStack outperformed CloudStack on the benchmark criteria.
