Hardware and Software Aspects of VM-Based Mobile-Cloud Offloading

Yang Song, Haoliang Wang, Tolga Soyata
DOI: 10.4018/978-1-4666-8662-5.ch008

Abstract

To allow mobile devices to run resource-intensive applications beyond their native capabilities, mobile-cloud offloading extends the resources of a mobile device by leveraging the cloud. In this chapter, we will survey the state-of-the-art in VM-based mobile-cloud offloading techniques, covering both their software and architectural aspects in detail. On the software side, we will review current improvements to the different layers of various virtualization systems, focusing in particular on mobile-cloud offloading. Approaches at different offloading granularities will be reviewed and their advantages and disadvantages will be discussed. On the architectural-support side, three platforms, Intel x86, ARM, and NVIDIA GPUs, will be reviewed in terms of the special architectural features they provide to accommodate virtualization and VM-based offloading.

Introduction

In the past decade, significant advances in semiconductor technology have dramatically improved the computational and storage capabilities of handheld mobile devices such as smartphones and tablets. These advances enabled mobile devices not only to access a vast amount of information instantaneously through fast communication networks, but also to perform ever more sophisticated computational tasks, such as face and speech recognition, object detection, and natural language processing (NLP), pervasively (Wang, Liu, & Soyata, 2014). However, the performance and user experience of these resource-intensive mobile augmented-reality applications are still constrained by the relatively low-performance CPUs and GPUs, as well as the limited memory and flash storage, of mobile devices. These resource constraints cannot easily be overcome, because mobile devices face size and battery-life limitations that mainstream desktop PCs do not. Therefore, many applications that are both latency-sensitive and compute-intensive, such as real-time face recognition, remain beyond the capabilities of today's smartphones and tablets.

To overcome these resource limitations and extend the capabilities of mobile devices to the point where they can run such resource-intensive applications, mobile-cloud computing (MCC) was introduced to leverage cloud resources. MCC enables mobile devices to utilize powerful cloud servers to store and access vast amounts of data and to process compute-intensive tasks. Mobile-cloud computing has been intensively investigated as the integration of cloud computing into the mobile environment. Utilizing cloud servers for storage is straightforward, and many popular applications already provide data backup and sharing between users and the cloud. Unlike storage, utilizing cloud servers for computation acceleration is not trivial. Computation offloading alleviates resource limitations on mobile devices and extends their capabilities by migrating partial or full computations (code, state, and data) to more resourceful computers. Rapid advances in wireless network connectivity and in mobile-device architectures in recent years have made computation offloading feasible. Offloading computation from mobile devices to cloud servers still faces several challenges, which are the focus of most research in this field. These challenges are summarized below:

  • What to Offload: The entire program cannot be offloaded for remote execution. Before offloading, the program needs to be partitioned in one of three ways: 1) manually by the programmer, 2) automatically by the compiler, or 3) at runtime. Manual partitioning puts the burden on the programmer but potentially results in lower overhead and more flexibility. In contrast, automated partitioning can offload an unmodified program, which is more convenient for users, but may incur higher performance overhead. Strategies such as code tagging and profile-based dynamic prediction can be applied to improve performance.

  • When to Offload: Applications may have different performance requirements, and mobile devices may have different capabilities and energy limitations. Offloading decisions therefore need to be made based on multiple criteria, such as 1) improving performance when the remaining energy is abundant, 2) saving energy when the remaining energy is low, and 3) adapting to network conditions at runtime. These decisions can be made statically and/or dynamically through profiling, which itself adds execution overhead (see the sketch after this list).

  • How to Offload: Emerging cloud computing technologies, combined with virtualization, provide a powerful, flexible, manageable, and secure platform for offloading. This has attracted a large body of research on VM (Virtual Machine)-based offloading approaches, which study offloading at different granularities, such as the OS level, the application/thread level, and the method level.
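To make the what/when/how criteria above concrete, the sketch below combines a simple time/energy decision model with a method-level remote call. It is a minimal illustration written for this summary, not code from the chapter: the endpoint http://cloud-server:8000, the detect_faces routine, and all speed, bandwidth, and power constants are assumptions standing in for values a real framework would measure at runtime through profiling.

```python
import xmlrpc.client

# All constants below are illustrative assumptions, not values from the chapter;
# a real offloading framework would obtain them from runtime profiling.
CLOUD_URL = "http://cloud-server:8000"   # hypothetical offloading server endpoint

def should_offload(cycles, data_bytes,
                   mobile_speed=1e9,     # assumed mobile CPU speed (cycles/s)
                   cloud_speed=10e9,     # assumed cloud CPU speed (cycles/s)
                   bandwidth=1e6,        # assumed uplink bandwidth (bytes/s)
                   p_compute=0.9,        # assumed power while computing locally (W)
                   p_transfer=1.3,       # assumed power while transmitting (W)
                   p_idle=0.3,           # assumed power while waiting for the cloud (W)
                   battery_low=False):
    """Decide whether remote execution is preferable ("when to offload")."""
    t_local = cycles / mobile_speed
    t_remote = data_bytes / bandwidth + cycles / cloud_speed
    e_local = p_compute * t_local
    e_remote = p_transfer * (data_bytes / bandwidth) + p_idle * (cycles / cloud_speed)
    if battery_low:
        return e_remote < e_local   # criterion 2: save energy when the battery is low
    return t_remote < t_local       # criterion 1: improve performance otherwise

def detect_faces_local(image_bytes):
    """Placeholder for the compute-intensive on-device implementation."""
    raise NotImplementedError

def detect_faces(image_bytes, battery_low=False):
    """Method-level offloading: ship the call to the cloud only when it pays off."""
    # Rough per-call cost estimates (assumed; normally supplied by a profiler).
    cycles, data_bytes = 5e9, len(image_bytes)
    if should_offload(cycles, data_bytes, battery_low=battery_low):
        try:
            proxy = xmlrpc.client.ServerProxy(CLOUD_URL, allow_none=True)
            return proxy.detect_faces(image_bytes)   # remote execution in the cloud
        except OSError:
            pass                                     # network failure: fall back locally
    return detect_faces_local(image_bytes)
```

In a VM-based system, the remote detect_faces call would execute inside a cloud-hosted VM (for example, a clone of the device's software stack); the same decision logic applies regardless of whether the offloading granularity is a method, a thread, or an entire OS image.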
