Lightweight Virtualization for Edge Computing

Fabio Diniz Rossi, Bruno Morais Neves de Castro, Matheus Breno Batista dos Santos
DOI: 10.4018/978-1-6684-5700-9.ch013

Abstract

In infrastructure as a service (IaaS), the edge computing paradigm proposes distributing storage and computing resources across network nodes to ensure swift access throughout a fog environment. The fragmentation of the edge platform landscape requires flexible and scalable approaches. Accordingly, recent works highlight lightweight virtualization, which lets any hardware share its resources among multiple applications without noticeable performance penalties. In this sense, this chapter conveys current concepts, techniques, and open challenges of lightweight virtualization for edge computing.

Introduction

Internet of Things (IoT) technologies surround the world, having a massive impact on numerous industries and fields of human activity (Vaquero & Rodero-Merino, 2014). Closely tied to sensor networks, the IoT's huge data-collection capacity has brought new insights into the world by connecting ordinary objects into a vast network of sensors.

From a market perspective, the IoT gives concrete meaning to ubiquitous connectivity for businesses, governments, and customers through its built-in management, monitoring, and analytics. This distinct, innovative potential of the IoT quickly embraced the exponential growth of comprehensive cloud computing services, proving its utility in automating ordinary tasks (Galante & Bona, 2012) and, consequently, generating enormous volumes of data to be interpreted online.

The exponential demand placed on centralized, cloud-based gateway infrastructure made the supply of real-time services inefficient: the delay barrier constrains the development of self-driving cars and other low-latency IoT initiatives. This obstacle motivated scientific approaches around the edge computing concept of an intermediate layer of resources distributed across network nodes, in which servers dispatch data processing back to constrained devices deployed at the network edge, providing a low-latency domain to the data generators (Shi, Cao, Zhang, Li, & Xu, 2016).

Figure 1 depicts the overall fog computing architecture, in which multiple geographically distributed sensors collect data from various sources and send it in real time to be processed. On the other side, a set of servers running cloud services must receive this massive amount of data and turn it into information. To bring cloud services closer to customers (here represented by IoT sensors), parts of the cloud services can be pre-processed on intermediate network devices, known as edge devices (Shi & Dustdar, 2016).

Figure 1. Fog architecture
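To make this pre-processing concrete, the sketch below shows how an edge node might aggregate a window of raw sensor readings and forward only compact summaries to the cloud tier. The window size, the statistics chosen, and the simulated samples are illustrative assumptions rather than part of the chapter.

# A minimal sketch of edge-side pre-processing: instead of forwarding every raw
# sensor reading to the cloud, the edge node aggregates a window of readings and
# sends only a compact summary upstream. Window size, field names, and the
# simulated samples are assumed values for illustration.
import json
import statistics
from typing import Dict, List

WINDOW_SIZE = 50  # readings aggregated per summary (assumed value)

def summarize(window: List[float]) -> Dict[str, float]:
    """Reduce a window of raw readings to a few statistics sent to the cloud."""
    return {
        "count": len(window),
        "mean": statistics.fmean(window),
        "min": min(window),
        "max": max(window),
    }

def process_stream(readings: List[float]) -> List[str]:
    """Group raw readings into windows and emit one JSON summary per window."""
    summaries = []
    for i in range(0, len(readings), WINDOW_SIZE):
        window = readings[i:i + WINDOW_SIZE]
        summaries.append(json.dumps(summarize(window)))
    return summaries

if __name__ == "__main__":
    # Simulated temperature samples from a local sensor.
    samples = [20.0 + (i % 7) * 0.3 for i in range(200)]
    for summary in process_stream(samples):
        print(summary)  # in practice, the edge node would forward this to the cloud tier

In this way, only a handful of summary records cross the wide-area link instead of every raw sample, which is precisely the latency and bandwidth relief the edge layer is meant to provide.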

Supported by these concepts, many earlier works highlight the use of virtualization on top of edge devices because of its potential elasticity. In contrast to the traditional hardware and software configuration of a dedicated server, virtualization allows multiple operating systems and applications to run on the same hardware. Lightweight virtualization is more adaptable and versatile than traditional hypervisor techniques: it does not depend on a guest OS base and works without virtualizing the hardware. This somewhat disruptive technology can lead to faster initialization, lower system overhead and, lastly, excellent energy efficiency on the nodes (Xavier, Neves, Rossi, Ferreto, Lange, & De Rose, 2013).
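As a minimal illustration of this lightweight approach, the sketch below starts a small containerized task on an edge node through the Docker SDK for Python. It assumes a Docker engine is already installed on the device; the image name and memory limit are illustrative choices, not recommendations from the chapter.

# A minimal sketch of lightweight virtualization on an edge node using the
# Docker SDK for Python (the `docker` package). It assumes a container engine
# is already running on the device; image and resource limits are illustrative.
import docker

def run_edge_task() -> None:
    client = docker.from_env()  # connect to the local container engine
    # Start a small container, cap its memory, and capture its output.
    # Unlike a full virtual machine, no guest OS is booted: the container
    # shares the host kernel, so startup takes milliseconds rather than minutes.
    output = client.containers.run(
        image="alpine:3.19",
        command=["echo", "hello from the edge"],
        mem_limit="64m",      # confine the workload to a slice of the node
        remove=True,          # clean up immediately, as a constrained node would
    )
    print(output.decode().strip())

if __name__ == "__main__":
    run_edge_task()

The absence of a booted guest OS is what makes this attractive on constrained edge hardware: the same node can start, stop, and discard many such containers with little overhead.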

Over the last decade, proposals to apply lightweight virtualization to edge-driven IoT have become popular as a way to provide network scalability, multi-tenancy, and privacy. A direct benefit of employing lightweight virtualization in the IoT edge domain is avoiding strict dependency on any given technology or use case (Morabito et al., 2018), which, in the heterogeneous IoT environment, provides the flexibility to connect to any device and distribute computational and database services around the edge network seamlessly.

Virtualization support enables Network Function Virtualization (NFV) on such devices. Based on this, the service provider can deploy various types of network and computing services in the form of micro clouds. Virtualization on edge devices minimizes investment because there is no need for a massive centralized infrastructure: services are instantiated according to consumer demand, and the core infrastructure does not need significant modifications to accommodate any service (Chiosi, Clarke, Willis, Reid, Feger, Bugenhagen, & Benitez, 2012; Morabito et al., 2018; Vaquero & Rodero-Merino, 2014).
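A hedged sketch of this demand-driven instantiation is given below: a simple scaler starts or stops containerized function replicas so that capacity tracks a measured request rate. The image, label, per-replica capacity, and the example request rate are assumptions introduced only for illustration.

# A minimal sketch of demand-driven service instantiation at the edge, in the
# spirit of NFV micro clouds: containerized function replicas are started or
# stopped so that capacity tracks consumer demand. Image name, label, threshold,
# and the demand figure are illustrative assumptions, not part of the chapter.
import math

import docker

IMAGE = "nginx:alpine"            # stand-in for a virtualized network function
LABELS = {"role": "edge-nfv"}     # tag replicas so the scaler can find them
REQUESTS_PER_REPLICA = 100        # assumed capacity of one replica (req/s)

def scale_to_demand(client: docker.DockerClient, request_rate: int) -> None:
    """Start or stop labelled replicas so capacity roughly matches demand."""
    desired = max(1, math.ceil(request_rate / REQUESTS_PER_REPLICA))
    running = client.containers.list(filters={"label": "role=edge-nfv"})
    if len(running) < desired:
        # Demand grew: instantiate additional replicas on the edge node.
        for _ in range(desired - len(running)):
            client.containers.run(IMAGE, detach=True, labels=LABELS)
    else:
        # Demand shrank: release the surplus replicas and their resources.
        for container in running[desired:]:
            container.stop()
            container.remove()

if __name__ == "__main__":
    client = docker.from_env()
    scale_to_demand(client, request_rate=250)  # e.g., 250 req/s -> 3 replicas

Because replicas are created and destroyed locally, on demand, the provider pays only for the edge resources actually in use and never touches the core infrastructure.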
