Fog Resource Allocation Through Machine Learning Algorithm

Gowri A. S., Shanthi Bala P.
Copyright: © 2020 | Pages: 41
DOI: 10.4018/978-1-7998-0194-8.ch001

Abstract

Internet of things (IoT) prevails in almost every piece of equipment of our daily lives, including healthcare units, industrial production, vehicles, banking, and insurance. Previously unconnected dumb objects have started communicating with each other, generating a voluminous amount of data at high velocity that is handled by the cloud. The requirements of IoT applications, such as heterogeneity, mobility support, and low latency, pose a big challenge to the cloud ecosystem. Hence, a decentralized, low-latency computing paradigm like fog computing, used along with the cloud, provides a better solution. The service quality of any computing model depends on resource management. The resources need to be agile by nature, which clearly marks the virtual container as the best choice. This chapter presents the federation of fog and cloud and the way it relates to IoT requirements. Further, the chapter deals with autonomic resource management with reinforcement learning (RL), which will carry the fog computing paradigm forward to the expectations of future generations.

Introduction

Innovations in the Internet of Things (IoT) open an era for a quality life. The exponential growth of IoT also raises issues in the management of the resources used for its computation and storage. One of the main challenges of IoT services is the low latency requirement (Ray, 2018). At present, IoT services are handled by either cloud or edge computing. Though edge computing provides a faster response, it neither permits sharing of data nor allows real-time analytics, due to the limited compute/storage power of edge devices. The cloud, on the other hand, in spite of its capability to support big data analytics, causes an intolerable delay in latency-constrained IoT environments: by the time the data/request reaches the cloud, the opportunity to act on the situation may have passed. In real-time applications like emergency health care or hazardous industrial processes, a small delay may cost a life or cause severe catastrophic damage. Therefore, cloud computing is not an advisable solution for IoT applications where low latency is of utmost priority. To overcome the limitations of edge and cloud, fog computing, as the middle layer, proves itself the best solution, especially for the delay-sensitive applications of enormous numbers of IoT devices.

Fog computing is a distributed computing paradigm that brings cloud services near to the edge network of the IoT devices where the data/request is generated (Iorga et al., 2018). The fog layer, as shown in Figure 1, performs preprocessing of the data collected from various edge devices and moves only the necessary information on to the cloud as and when required. The filtered and preprocessed data can later be used by the cloud for historical analytics in order to reveal better business insights (Aazam & Huh, 2015). Hence, for delay-sensitive applications, a timely solution is provided in the fog itself rather than overwhelming the communication network towards the cloud. By saving bandwidth, fog thus reduces unnecessary load on the cloud. Besides deciding which data/requests are to be forwarded to the cloud, resource management in the fog is one of the key tasks, as it has a significant impact on low latency and power conservation. A minimal sketch of this filter-and-forward behavior is given after Figure 1.

Figure 1. Fog computing paradigm (Aazam & Huh, 2015)
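
To make the fog layer's filter-and-forward role concrete, the following is a minimal, illustrative sketch in Python, not taken from the chapter, of a fog node that acts locally on latency-critical readings and ships only compact summaries to the cloud. All names, such as FogNode, LATENCY_CRITICAL_THRESHOLD, and cloud_uplink, are assumptions introduced for illustration.

    # Illustrative sketch only: a fog node that preprocesses raw IoT readings
    # locally and forwards only summarized data to the cloud. All names are
    # hypothetical, not taken from the chapter.
    from statistics import mean

    LATENCY_CRITICAL_THRESHOLD = 0.9   # hypothetical anomaly-score cut-off

    class FogNode:
        def __init__(self, cloud_uplink):
            self.cloud_uplink = cloud_uplink   # callable that ships data to the cloud
            self.buffer = []

        def ingest(self, reading):
            """Handle one edge-device reading: act locally if urgent, else buffer it."""
            if reading["anomaly_score"] >= LATENCY_CRITICAL_THRESHOLD:
                self.actuate_locally(reading)   # low-latency response in the fog itself
            self.buffer.append(reading["value"])

        def flush_to_cloud(self):
            """Send only a compact summary upstream, saving bandwidth."""
            if self.buffer:
                summary = {"count": len(self.buffer), "mean": mean(self.buffer)}
                self.cloud_uplink(summary)      # historical analytics happen in the cloud
                self.buffer.clear()

        def actuate_locally(self, reading):
            print(f"Immediate local action for device {reading['device_id']}")

    # Example usage with a stand-in uplink:
    node = FogNode(cloud_uplink=print)
    node.ingest({"device_id": "sensor-7", "value": 41.2, "anomaly_score": 0.95})
    node.flush_to_cloud()

The design point of the sketch is simply that latency-critical decisions never leave the fog node, while the cloud receives only aggregated data for later analytics.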

The existing programming paradigms and management tools are inadequate to handle the complexity, heterogeneity, and scalability requirements of IoT devices dynamically. Beyond automated software-defined solutions, an autonomic system that can manage resource allocation by itself, with intelligence, is required. Machine learning techniques like reinforcement learning (RL) are well suited to achieving efficient resource management in the fog (Dutreilh, Kirgizov, Melekhova, Malenfant, & Rivierre, 2011). RL-equipped fog control nodes can solve the resource management issues of the fog while maintaining a high level of quality of service (QoS). A sketch of such an RL loop is given below.
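
As an illustration of how an RL-equipped fog control node might learn an allocation policy, the following is a minimal Q-learning sketch in Python. The discretized state (a load level), the action set of container counts, and the reward function are hypothetical assumptions for illustration; they are not the chapter's exact algorithm, which is described later with code.

    # Minimal Q-learning sketch, assuming a discretized load level as the state
    # and a small set of container counts as actions. Illustrative only.
    import random
    from collections import defaultdict

    ACTIONS = [1, 2, 4, 8]             # hypothetical choices: containers to allocate
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def choose_action(state):
        """Epsilon-greedy policy over the Q-table."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(q_table[state], key=q_table[state].get)

    def update(state, action, reward, next_state):
        """Standard Q-learning update rule."""
        best_next = max(q_table[next_state].values())
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

    def reward_fn(latency_ms, containers_used):
        """Hypothetical reward: favor low latency, penalize over-provisioning."""
        return -latency_ms - 0.5 * containers_used

    # Each control cycle the fog node observes its load level, picks an allocation,
    # measures the resulting latency, and reinforces the Q-table accordingly.

Over repeated control cycles, the Q-table converges toward allocations that balance response time against the number of containers kept running, which is the essence of autonomic, learning-driven resource management in the fog.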

The chapter is organized as follows. The next section discusses the various works related to resource allocation in fog computing. The section on the significance of fog computing then explains why fog is necessary for IoT applications. Following that, the various service models, deployment models, characteristics, and benefits of fog computing are described. The chapter then gives basic information about virtual containers, which are considered the compute/storage resource of the micro datacenters in fog computing, before covering the fundamentals of autonomic computing and reinforcement learning in the context of resource provisioning in the fog. Next, the chapter introduces the proposed fog-cloud layered architecture, which elaborates the hierarchy in which devices are layered from end device to fog layer to cloud. The proposed work, autonomic resource management combined with reinforcement learning, is then described in detail with code. Finally, the security and privacy issues of fog computing are discussed, followed by a conclusion and future enhancements.

Related Work

Some of the research works carried out in the fields of autonomic resource allocation, reinforcement learning for resource provisioning, and the federation of fog and cloud for efficient resource management are discussed in this section. The section highlights the similarities and conflicts between the proposed work of this chapter and the existing works in the area of fog resource management.
