Enhancing Security in a Big Stream Cloud Architecture for the Internet of Things Through Blockchain


Luca Davoli, Laura Belli, Gianluigi Ferrari
DOI: 10.4018/978-1-7998-5351-0.ch068

Abstract

The Internet of Things (IoT) paradigm is driving the evolution of our environment toward new, enriched spaces in most areas of modern living, such as digital health, smart cities, and smart agriculture. Several IoT applications also have real-time and low-latency requirements and must rely on specific architectures. The authors refer to the paradigm that best fits such IoT scenarios as “Big Stream,” since it explicitly accounts for real-time constraints. Moreover, the blockchain concept has drawn attention as a next-generation technology, based on the authentication of peers that share encrypted data and on the generation of hash values. The blockchain can also be applied in conjunction with the Cloud Computing and IoT paradigms, since it avoids the involvement of third parties and operates in a broker-free way. This chapter analyzes the mechanisms that can be adopted to secure Big Stream data in a graph-based platform, so that data are delivered to consumers efficiently, securely, and with low latency, and describes the refinements required to employ federation-based and blockchain paradigms.
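As a rough illustration of the hash-chaining idea behind the blockchain concept mentioned above, the following minimal sketch links each record to the digest of its predecessor, so that tampering with any stored item invalidates every later block. All names and fields are illustrative assumptions and are not taken from the chapter's architecture.

```python
# Minimal, illustrative hash-chain sketch (hypothetical names/fields,
# not the chapter's actual platform).
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def append_block(chain: list, data: dict) -> dict:
    """Append a new block that links back to the hash of the previous one."""
    previous = chain[-1] if chain else None
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "data": data,
        "prev_hash": block_hash(previous) if previous else "0" * 64,
    }
    chain.append(block)
    return block


def verify_chain(chain: list) -> bool:
    """Check that every block still references its predecessor's hash."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True


if __name__ == "__main__":
    chain = []
    append_block(chain, {"sensor": "temp-01", "value": 21.5})
    append_block(chain, {"sensor": "temp-01", "value": 21.7})
    print("chain valid:", verify_chain(chain))  # True unless a block is altered
```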
Chapter Preview

Introduction

Over the last 15 years, the forecast of a worldwide network of pervasively deployed and interconnected heterogeneous networks has become a reality. The Internet of Things (IoT) now involves billions of different devices, connected in an Internet-like structure, and has profoundly changed the way in which people and things interact in many aspects of modern living. The actors involved in IoT scenarios have extremely heterogeneous characteristics, in terms of energy supply and consumption, processing and communication capabilities, and availability and mobility, spanning from Smart Objects (SOs), i.e., constrained devices equipped with sensors or actuators, smartphones, wearable devices, and other personal devices, to Internet hosts and the Cloud.

In order to allow heterogeneous nodes to communicate efficiently with each other and with existing Internet actors, shared and interoperable communication mechanisms and protocols are currently being defined and standardized. The most prominent driver for interoperability in the IoT is the Internet Protocol (IP), namely its version 6 (IPv6), which uses 128-bit addresses. An IP-based IoT can interoperate with all existing Internet nodes without any additional effort. Standardization bodies, such as the Internet Engineering Task Force (IETF), and several research projects are contributing to the definition of mechanisms able to bring IP to SOs (e.g., the 6LoWPAN adaptation layer (Kim, Kaspar, & Vasseur, 2012)), motivated by the need to adapt higher-layer protocols (e.g., application-layer protocols) to constrained environments. As a result, IoT networks are expected to generate huge amounts of traffic, whose data can subsequently be processed and used to build useful services for end users. The Cloud has thus become the natural collection environment for data sensed by IoT nodes, thanks to its cost-effectiveness, scalability, and robustness. Figure 1 shows the hierarchy of levels involved in data collection, processing, and distribution in a typical IoT scenario.

Figure 1.

Actors involved in an IoT and Cloud platform: data generated by IoT networks are sent to the Cloud, where services are provided to consumers. An intermediate level performs local operations, such as data collection, processing, and distribution.


Sensed data are collected by the SOs composing the IoT networks and sent uplink to the Cloud, which operates as a collection entity and service provider. In some cases, intermediate processing entities, identified as Local Network Collectors (LNCs), can perform preliminary tasks on the traffic before sending data uplink, such as protocol translation, data aggregation, and temporary data storage. This layered model is extremely general and can be applied to several IoT scenarios in which, for example, the LNC functionalities are implemented by proxies or border routers.
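To make the LNC role concrete, the following is a minimal sketch of a collector that buffers raw readings from Smart Objects, aggregates them locally, and forwards a compact summary uplink. Class and method names are hypothetical, and the uplink transport is stubbed out rather than bound to a real protocol.

```python
# Hypothetical sketch of a Local Network Collector (LNC): temporary storage,
# local aggregation, and uplink forwarding of a compact summary.
import statistics
import time
from collections import defaultdict


class LocalNetworkCollector:
    def __init__(self, batch_size: int = 5):
        self.batch_size = batch_size
        self.buffer = defaultdict(list)  # temporary storage, keyed by sensor id

    def collect(self, sensor_id: str, value: float) -> None:
        """Receive a raw reading from a Smart Object and store it locally."""
        self.buffer[sensor_id].append(value)
        if len(self.buffer[sensor_id]) >= self.batch_size:
            self._send_uplink(sensor_id)

    def _send_uplink(self, sensor_id: str) -> None:
        """Aggregate buffered readings and forward a summary to the Cloud."""
        values = self.buffer.pop(sensor_id)
        summary = {
            "sensor": sensor_id,
            "count": len(values),
            "mean": statistics.mean(values),
            "max": max(values),
            "timestamp": time.time(),
        }
        # In a real deployment this would be an HTTP/CoAP/MQTT call to the Cloud.
        print("uplink ->", summary)


if __name__ == "__main__":
    lnc = LocalNetworkCollector(batch_size=3)
    for reading in [21.0, 21.4, 22.1, 21.9]:
        lnc.collect("temp-01", reading)
```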

Several relevant IoT application environments (e.g., industrial monitoring, automation, and transportation) often require real-time performance guarantees or, at least, a predictable latency. Moreover, the performance requirements (e.g., in terms of data sources) may change, even abruptly. The potentially large number of IoT nodes, acting as data sources and generating a high rate of incoming data, together with the low-latency constraints, calls for innovative Cloud architectures able to efficiently handle such a massive amount of information.

A possible solution is given by Big Data approaches, which have been developed in recent years and have become popular with the evolution of online and social/crowd services; they address the need to process extremely large amounts of heterogeneous data, coming from very diverse sources, for various purposes. However, these techniques typically focus on the data themselves and have an intrinsic inertia, as they are based on batch processing, rather than providing real-time processing and dispatching (Zaslavsky, Perera, & Georgakopoulos, 2013; Leavitt, 2013). For this reason, Big Data approaches might not be the right solution for managing the dynamicity of IoT scenarios with real-time processing requirements. In order to better meet these requirements, it is possible to shift from the Big Data paradigm to the “Big Stream” paradigm.
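The distinction can be sketched as follows: a batch-oriented (Big Data style) component stores records and processes them later in bulk, while a stream-oriented (Big Stream style) dispatcher pushes each record to registered consumers as soon as it arrives. All names below are illustrative assumptions and do not refer to the platform described in the chapter.

```python
# Illustrative contrast between batch (Big Data) and push-based (Big Stream)
# handling of incoming records; names are hypothetical.
from typing import Callable, Dict, List


class BatchProcessor:
    """Big Data style: accumulate records, then process them in bulk."""

    def __init__(self):
        self.store: List[dict] = []

    def ingest(self, record: dict) -> None:
        self.store.append(record)          # latency accumulates until the next run

    def run_batch(self) -> float:
        values = [r["value"] for r in self.store]
        self.store.clear()
        return sum(values) / len(values) if values else 0.0


class StreamDispatcher:
    """Big Stream style: dispatch each record to listeners on arrival."""

    def __init__(self):
        self.listeners: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self.listeners.setdefault(topic, []).append(callback)

    def publish(self, topic: str, record: dict) -> None:
        for callback in self.listeners.get(topic, []):
            callback(record)               # consumers see the record immediately


if __name__ == "__main__":
    dispatcher = StreamDispatcher()
    dispatcher.subscribe("temperature", lambda r: print("real-time consumer got", r))
    dispatcher.publish("temperature", {"sensor": "temp-01", "value": 21.5})
```

In the listener-based sketch, latency is bounded by the dispatch of a single record rather than by the batch period, which is the property Big Stream architectures aim to preserve.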

While both paradigms deal with massive amounts of data, the Big Data and Big Stream paradigms differ in the following aspects.
