Service-Centric Networking


David Griffin, Miguel Rio, Pieter Simoens, Piet Smet, Frederik Vandeputte, Luc Vermoesen, Dariusz Bursztynowski, Folker Schamel, Michael Franke
DOI: 10.4018/978-1-4666-8371-6.ch004

Abstract

This chapter introduces a new paradigm for service-centric networking. Building upon recent proposals in the area of information-centric networking, the chapter discusses a similar treatment of services, in which networked software functions, rather than content, are dynamically deployed, replicated and invoked. Service-centric networking provides the mechanisms required to deploy replicated service instances across highly distributed networked cloud infrastructures and to route client requests to the closest instance, offering more efficient use of the network infrastructure, improved QoS and new business opportunities for application and service providers.
Chapter Preview

Introduction

There is an emerging trend for more demanding services to be deployed across the Internet and in the cloud. Applications such as virtual and augmented reality, vehicle telematics, self-navigating cars/drones and multi-user ultra-high-definition telepresence are envisioned beyond the social and office-based applications, such as email and photo sharing, that are common in today’s cloud computing world. While future deployments such as 5G and all-optical networks aim to reduce network latency to below 5ms and increase throughput by up to 1000 times (Huawei, 2013) over both fixed and mobile networks, new techniques are required for efficiently deploying replicated services close to users and for selecting between them at request/invocation time. Deploying such highly demanding services, and providing the network capabilities to access them, requires a focused approach that combines service management and orchestration with dynamic service resolution and routing mechanisms, leading to service-centric networking, the subject of this chapter. The focus of this chapter is how to deploy low-latency, high-bandwidth services on today’s IP infrastructures, but as the next generation of wireless and optical networks is rolled out, service-centric networking techniques for the localisation of processing nodes and the selection of running instances will become even more crucial for supporting the vision of the tactile Internet (Fettweis, 2014).

The Internet was originally conceived as a data communications network to interconnect end-hosts: user terminals and servers. The focus was on delivering data between end points in the most efficient manner. All data was treated in the same way: as the payload of packets addressed for delivery to a specific end-point. In recent years, since the development of the world-wide web, the majority of traffic on the Internet has originated from users retrieving content. The observation that many users were downloading the same content led to the development of content delivery/distribution networks (CDNs). CDNs cache content closer to the users to reduce inter-provider traffic, and improve users’ quality of experience by reducing server congestion through load-balancing requests over multiple content replicas. In a content-centric world, communications are no longer based around interconnecting end-points but are concerned with what is to be retrieved rather than where it is located. CDNs achieve this by building overlays on top of the network layer, but recent research in the domain of Information-Centric Networking has taken matters a stage further by routing requests for named content to caches that are dynamically maintained by the network nodes themselves, rather than relying on predefined content locations populated a priori based on predicted demand. Such an approach represents a basic paradigm shift for the Internet.

Although content/information-centric networking has received significant attention recently, the approach, like classical CDNs, was originally designed for the delivery of non-interactive content, and additional means are needed to support distributed interactive applications. Cloud computing, on the other hand, has been developed to deliver interactive applications and services in a scalable manner, coping with elasticity of demand for computing resources by exploiting economies of scale in multi-tenancy data centres. However, today’s typical cloud-based applications tend to be deployed in a centralised manner and therefore struggle to deliver the performance required by more demanding, interactive and real-time services. Furthermore, deploying cloud resources in highly distributed network locations presents a much more complex problem than that faced in individual data centres or cloud infrastructures with only a handful of geographical locations.

Key Terms in this Chapter

Service-Centric Networking: a new networking architecture that aims to support the efficient provisioning, discovery and execution of service components distributed over the Internet, combining network-level and service-level information.

Session Slot: a service and application independent metric identifying the number of simultaneous sessions that can be handled by a specific service instance or group of instances in an execution zone.
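As an illustration only, since the chapter does not prescribe an implementation, a session slot can be modelled as a simple per-zone capacity counter; all names below are hypothetical:

from dataclasses import dataclass

@dataclass
class SessionSlots:
    """Service- and application-independent capacity metric for one
    execution zone (illustrative sketch, not an API from the chapter)."""
    capacity: int    # sessions the zone's instances can handle in total
    in_use: int = 0  # sessions currently active

    def free(self) -> int:
        return self.capacity - self.in_use

    def admit(self) -> bool:
        # Admit a new session only if a free slot remains.
        if self.free() > 0:
            self.in_use += 1
            return True
        return False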

Orchestration Domain: an orchestration domain consists of an orchestration entity and one or more execution zones where the orchestrator may deploy and execute service instances. An orchestration domain may own the computing resources forming the execution zone or it may contract resources from a third party, e.g. a public cloud provider.

Service Resolution and Routing: the system responsible for maintaining and managing service routing information so that queries and invocation requests, from users and from other service instances, can be resolved or forwarded along forwarding paths to execution zones containing available running instances of the specified service.
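A minimal sketch of the resolution step described above, assuming the resolver already knows which execution zones hold running instances of each service identifier, how many session slots are free in each zone, and an estimated network distance to each zone (all structures and names are illustrative assumptions, not the chapter's protocol):

def resolve(service_id, routing_table, free_slots, distance_to):
    """Pick the closest execution zone holding a running instance of
    `service_id` with at least one free session slot (sketch only).

    routing_table: dict service_id -> set of execution zone ids
    free_slots:    dict zone id -> free session slots
    distance_to:   dict zone id -> estimated network distance from the client
    """
    candidates = [zone for zone in routing_table.get(service_id, set())
                  if free_slots.get(zone, 0) > 0]
    if not candidates:
        return None  # no available instance: forward or reject the request
    return min(candidates, key=lambda zone: distance_to[zone])

In a real deployment the routing information would be distributed across resolution nodes rather than held in a single table; the sketch only shows the selection logic.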

Service Instance: a single instantiation of an atomic or composite service running in an execution zone and identified by a service identifier. There will usually be multiple instances of the same service running in the same execution zone and across many execution zones, all identified by the same service identifier.

Service Placement: the mechanism for selecting appropriate execution zones to instantiate service instances.
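The chapter does not define a single placement algorithm; purely as a sketch, a greedy heuristic that serves each demand region from its lowest-latency execution zone might look as follows (all parameters and names are invented for illustration):

def place(demand_per_region, zones, latency, slots_per_instance):
    """Greedy placement sketch: for each demand region, pick the
    lowest-latency candidate zone and size the deployment there.

    demand_per_region:  dict region -> expected concurrent sessions
    zones:              iterable of candidate execution zone ids
    latency:            dict (region, zone) -> estimated latency
    slots_per_instance: session slots one service instance provides
    """
    instances = {}  # zone id -> number of instances to deploy
    for region, sessions in demand_per_region.items():
        best = min(zones, key=lambda zone: latency[(region, zone)])
        needed = -(-sessions // slots_per_instance)  # ceiling division
        instances[best] = instances.get(best, 0) + needed
    return instances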

Execution Point: the specific physical or virtual environment in which a service instance is deployed within an execution zone.

Execution Zone: a logical representation of a collection of physical computational resources in a specific location such as a data centre.

Evaluator Service: the computational entity running in an execution zone that is able to score and rate the execution zone, and the execution points within it, on various aspects of the service manifest, for example the availability of specialised hardware, network and computational metrics for QoS estimation, and the measured performance to nearby service instances.
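A sketch of how an evaluator might score one execution point against a service manifest, assuming the manifest simply lists required hardware features and upper bounds on measured metrics; the manifest format and scoring rule are invented here for illustration:

def score_execution_point(point, manifest):
    """Return a score for an execution point, or None if it cannot host
    the service at all (illustrative only, not the chapter's interface).

    point:    dict with 'hardware' (set of features) and 'metrics'
              (dict metric name -> measured value, lower is better)
    manifest: dict with 'required_hardware' (set of features) and
              'max_metrics' (dict metric name -> acceptable upper bound)
    """
    # Hard constraint: every required hardware feature must be present.
    if not manifest["required_hardware"] <= point["hardware"]:
        return None
    # Soft score: accumulated headroom below each metric bound.
    score = 0.0
    for metric, bound in manifest["max_metrics"].items():
        measured = point["metrics"].get(metric)
        if measured is None or measured > bound:
            return None
        score += (bound - measured) / bound
    return score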

Service Orchestrator: the entity responsible for service management functions including service registration, service placement, service lifecycle management and monitoring.
