New IP: Enabling the Next Wave of Networking Innovation

Richard Li, Uma S. Chunduri, Alexander Clemm, Lijun Dong
Copyright: © 2021 | Pages: 42
DOI: 10.4018/978-1-7998-7646-5.ch001

Abstract

Industrial machine-type communications (e.g., the industrial internet), emerging applications such as holographic-type communications, IP mobile backhaul transport for 5G/B5G (beyond 5G) supporting ultra-reliable low-latency communications and massive machine-type communications, emerging industry verticals such as driverless vehicles, and future networking use cases as called out by ITU-T’s Focus Group on Network 2030 all require new networking capabilities and services. This chapter introduces “New IP,” a new data communication protocol that extends packet networking with new capabilities to support future applications that go beyond the capabilities provided by the Internet Protocol (IP) today. New IP is designed to allow the user to specify requirements, such as expected service levels for key performance indicators (KPIs), and other guidance for packet processing and forwarding purposes. New IP is designed to interoperate with existing networks in a straightforward manner and thus to facilitate incremental deployment that leverages existing investment.

Introduction

Current Internet technologies, with the Internet Protocol (IP) as their bedrock, have been a tremendous success. Originally devised as a way to interconnect computers over long distances in a highly resilient, decentralized manner that would be able to withstand outages of links and nodes, IP has evolved in breathtaking fashion ever since. Today, the Internet interconnects not just a handful of computers but billions of devices (including “Things”), with an enormous volume of traffic growing at a breakneck pace. In addition, the Internet is able to support services that did not exist at the time of its conception – such as the World Wide Web or social media – and that would have been deemed impossible to support at the time – such as “real time” services like voice and interactive video. Throughout its evolution, the Internet has also been a driving force enabling network convergence, allowing previously separate custom networks to be integrated into a single infrastructure providing significant economies of scale. This made networks much more economical to operate, further contributing to their increasing pervasiveness.

It is thus fair to say that the Internet has come a long way, morphing from a niche novelty into a societal backbone that supports an ever-expanding set of services. Internet technology centers on IP and the TCP/IP protocol suite in the data plane, as well as related mechanisms such as Quality of Service (QoS). While Internet technology has seen many extensions, at the core of its success are still its original principles based on best-effort, hop-by-hop forwarding. These principles do not provide any hard guarantees about whether packets will arrive within a certain time, in their original order, or even at all, but they offer simplicity of implementation and resilience to perturbations experienced in the network. This does not mean that there have not been plenty of advances. Switching technologies like MPLS enforce homogeneous traffic forwarding policies for each and every packet of a given flow. Traffic Engineering (TE) techniques like MPLS-TE with a Path Computation Element (PCE) provide some guarantees about the compliance of a path with traffic requirements, along with fast-reroute capabilities at the network layer. However, none of these technologies is able to provide actual end-to-end latency guarantees or, more generally, Service Level Agreement (SLA)-aware connectivity. In general, it is left to other layers to deal with the fallout of, for example, lost packets.

The question, however, is whether the same principles can be applied indefinitely, supporting any and all technical requirements of an ever-growing array of services and converging ever more networks until all networking infrastructure is consolidated, or whether foundational barriers will eventually be encountered that prove hard, if not impossible, to overcome. As laid out later in this chapter, in recent years a number of new requirements and networking use cases have begun to appear that cannot be readily supported by today’s Internet technologies. This is because they impose requirements that cannot be addressed by simply layering new capabilities on top, even assuming that the issue of Internet ossification can be overcome. Note that Internet ossification refers to the increasing inability to add new services, capabilities, and functions to the existing Internet, due to a variety of factors ranging from the adversarial impact of middleboxes to standardization hurdles. The fact that many of these use cases open up the possibility of a whole new wave of innovation in networked applications and connected industries adds to the pressure to answer the question of how the required capabilities might be supported in the future.

Key Terms in this Chapter

Qualitative Communications (QC): A communication mechanism that structures packet payloads into chunks of different priority, allowing the selective dropping of chunks instead of whole packets when congestion is encountered. This mechanism enables a form of dynamic compression that is useful for latency-sensitive applications.
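
As a rough illustration of the idea, the minimal Python sketch below models a payload as priority-tagged chunks and sheds the least important chunks when the available byte budget is exceeded. The class names, priority convention, and drop policy are illustrative assumptions for this sketch, not part of any published New IP specification.

```python
# Illustrative sketch only: chunk layout, priority convention, and drop policy
# are assumptions, not the actual New IP wire format.
from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    priority: int      # lower number = more important (assumed convention)
    payload: bytes

@dataclass
class QualitativePacket:
    header: bytes
    chunks: List[Chunk]

def shed_load(packet: QualitativePacket, max_bytes: int) -> QualitativePacket:
    """Drop the least important chunks until the packet fits the budget,
    instead of discarding the packet as a whole."""
    kept: List[Chunk] = []
    used = len(packet.header)
    for chunk in sorted(packet.chunks, key=lambda c: c.priority):
        if used + len(chunk.payload) <= max_bytes:
            kept.append(chunk)
            used += len(chunk.payload)
    return QualitativePacket(packet.header, kept)

# Example: the base-layer chunk survives congestion, the bulky enhancement chunk is shed.
pkt = QualitativePacket(b"\x00" * 20,
                        [Chunk(0, b"base-layer"), Chunk(2, b"enhancement" * 50)])
print(len(shed_load(pkt, 128).chunks))  # -> 1
```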

Contract: A construct in a ‘New IP’ packet header that carries additional guidance for intermediate nodes along a path on how the packet should be processed. For example, a contract might contain a Service Level Objective for end-to-end latency for an in-time guarantee, or a conditional directive that advises a forwarding decision depending on dynamic circumstances.
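
The following minimal Python sketch conveys the notion of an in-packet contract carrying a latency SLO and a conditional directive. The field names, the per-hop latency accounting, and the "notify_sender" directive are hypothetical, chosen only to show how intermediate nodes might act on such guidance.

```python
# Illustrative sketch only: field names and per-hop evaluation logic are
# assumptions used to convey the contract concept, not the New IP header format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LatencySLO:
    budget_ms: float          # end-to-end latency objective carried in the packet

@dataclass
class Contract:
    slo: LatencySLO
    elapsed_ms: float = 0.0                # latency accumulated along the path
    on_violation: str = "notify_sender"    # hypothetical conditional directive

def process_at_node(contract: Contract, node_delay_ms: float) -> Optional[str]:
    """Per-hop handling: charge this node's delay against the budget and return
    the directive to execute if the in-time guarantee can no longer be met."""
    contract.elapsed_ms += node_delay_ms
    if contract.elapsed_ms > contract.slo.budget_ms:
        return contract.on_violation
    return None

c = Contract(LatencySLO(budget_ms=5.0))
print(process_at_node(c, 2.0))   # None: still within budget
print(process_at_node(c, 4.0))   # 'notify_sender': budget exceeded, apply directive
```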

High-Precision Communications (HPC): Communication services that are able to deliver on stringent service level guarantees (such as packet loss, end-to-end latency, and throughput) with a very high degree of accuracy, making them suitable for mission-critical applications that have no tolerance for degradations in service levels.

New IP: A new network layer protocol and framework that is characterized by its built-in support for Free Choice Addressing, Qualitative Communications, and Contracts.

Operational Technology (OT): Technology, including networking technology, that addresses the needs of industrial applications, such as the monitoring and control of industrial equipment. Such applications require very tight performance guarantees. OT is seen as a counterpart to IT (Information Technology), which is used by enterprises as part of their back-office infrastructure and has less critical performance needs.

Service-Level Objective (SLO): A target value for a service level of a networking service that must be met. For example, an SLO is used to characterize the requirements of a guaranteed service, such as an in-time service.

Free-Choice Addressing: A ‘New IP’ mechanism that allows applications to use any addressing mechanism instead of being constrained to IPv4 or IPv6 addresses as the only option.
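
A minimal sketch of this idea follows, assuming a hypothetical type/length/value encoding so that a single address field can carry an IPv4 address, an IPv6 address, or, say, a service identifier. The type codes and encoding are illustrative assumptions, not the actual New IP address format.

```python
# Illustrative sketch only: address families and type codes are assumptions,
# meant to show how a type/length-prefixed field could carry more than IPv4/IPv6.
import ipaddress

ADDR_IPV4, ADDR_IPV6, ADDR_SERVICE_ID = 1, 2, 3   # hypothetical type codes

def encode_address(kind: int, value) -> bytes:
    """Encode an address as (type, length, value) so nodes can parse any
    family they understand and skip the rest."""
    if kind == ADDR_IPV4:
        body = ipaddress.IPv4Address(value).packed
    elif kind == ADDR_IPV6:
        body = ipaddress.IPv6Address(value).packed
    elif kind == ADDR_SERVICE_ID:
        body = value.encode("utf-8")      # e.g., a content or service name
    else:
        raise ValueError("unknown address type")
    return bytes([kind, len(body)]) + body

print(encode_address(ADDR_IPV4, "192.0.2.1").hex())
print(encode_address(ADDR_SERVICE_ID, "sensor-cluster-7").hex())
```

The type/length prefix is what lets intermediate nodes forward on address families they recognize while ignoring ones they do not, which is the interoperability property the definition above alludes to.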

Holographic-Type Communications (HTC): Communication services that are able to transmit and stream holographic data across a network. HTC is characterized by its simultaneous support of very high throughput and very low latency across multiple concurrent and synchronized communications channels.
