Managing Network Quality of Service in Current and Future Internet

Mark Yampolskiy, Wolfgang Fritz, Wolfgang Hommel
DOI: 10.4018/978-1-4666-1888-6.ch008

Abstract

In this chapter, the authors discuss the motivation, challenges, and solutions for network and Internet quality of service management. While network and Internet service providers have traditionally ensured sufficient quality by simply overprovisioning their internal infrastructure, more economic solutions are required to adapt network infrastructures and their backbones to current and upcoming traffic characteristics and quality requirements with sustained success. The chapter outlines real-world scenarios to analyze both the requirements and the related research challenges, discusses the limitations of existing solutions, and details practitioners’ current best practices, promising research results, and the upcoming paradigm of service-level-management-aware network connections. Special emphasis is put on presenting the various facets of the quality assurance problem and the alternative solutions that have been elaborated with respect to the technical heterogeneity, restrictive information sharing policies, and legal obligations encountered in international service provider cooperation.

Motivation

Referring to the Internet as an information superhighway is an old and worn-out, but still valid, metaphor. If we need to drive from A to B, we expect a road in good shape, do not want to make any major detours, and try to avoid traffic jams. Most drivers prefer roads that can be used free of charge, motorcyclists typically prefer nice scenery, and fleet managers want to make sure that all of the cargo that is split among several trucks arrives on time. Commuters drive to their workplace, families go on holiday, truck drivers feel at home on the road, and some people just drive for fun. Sometimes we are in a hurry, sometimes it feels OK to be a bit late, and sometimes there are accidents.

The situation on the Internet is quite similar: We need to get emails, files, or any other types of data packets from A to B. No matter whether we use the Internet as part of our professional or our private life, we expect our Internet connection to be of “high quality”. But what does quality mean in this context? While most users would use characteristics like fast and reliable, research and industry have agreed on quite a large set of so-called quality-of-service (QoS) parameters over the past decades, including bandwidth, delay, jitter (delay variation), and packet loss rate. Then, with the professionalization of network and Internet service providers, the demand for contracts and service level agreements (SLAs) rose: It was no longer only important which characteristics a working connection had, but also which network availability a provider could guarantee, how fast technical failures could be repaired, and how long planned downtimes and maintenance windows would take. To prevent customers and providers from disputing over discrepancies in how each of them thought those SLAs were being met, neutral and objective measurement and monitoring criteria and systems became necessary. It only took a few years until a previously technical-level-only issue turned into a complicated multi-billion dollar business that depends on comprehensive network QoS management concepts and tools.
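To make these parameters more concrete, the following minimal Python sketch shows how jitter and packet loss rate could be derived from a series of one-way delay measurements. The function name and sample values are illustrative assumptions of ours, not taken from any particular monitoring system or standard:

# Illustrative sketch: deriving basic QoS metrics from delay samples.
# Function name and sample values are hypothetical.

def qos_metrics(delays_ms, packets_sent):
    """Compute mean delay, jitter (mean delay variation), and loss rate.

    delays_ms    -- one-way delay (in ms) of each packet that arrived
    packets_sent -- total number of packets the sender transmitted
    """
    received = len(delays_ms)
    mean_delay = sum(delays_ms) / received
    # Jitter as the mean absolute difference between consecutive delays,
    # similar in spirit to (but simpler than) the RFC 3550 jitter estimate.
    jitter = sum(abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])) / (received - 1)
    loss_rate = 1 - received / packets_sent
    return mean_delay, jitter, loss_rate

# Example: 8 of 10 packets arrived, with the delays listed below.
delay, jitter, loss = qos_metrics([20.1, 22.3, 19.8, 25.0, 21.2, 20.5, 23.9, 20.0], 10)
print(f"delay={delay:.1f} ms, jitter={jitter:.2f} ms, loss={loss:.0%}")

Exactly such simple, objectively computable figures are what SLA monitoring systems report, so that customer and provider argue about measured numbers rather than subjective impressions.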

However, there is only one Internet, and just as the same road is used by motorcycles, cars, buses, and trucks with 50 tons of cargo, the Internet uplink we have at home must be used for all types of data, whether it is a short email, a huge file upload, or a feature-length video-on-demand movie download. Yet, unlike on the road we drive on, we may want to finish a file upload to Australia while watching a live news feed from the United States, all the while sitting in front of a PC somewhere in Europe. With multiple service providers in various countries involved, both the heterogeneity of the involved technical components and the complexity of legal and organizational constraints explode. Given the lack of contracts, and therefore of some type of trust, between all involved parties, as well as the risk of abuse, e.g., by users claiming that a huge file upload is an important voice-over-IP phone call in order to obtain a better quality connection, Internet QoS management is a hard and challenging task.

One of the major challenges is thus that requirements differ greatly depending on the actual use case or application the Internet is used for. In the consumer area, applications like telephony and online gaming have quite low traffic and bandwidth demands, but require very good availability and latency/jitter characteristics; on the other hand, video-on-demand traffic, which requires considerably higher bandwidth, is currently expected to grow by about 500% within the next three years. In industry and research, however, the demands for bandwidth and availability are even higher. For example, the Large Hadron Collider at CERN in Geneva, Switzerland, conducts physics experiments that produce about 15 petabytes (i.e., thousands of terabytes) of raw data each year. This data must be processed and analyzed by physicists around the globe and is thus split up and transported to several dozen higher education, academic, and research institutions worldwide. While a larger delay is perfectly acceptable for these connections, their throughput and availability are crucial for the overall success of this huge long-term project.
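A quick back-of-the-envelope calculation shows why throughput dominates in this scenario. The sketch below, a simplification of our own that ignores traffic peaks and the replication of data sets to multiple sites, derives the sustained average rate implied by the annual volume stated above:

# Rough average throughput implied by 15 PB of raw data per year.
# Simplification: real transfers are bursty, and each data set is
# typically copied to several sites, multiplying the actual load.

petabytes_per_year = 15
bits_per_year = petabytes_per_year * 10**15 * 8
seconds_per_year = 365 * 24 * 3600
avg_rate_gbps = bits_per_year / seconds_per_year / 10**9

print(f"Sustained average rate: {avg_rate_gbps:.1f} Gbit/s")  # roughly 3.8 Gbit/s

Even this lower bound exceeds typical consumer uplinks by several orders of magnitude, which illustrates how far research requirements diverge from those of home users sharing the same Internet.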
