Networks are designed to carry a certain amount of traffic at an acceptable level of performance. Performance deteriorates when the offered traffic exceeds the network's capacity: packets suffer long queuing delays at congested nodes and are lost when buffers overflow. Traffic control within the network regulates traffic flows to maintain adequate performance during congestion. The Internet currently takes a simple approach, dropping IP packets during congestion, but is expected to evolve toward more sophisticated traffic controls.
Key Terms in this Chapter
Explicit Congestion Notification: A proactive congestion avoidance method in which routers or switches convey congestion information to hosts via fields in packet headers.
Integrated Services (IntServ): An IETF framework supporting per-flow QoS based on reservations made with RSVP (Resource Reservation Protocol).
Congestion: A state of the network in which an excessive amount of offered traffic causes serious degradation of network performance.
Admission Control: A procedure for the network to accept or block new traffic flows before the flows start, usually by means of a signaling or resource reservation protocol.
Differentiated Services (DiffServ): An IETF framework supporting QoS classes by means of DiffServ code point (DSCP) marking, per-hop behaviors (PHBs), and stateless core routers.
Access Policing: Regulation of ingress traffic at the network boundary, usually by a leaky bucket algorithm.
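The leaky bucket mentioned above can be sketched in a few lines. This is a minimal illustration, not a production policer: the class name, parameters, and byte-based accounting are assumptions chosen for clarity. The bucket drains at the policed rate, and a packet conforms only if adding its size does not overflow the bucket.

```python
import time

class LeakyBucketPolicer:
    """Hypothetical leaky-bucket policer sketch.

    The bucket fills by the size of each admitted packet and drains
    continuously at the policed rate; a packet conforms if it fits.
    """

    def __init__(self, rate_bps, bucket_size):
        self.rate = rate_bps          # drain rate, bytes per second
        self.capacity = bucket_size   # bucket depth, bytes
        self.level = 0.0              # current bucket occupancy, bytes
        self.last = time.monotonic()  # time of the last update

    def conforms(self, packet_bytes, now=None):
        now = time.monotonic() if now is None else now
        # Drain the bucket for the elapsed time, never below empty.
        self.level = max(0.0, self.level - self.rate * (now - self.last))
        self.last = now
        if self.level + packet_bytes <= self.capacity:
            self.level += packet_bytes
            return True   # conforming: admit the packet
        return False      # non-conforming: drop or mark the packet
```

A non-conforming packet is typically dropped or marked as lower priority at the network boundary, so that excess traffic from one source cannot degrade service for others.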
Quality of Service (QoS): End-to-end network performance defined from the perspective of a specific user’s connection.
Active Queue Management (AQM): Intelligent buffer management strategies, such as random early detection (RED), enabling traffic sources to respond to congestion before buffers overflow.
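The RED scheme named above can be sketched as follows. The thresholds, drop probability, and averaging weight here are illustrative assumptions; real deployments tune these parameters carefully. RED tracks a moving average of the queue length and drops (or ECN-marks) arriving packets with a probability that ramps up between two thresholds, signaling congestion to sources before the buffer actually overflows.

```python
import random

class RedQueue:
    """Minimal sketch of Random Early Detection (RED) drop logic.

    Thresholds are in packets and chosen only for illustration.
    """

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th = min_th    # below this average: never drop
        self.max_th = max_th    # at or above this average: always drop/mark
        self.max_p = max_p      # drop probability as the average nears max_th
        self.weight = weight    # EWMA weight for the average queue length
        self.avg = 0.0          # exponentially weighted average queue length

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped or marked."""
        # Update the moving average of the instantaneous queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False        # no congestion signal: enqueue the packet
        if self.avg >= self.max_th:
            return True         # persistent congestion: drop/mark every packet
        # Between the thresholds, drop probability ramps up linearly.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```

Because the decision uses the averaged rather than the instantaneous queue length, short bursts pass through unharmed while sustained congestion triggers early, randomized drops that desynchronize TCP sources.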