Parameter Setting and Stability of PI Controller for AQM Router

Prasant Kumar Dash, Sukant Kishoro Bisoy, Narendra Kumar Kamila, Madhumita Panda
DOI: 10.4018/978-1-5225-0501-3.ch015

Abstract

AQM signals congestion early by dropping packets before the router buffer becomes completely full. Its goals are small queuing delays, low packet loss, and high link utilization. This work investigates the dynamics of the Proportional Integral (PI) controller in the presence of large FTP flows in a homogeneous scenario. We systematically derive quantitative guidelines and set the controller parameters as a function of the scenario parameters, namely the bottleneck bandwidth, the round-trip time, and the number of TCP flows, in order to avoid severe oscillation of the queue size. To assess how well PI performs, we consider Random Early Detection (RED), Random Exponential Marking (REM), and Adaptive Virtual Queue (AVQ) as alternative active queue management schemes. We then analyze the performance of PI against DropTail, REM, and AVQ in a wired-cum-wireless network. Our results show that properly configuring the parameters of PI stabilizes the queue length under dynamic environments and retains good performance even when network parameters such as the number of flows N, the target queue length, and the round-trip time RTT vary.
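The chapter's exact tuning rules are not reproduced here, but the controller it studies follows the standard discrete-time PI AQM update, in which the drop probability is adjusted from the queue-length error at each sampling instant. The sketch below illustrates that update law; the coefficients a and b and the sampling interval are placeholders that, in the chapter, would be derived from the bottleneck bandwidth, round-trip time, and number of flows.

```python
# Minimal sketch of a discrete-time PI AQM drop-probability update.
# The coefficients a, b and the sampling interval are illustrative placeholders;
# the chapter derives them from bottleneck bandwidth, RTT and number of TCP flows.

class PIController:
    def __init__(self, a, b, q_ref):
        self.a = a            # weight on the current queue error
        self.b = b            # weight on the previous queue error
        self.q_ref = q_ref    # target (reference) queue length in packets
        self.p = 0.0          # current drop/mark probability
        self.q_prev = 0.0     # queue length at the previous sampling instant

    def update(self, q):
        """Called once per sampling interval with the instantaneous queue length q."""
        self.p += self.a * (q - self.q_ref) - self.b * (self.q_prev - self.q_ref)
        self.p = min(max(self.p, 0.0), 1.0)   # keep the probability in [0, 1]
        self.q_prev = q
        return self.p
```

With a > b the update has a net integral action, which is what drives the steady-state queue length toward the target q_ref regardless of the load.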

Introduction

Congestion control is an important issue in the Internet. Congestion results in long delays in data delivery and in wasted resources due to lost or dropped packets. The primary role of a router is to switch packets from input links to output links through a buffer. Apart from forwarding packets, routers are involved in controlling congestion in the network. It is known from (Sally Floyd, 1993) that router algorithms focus on two main concepts, namely queue management and scheduling. Queue management algorithms manage the length of packet queues by dropping packets whenever necessary, whereas scheduling algorithms determine which packet is to be sent next. These algorithms are used primarily to manage the allocation of bandwidth among various flows.

New trends in communication, especially the deployment of multicast and real-time audio/video streaming applications, are likely to increase the percentage of non-TCP traffic in the Internet (University of Southern California, 1981). Such applications do not share the available bandwidth fairly with applications built on TCP, such as Web browsers, FTP, or e-mail clients. The Internet community strongly fears that the current evolution could lead to congestion collapse and starvation of TCP traffic. TCP can detect packet drops and interpret them as indications of congestion in the network. TCP senders react to these packet drops by reducing their sending rates. This reduction in sending rate translates into a decrease in the incoming packet rate at the router, which effectively allows the router to clear its queue.

When the incoming packet rate is higher than the router's outgoing packet rate, the queue size gradually increases and the queue eventually becomes full. The traditional technique for managing queue lengths is to set a threshold (in terms of packets) for each queue, accept packets until the threshold is reached, and then reject (drop) subsequent incoming packets until the queue decreases below the threshold. This technique is known as "tail drop", since the packet that arrived most recently is dropped when the queue is full. Braden, B. et al. pointed out two important drawbacks, namely lock-out and full queues. In some situations tail drop allows a single connection or a few flows to monopolize queue space, preventing other connections from getting room in the queue. This "lock-out" phenomenon is often the result of synchronization or other timing effects. The tail drop discipline also allows a queue to maintain a full status for long periods of time, since tail drop signals congestion only when the queue has become full. It is important to reduce the steady-state queue size, and this is perhaps queue management's most important goal. In short, queue limits should not reflect the steady-state queues; instead they should reflect the size of the bursts that need to be absorbed. Besides tail drop, Braden et al. considered two alternative queue disciplines that can be applied when the queue becomes full: "random drop on full" and "drop front on full" (W. Feng, 1999). Both disciplines solve the lock-out problem, but neither of them solves the full-queues problem. In the Internet, dropped packets serve as a critical mechanism of congestion notification to end nodes. The solution to the full-queues problem is for routers to drop packets before a queue becomes full, so that end nodes can respond to congestion before buffers overflow.
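To make the tail-drop discipline described above concrete, the following is a minimal sketch (the threshold value and queue representation are illustrative placeholders, not the chapter's simulation code):

```python
from collections import deque

class TailDropQueue:
    """Drops every arriving packet while the queue is at its threshold ("tail drop")."""

    def __init__(self, threshold):
        self.threshold = threshold   # maximum queue length in packets
        self.queue = deque()

    def enqueue(self, packet):
        if len(self.queue) >= self.threshold:
            return False             # buffer full: the most recent arrival is dropped
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None
```

Because packets are only dropped once the buffer is already full, senders learn about congestion late, which is exactly the full-queues problem noted above.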
Such an approach is called "Active Queue Management (AQM)" (Braden, B., 1998) and is discussed in detail in RFC 2309. By dropping packets before buffers overflow, AQM allows routers to control when packet drops occur. By keeping the average queue size small, queue management reduces the delays seen by flows. This is particularly important for interactive applications, whose subjective (and objective) performance is better when the end-to-end delay is low. Active queue management can also prevent lock-out behavior by ensuring that there is almost always a buffer available for an incoming packet, and it can prevent a router bias against low-bandwidth but highly bursty flows.
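As an illustration of the early-drop idea only (not a scheme taken from the chapter), the sketch below drops arriving packets with a probability that grows with the average queue length once it exceeds a minimum threshold, in the spirit of RED:

```python
import random

def early_drop(avg_queue, min_th, max_th, max_p):
    """Return True if an arriving packet should be dropped (RED-style early drop).

    avg_queue: smoothed average queue length in packets
    min_th, max_th: thresholds between which the drop probability ramps up
    max_p: drop probability reached at max_th
    """
    if avg_queue < min_th:
        return False                           # queue short: never drop early
    if avg_queue >= max_th:
        return True                            # queue long: drop every arrival
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p                 # drop with linearly increasing probability
```

The PI controller studied in this chapter pursues the same goal of dropping before overflow, but computes the drop probability from the queue-length error through the feedback law sketched earlier rather than from a fixed linear profile.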
