Deficit weighted round robin queuing

[Figure: DWRR queuing]

Because WFQ emphasizes ease of use over granularity, it is not ideal for situations where precise control is needed over certain traffic flows. Deficit weighted round robin (DWRR) queuing allows a network administrator to group traffic into classes. A “class” is an entity the network administrator defines to receive distinct treatment in the queue. A class can be as narrow as a single flow, perhaps carrying only the CEO's VoIP conversations, or as broad as a group of flows, such as all UDP traffic or even all IP traffic. In other words, where WFQ classifies traffic per session, DWRR uses administrator-defined traffic classes, which are less granular but more application-specific.
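
As an illustration of how such classes might be defined, the fragment below sketches a hypothetical classifier in Python. The field names and the address of the CEO's phone are invented for the example; they are not drawn from any particular product.

    def classify(packet):
        """Map a packet (a dict of header fields, purely illustrative) to a DWRR class."""
        if packet.get("src_ip") == "10.1.1.25":    # hypothetical address of the CEO's VoIP phone
            return "ceo_voip"                      # a class as narrow as a single user's flows
        if packet.get("proto") == "udp":
            return "all_udp"                       # a broad class: every UDP packet
        return "all_ip"                            # catch-all for the remaining IP traffic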

Each class is given its own queue. Like weighted round robin (WRR), each queue is serviced in proportion to its weight. However, the weighting operates at the byte or bit level. In fact, each queue can be considered to have a token bucket associated with it, whose accumulation rate is proportional to the percentage of bandwidth allocated to that queue. When the scheduler visits the queue, packets are transmitted from it as long as there are sufficient tokens in the bucket. When the next packet in the queue exceeds the remaining tokens in the bucket, the scheduler moves on to the next queue.
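
The sketch below expresses this per-class credit mechanism in Python. The name DwrrScheduler and its quanta parameter are illustrative rather than taken from any real implementation: each class banks a weight-proportional quantum of credit (here in bits) per round and spends it on its head-of-line packets.

    from collections import deque

    class DwrrScheduler:
        """Illustrative DWRR sketch: each class banks a weight-proportional
        quantum of credit per round and spends it in bits."""

        def __init__(self, quanta):
            # quanta: {class_name: credit in bits added per round}, proportional to weight
            self.quanta = quanta
            self.queues = {name: deque() for name in quanta}
            self.deficit = {name: 0 for name in quanta}

        def enqueue(self, cls, packet_bits):
            self.queues[cls].append(packet_bits)

        def next_round(self):
            """Visit every class once; return the packets sent, in order."""
            sent = []
            for cls, queue in self.queues.items():
                if not queue:
                    self.deficit[cls] = 0          # an empty class banks no credit
                    continue
                self.deficit[cls] += self.quanta[cls]
                # Send while the head-of-line packet fits within the remaining credit
                while queue and queue[0] <= self.deficit[cls]:
                    pkt = queue.popleft()
                    self.deficit[cls] -= pkt
                    sent.append((cls, pkt))
            return sent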

Consider the case of a 100 Mbps transmission facility and three queues that have been allocated 50 percent, 30 percent, and 20 percent of its bandwidth. We will call these the high (yellow), medium (red), and low (blue) queues, respectively. This means the yellow queue is accumulating tokens at 50 Mbps, the red queue at 30 Mbps, and the blue queue at 20 Mbps. There is also usually a defined maximum bucket depth (the most credit a queue can bank), which is likewise proportionate. In our example, these limits are 50 kilobits, 30 kilobits, and 20 kilobits, respectively.

Let's assume the network is idle and the queues have all filled their buckets. Suddenly a flood of packets arrives, affecting all queues. The scheduler visits the yellow queue, where the packets are all 1200 bits long. Packets are sent from this queue (41 of them) until the number of tokens in the bucket drops below 1200, at which point there are not enough tokens for the next packet, so the scheduler moves on to the red queue. Here the packets are all 6000 bits long. After the fifth packet the bucket is empty, so there are not enough tokens to transmit the sixth and the scheduler moves on. In the final queue the packets are around 10,000 bits long. After two of these the token bucket is exhausted, so the scheduler returns to the yellow queue. While the other two queues were being serviced, the yellow queue was accumulating tokens, so transmission can begin again.
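
Under the same assumptions, the hypothetical DwrrScheduler sketch above reproduces this first round numerically if the bucket limits (50, 30, and 20 kilobits) are used as the per-round quanta:

    sched = DwrrScheduler({"yellow": 50_000, "red": 30_000, "blue": 20_000})

    for _ in range(60):
        sched.enqueue("yellow", 1_200)      # 1200-bit packets
    for _ in range(10):
        sched.enqueue("red", 6_000)         # 6000-bit packets
    for _ in range(5):
        sched.enqueue("blue", 10_000)       # roughly 10,000-bit packets

    first_round = sched.next_round()
    counts = {c: sum(1 for cls, _ in first_round if cls == c)
              for c in ("yellow", "red", "blue")}
    print(counts)   # {'yellow': 41, 'red': 5, 'blue': 2}

On the next round each backlogged class receives a fresh quantum on top of any leftover credit, so the 50/30/20 split holds over time.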

When one of the queues becomes empty, it is removed from the rotation and the token rates for the remaining buckets are adjusted proportionately, ensuring that all of the bandwidth of the facility is utilized. If only one queue has traffic, it simply gets all of the bandwidth.
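
One way to express this work-conserving behaviour, again as a sketch rather than any vendor's algorithm, is to renormalize the configured weights over only the backlogged classes:

    def active_rates(link_bps, weights, backlogged):
        """Hypothetical helper: split the link among backlogged classes only,
        in proportion to their configured weights."""
        total = sum(weights[c] for c in backlogged)
        return {c: link_bps * weights[c] / total for c in backlogged}

    # If the yellow queue drains, the 100 Mbps link splits 60/40 between red and blue:
    print(active_rates(100_000_000, {"yellow": 50, "red": 30, "blue": 20}, ("red", "blue")))
    # {'red': 60000000.0, 'blue': 40000000.0}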

DWRR manages all elements of queuing in one technology that is relatively easy to use, yet powerful enough to meet granular QoS needs. It does, however, suffer from one drawback. Because it is a form of weighted fair queuing, very delay-sensitive traffic, such as voice, can still be subject to jitter as each of the variable-length queues is serviced in turn.

PodSnacks

Deficit Weighted Round Robin (DWRR): http://podcast.hill-vt.com/podsnacks/2007q4/dwrr.mp3