
Quality of Service, Part 9 – FIFO Queuing

Date: Feb. 09, 2010
Author: Guest Authors

In part 8 of this blog series, we explored congestion management and its four main queuing methods. This post looks at the first of those four methods: First In First Out (FIFO) queuing.

To refresh our memories, congestion can occur anywhere within a network, such as at points with speed mismatches, aggregation points, or confluences. Queuing algorithms are used to manage congestion, and there are many different algorithms to serve different needs. The ultimate goal is to provide bandwidth and delay guarantees so that network traffic can be prioritized.

Speed mismatches occur when traffic moves from a network running at one speed (such as a 100 Mbps LAN) to a network running at a different speed (such as a 10 Mbps LAN). This could be a LAN-to-LAN or LAN-to-WAN mismatch; usually, however, it is a high-speed to low-speed connection.

Aggregation congestion occurs when multiple connections feed into one main connection. A typical situation is when multiple remote sites connect to one central site. For example, if you have twenty remote sites with 256 Kbps WAN links that all connect to headquarters over a single T1 connection, it is very easy for the T1 to become oversubscribed: the remote sites can offer roughly 5 Mbps of traffic toward a 1.544 Mbps link.
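
As a quick sanity check of that arithmetic, here is a small Python sketch; the 256 Kbps and T1 figures come from the example above, and the sketch itself is purely illustrative:

    # Rough oversubscription check for the aggregation example above.
    remote_sites = 20
    site_rate_kbps = 256            # per-site WAN link speed
    t1_rate_kbps = 1544             # T1 capacity

    aggregate_kbps = remote_sites * site_rate_kbps      # worst-case offered load
    ratio = aggregate_kbps / t1_rate_kbps

    print(f"Aggregate offered load: {aggregate_kbps} Kbps")   # 5120 Kbps
    print(f"T1 capacity:            {t1_rate_kbps} Kbps")
    print(f"Oversubscription ratio: {ratio:.1f}:1")           # roughly 3.3:1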

Queuing Components
Queuing accommodates bursts on the router when the arrival rate of packets is greater than the departure rate. There are two main reasons bursts occur and queuing is needed:


  1. The input interface is faster than the output interface

  2. The output interface is receiving packets from multiple other interfaces


Queuing is split into two parts:

  1. Hardware queue: Uses FIFO and is sometimes called the transmit queue (TxQ)

  2. Software queue: Schedules packets into the hardware queue based on the QoS settings


A full hardware queue indicates interface congestion, and software queuing is used to manage it. When forwarding a packet, the router bypasses the software queue entirely if the hardware queue is not full.
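
To make that hardware/software split concrete, here is a minimal, hypothetical Python sketch (not Cisco IOS code) of the decision just described: packets go straight into the hardware queue (TxQ) while it has room, and are handed to the software queue only once the TxQ is full.

    from collections import deque

    TXQ_DEPTH = 4                  # illustrative hardware (TxQ) depth, an assumption

    hardware_queue = deque()       # drained FIFO by the interface itself
    software_queue = deque()       # scheduled according to the QoS configuration

    def enqueue(packet):
        # Bypass the software queue whenever the hardware queue has room.
        if len(hardware_queue) < TXQ_DEPTH:
            hardware_queue.append(packet)     # no congestion: straight to the TxQ
        else:
            software_queue.append(packet)     # congestion: QoS scheduling applies

    def transmit_one():
        # The interface drains the TxQ; the scheduler refills it from software.
        if not hardware_queue:
            return None
        sent = hardware_queue.popleft()
        if software_queue:
            hardware_queue.append(software_queue.popleft())
        return sent
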
First In First Out (FIFO) Queuing

The FIFO queuing algorithm is the simplest of the congestion management methods. All packets are treated equally: they are placed into a single queue and serviced in the order they were received, hence the name first-in, first-out (a minimal sketch follows the list below). FIFO queuing offers the following benefits:


  • FIFO queuing is supported on all Cisco platforms

  • FIFO queuing is supported in all versions of Cisco IOS

  • FIFO queuing places an extremely low load on the system when compared with other queuing mechanisms.

  • FIFO queuing is predictable, and delay is determined by the maximum depth of the queue.

  • FIFO queuing does not add significant queuing delay at each hop as long as the queue depth remains low.
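
The Python sketch below illustrates both the single service order and the worst-case delay being bounded by queue depth; the 40-packet depth, 1500-byte packets, and 1.544 Mbps T1 link are assumptions chosen only for the arithmetic.

    from collections import deque

    MAX_DEPTH = 40                      # assumed maximum queue depth, in packets
    queue = deque()

    def enqueue(packet):
        # Tail drop: when the single FIFO queue is full, the new arrival is dropped.
        if len(queue) >= MAX_DEPTH:
            return False
        queue.append(packet)
        return True

    def dequeue():
        # Packets leave strictly in arrival order, regardless of traffic class.
        return queue.popleft() if queue else None

    # Worst-case queuing delay at this hop: a packet admitted at the tail of the
    # queue waits behind up to MAX_DEPTH packets of ~1500 bytes on a 1.544 Mbps link.
    packet_bits = 1500 * 8
    link_bps = 1_544_000
    print(f"Max queuing delay: about {MAX_DEPTH * packet_bits / link_bps * 1000:.0f} ms")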


FIFO queuing also poses the following limitations:

  • FIFO queuing is extremely unfair when an aggressive flow contends with a time-sensitive flow. The aggressive flow sends a large number of packets, many of which are dropped; the time-sensitive flow sends a modest number of packets, yet most of them are dropped because the queue is always full of the aggressive flow's packets. This behavior is called starvation (see the toy simulation after this list).

  • When the queue is full, packets entering the queue have to wait longer; when the queue is not full, they do not have to wait as long. This variation in delay is called jitter.

  • A single FIFO queue does not allow routers to organize buffered packets and service one class of traffic differently from other classes of traffic.

  • During periods of congestion, FIFO queuing benefits UDP flows over TCP flows. When experiencing packet loss due to congestion, TCP-based applications reduce their transmission rate, while UDP-based applications remain oblivious to the loss and continue transmitting at their usual rate. Because TCP-based applications slow their transmission rate to adapt to changing network conditions, FIFO queuing can result in increased delay, jitter, and a reduction in the amount of output bandwidth consumed by TCP applications.
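
As a toy illustration of the starvation behavior above, the following Python simulation (packet rates, queue depth, and service rate are arbitrary assumptions) shares one tail-drop FIFO queue between an aggressive flow and a modest flow. Because the aggressive flow keeps the queue full, the modest flow loses most of its packets as well, even though it contributes almost nothing to the congestion.

    from collections import deque
    import random

    MAX_DEPTH = 20          # shared FIFO queue depth (packets)
    SERVICE_PER_TICK = 5    # packets the output link can drain each interval

    queue = deque()
    sent = {"aggressive": 0, "modest": 0}
    dropped = {"aggressive": 0, "modest": 0}

    random.seed(1)
    for tick in range(1000):
        # The aggressive flow offers 20 packets per interval, the modest flow 1.
        arrivals = ["aggressive"] * 20 + ["modest"]
        random.shuffle(arrivals)
        for flow in arrivals:
            if len(queue) < MAX_DEPTH:
                queue.append(flow)
            else:
                dropped[flow] += 1          # tail drop: the queue is already full
        for _ in range(SERVICE_PER_TICK):
            if queue:
                sent[queue.popleft()] += 1

    for flow in ("aggressive", "modest"):
        total = sent[flow] + dropped[flow]
        print(f"{flow}: {sent[flow]} of {total} packets delivered "
              f"({100 * sent[flow] / total:.0f}%)")

Running the sketch shows both flows delivering roughly the same small fraction of their packets (about one quarter), which is exactly the unfairness described above: the modest flow pays the full price for congestion it did not create.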


Author: Paul Stryer
