The ONL NPR Tutorial


Filters, Queues and Bandwidth


Introduction

[[ 2upd-flows.png Figure ]]

We demonstrate the interaction of two 200 Mbps UDP flows going through a 300 Mbps bottleneck link (port 4 of the left NPR). The flows are created by the iperf utility. First, we show how to direct both flows through a single reserved queue. Then, we show how to direct each flow to its own reserved queue.

The new concepts demonstrated by this example include installing filters to direct flows into specific queues, reserved flow queues, setting link capacities, queue thresholds and DRR quanta, and monitoring queue lengths and packet drops.

Preparation

In this example, two shell scripts are used to start traffic senders and receivers. You can get these by copying them from the directory /users/onl/export/NPR-Examples/ (after SSHing to the ONL user host onl.arl.wustl.edu) into the directory on the ONL user host where you store your executables and shell scripts (e.g., ~/bin/).

File          Description
urcvrs-2npr   Start iperf UDP servers (receivers) on hosts n2p2 and n2p3
usndrs-2npr   Start iperf UDP clients (senders) on hosts n1p2 and n1p3

Although you could open four windows and SSH into the four hosts to run the iperf commands by hand, these scripts let you start the commands on all of the nodes from the ONL user host onl.arl.wustl.edu.
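The scripts themselves are not reproduced here, but a script like urcvrs-2npr presumably loops over the receiver hosts and starts an iperf UDP server on each one over SSH. The sketch below is an illustration under that assumption; it echoes the commands (a dry run) rather than executing them, so you can check host names before touching the testbed:

```shell
# Hypothetical sketch of a receiver-starting script such as urcvrs-2npr.
# The real script is in /users/onl/export/NPR-Examples/; the ssh/iperf
# command line here is an assumption, not the script's actual contents.
start_receivers() {
    for host in n2p2 n2p3; do
        # -s: run iperf as a server; -u: use UDP
        echo "ssh $host iperf -s -u"
    done
}

start_receivers
```

To actually start the servers, you would run the echoed commands (e.g., replace echo with eval).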

This example assumes that you have already created the SSH tunnel required by the RLI and started the RLI on your client host. It also assumes that you have a basic understanding of Unix commands. The example has two parts:

  1. Two UDP flows share a single queue at port 1.4
    Packets from UDP flows contend for the bottleneck output link in a FIFO manner.
  2. Two UDP flows use separate queues at port 1.4
    Packets from the two UDP flows are placed in separate queues, where each flow gets its own share of the output link capacity. The user can specify the shares through the Queue Table at port 1.4. Separating the flows also smooths out the bandwidth obtained by each UDP flow.
 

Example A: Two UDP Flows Share A Single Queue At Port 1.4

Steps (In Brief)

The main steps in Example A are:

  1. Commit a dumbbell configuration
  2. Install filters at ports 1.2, and 1.3
  3. Set the link capacity at port 1.4
  4. Configure queue 64 at port 1.4
  5. Create the bandwidth chart
  6. Create the queue length chart
  7. Create the packet drop chart
  8. Start the two UDP receivers
  9. Start the two UDP senders
  10. Terminate the UDP receivers

Steps (In Detail)

  1. Commit a dumbbell configuration
    Create the dumbbell topology shown above with default route tables. The RLI example given earlier shows the steps in detail.
  2. [[ Dumbbell Figure]]
  3. Install filters at ports 1.2 and 1.3 to direct all UDP packets to a single output queue
  4. Put all UDP packets arriving at either port 1.2 or 1.3 into queue 64, a reserved flow queue. The packets in queue 64 will be serviced in FIFO order. Because packets do not arrive in an alternating manner even when sent at the same rate, the bandwidth of each flow will vary over time. If we do not install these filters, the packets would go into one or two of the 64 datagram queues (numbered 0 to 63) based on a hash function over some of the bits in their IP headers.
    Remember: Queues 0-63 are datagram queues, and queues 64-8192 are reserved queues. By default, packets are placed into a datagram queue determined by a stochastic fair queueing algorithm (a hash function applied over IP header fields). The only thing special about queue 64 is that it is a reserved queue.
  5. Set the (outgoing) link capacity at port 1.4 to 300 Mbps
    Watch Out!!! This does NOT limit the incoming traffic rate, only the outgoing rate.

  6. Configure queue 64 at port 1.4
    Output queues have two parameters. The threshold is the size of the queue in bytes; any packet that cannot fit into the queue is dropped. The quantum is the Deficit Round Robin (DRR) credit parameter, which determines the queue's share of the output link capacity. Since all UDP packets will be put into the single queue 64, the quantum is irrelevant at this point.
  7. [[ Add Queue Figure ]]
  8. Create the bandwidth chart
    Now create three charts: 1) Bandwidth, 2) Queue Length, and 3) Packet Drops. This step shows how to create the Bandwidth chart, which displays the traffic bandwidth coming from the two senders and going to the two receivers by monitoring the byte counters inside the NPR.
  9. Create the queue length chart
    Monitor congestion at the bottleneck port by monitoring the length of queue 64 at port 1.4.
  10. Create the packet drop chart
    Create another chart to display the number of packets dropped by the Queue Manager at port 1.4. This count is kept in register 31 (see below and Summary Information => Counters).
  11. Start the two UDP receivers
    Start iperf UDP servers on n2p2 and n2p3 to act as receivers by running the urcvrs-2npr shell script from the ONL user host. These receivers will run as background processes until we kill them. The next step shows how to start two iperf UDP clients on n1p2 and n1p3 to act as the traffic senders.
  12. Start the two UDP senders
    Now that the two receivers are running, start sending traffic from the two senders at n1p2 and n1p3 using the usndrs-2npr shell script.
  13. Terminate the UDP receivers
    When you are done, kill the background iperf server processes on n2p2 and n2p3.
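By analogy with the receiver script, usndrs-2npr presumably starts an iperf UDP client on each sender host, pointed at its receiver and offering the 200 Mbps load described above. The sender-to-receiver pairing and the exact flags below are assumptions for illustration only; again the commands are echoed rather than executed:

```shell
# Hypothetical sketch of a sender-starting script such as usndrs-2npr.
# The n1p2->n2p2 and n1p3->n2p3 pairing and the 200 Mbps offered load
# match the experiment's description; the real script's flags may differ.
start_senders() {
    for pair in "n1p2 n2p2" "n1p3 n2p3"; do
        set -- $pair   # $1 = sender host, $2 = receiver host
        # -c <host>: client mode; -u: UDP; -b 200M: target 200 Mbps offered load
        echo "ssh $1 iperf -c $2 -u -b 200M"
    done
}

start_senders
```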

Example B: Two UDP Flows Use Separate Queues At Port 1.4

This part repeats the experiment in Example A but with the two flows going to different queues at port 1.4: traffic from n1p2 goes into queue 65 and traffic from n1p3 goes into queue 64. The packet scheduler divides the 300 Mbps link capacity at port 1.4 according to the quantum values in the Queue Table. In the simplest case, where both flows get an equal share of the link capacity, each flow gets 150 Mbps during congestion periods, but with less variation than when both flows were placed in the same queue. The example also shows how to give the flow from n1p2 twice as much bandwidth (200 Mbps) as the one from n1p3 (100 Mbps).
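DRR's bandwidth division can be checked with a quick calculation: each queue's share is its quantum divided by the sum of all quanta, times the link capacity. The quantum values of 3,000 and 1,500 below are illustrative choices (not taken from the Queue Table steps) that produce the 2:1 split described above:

```shell
# Share of a DRR-scheduled link: link_mbps * quantum / sum_of_quanta.
# Integer arithmetic is fine here since the examples divide evenly.
drr_share() {
    link_mbps=$1; quantum=$2; sum_quanta=$3
    echo $(( link_mbps * quantum / sum_quanta ))
}

# Equal quanta: each flow gets 150 Mbps of the 300 Mbps link.
drr_share 300 1500 3000    # -> 150
# 2:1 quanta (e.g., 3000 for queue 65, 1500 for queue 64): 200 and 100 Mbps.
drr_share 300 3000 4500    # -> 200
drr_share 300 1500 4500    # -> 100
```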

Steps (In Brief)

The main steps in Example B are:

  1. Direct the two flows into separate queues at port 1.4
  2. Add and configure queue 65 at port 1.4
  3. Add queue 65 to the Queue Length chart
  4. Start the UDP receivers and senders
  5. Change the bandwidth shares at port 1.4
  6. Terminate the UDP receivers

Steps (In Detail)

  1. Direct the two flows into separate queues at port 1.4
    Direct packets from n1p2 to queue 65 and those from n1p3 to queue 64. Since we have already installed the filter at port 1.3, we only need to install one at port 1.2.
  2. [[ filter-1.2-q65.png Figure ]]
  3. Add and configure queue 65 at port 1.4
    Configure queue 65 to have the same properties as queue 64: a threshold of 1,500,000 bytes and a quantum of 1,500.
  4. [[ add-queue-1.4-q65.png Figure ]]
  5. Add queue 65 to the Queue Length chart
    The steps are identical to the ones for queue 64.
  6. Start the UDP receivers and senders
    The procedure was described in Example A.
  7. Change the bandwidth shares at port 1.4
    Change the quantum values in the Queue Table at port 1.4 so that the flow from n1p2 gets twice as much bandwidth (200 Mbps) as the one from n1p3 (100 Mbps).
  8. Terminate the UDP receivers
    The procedure was given in Example A.

Common Problems/Mistakes

Here are some common problems/mistakes:

Test Your Understanding

Tipping Point Experiments

[[ port-rate-1.4-412Mbps.png Figure ]]

Start with the configuration for Example A, where both UDP flows go to queue 64 at port 1.4. Repeat the experiment in Example A with the port rate at port 1.4 set to 412 Mbps.

Bandwidth Sharing Experiments

Add hosts at ports 1.1 and 2.4 and modify the experiment in Example A so that the bandwidths through the bottleneck link at port 1.4 are in the ratio 1:2:3 for traffic from ports 1.1, 1.2, and 1.3, respectively.

    
Revised: Wed, Sep 17, 2008