The ONL Tutorial


Filters, Queues and Bandwidth

We first demonstrate the interaction of two 200 Mbps UDP flows going through a 300 Mbps bottleneck link. We then repeat the experiment with TCP flows, and finally with TCP flows whose ACK packets are delayed by 50 milliseconds. In all cases, we use the one-NSP configuration shown below, with two traffic sources coming from n1p2 and n1p3, going through the bottleneck link at port 7 and finally on to n1p4 and n1p5, respectively. With TCP traffic, the ACK path is the reverse of the path taken by the data packets.

[[ 1-NSP dumbbell configuration ]]

Preparation

The table below shows the shell scripts used to start traffic senders and receivers in this example. Before beginning this example, you should copy the files to the directory on the ONL user host where you store your executables and shell scripts (e.g., ~/bin/). You can do that either by selecting each link below to get the files and then copying them to the appropriate place, or by copying them directly from the directory /users/onl/export/Examples/Filters,_Queues_and_Bandwidth/ after SSHing to the ONL user host onl.arl.wustl.edu.

File                       Link to File
1-NSP UDP Traffic Scripts  urcvrs-1nsp, usndrs-1nsp
1-NSP TCP Traffic Scripts  trcvrs-1nsp, tsndrs-1nsp
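
For example, after SSHing to onl.arl.wustl.edu, something like the following copies the scripts into ~/bin/ (a sketch: the quotes handle the commas in the directory name, and the chmod is only needed if the copies are not already executable):

	onlusr> cp "/users/onl/export/Examples/Filters,_Queues_and_Bandwidth/"* ~/bin/
	onlusr> chmod +x ~/bin/urcvrs-1nsp ~/bin/usndrs-1nsp ~/bin/trcvrs-1nsp ~/bin/tsndrs-1nsp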

This example has four parts. We begin with a basic experiment involving two UDP flows passing through a bottleneck link at port 7 and then show three variations involving packet scheduling, TCP traffic, and packet delay emulation. This example assumes that you have already created the SSH tunnel required by the RLI and started the RLI on your client host.

  1. Two UDP flows sharing a single queue at port 7
    Packets from UDP flows contend for the bottleneck output link in a FIFO manner.
  2. Two UDP flows using separate queues at port 7
    Packets from the two UDP flows are placed in separate queues, where each flow gets its share of the output link capacity. The user can specify the shares through the Queue Table at port 7. Separating the flows also smooths out the traffic bandwidth obtained by each UDP flow.
  3. Two TCP flows using separate queues at port 7
    We switch to the TCP traffic scripts and show the bandwidth behavior that is characteristic of TCP's congestion avoidance algorithm.
  4. Delay TCP ACK packets at port 6
    We delay the TCP ACK packets by installing a delay plugin and an egress GM filter at port 6.

Part A: Two UDP flows sharing a single queue at port 7

Steps (In Brief)

Here are the major steps in this example:

  1. Commit a 1-NSP dumbbell configuration
  2. Set up the route tables at ports 2-7
  3. Install an egress GM filter at port 7 to direct all packets to a single queue
  4. Set the link bandwidth at port 7
  5. Configure egress queue 300 at port 7
  6. Create the bandwidth chart
  7. Create the queue length chart
  8. Create the packet drop chart
  9. Start the two UDP receivers
  10. Start the two UDP senders
  11. Stop the UDP senders

Steps (In Detail)

  1. Commit a 1-NSP dumbbell configuration
    Create the 1-NSP dumbbell topology shown on the The_Remote_Laboratory_Interface page.

    [[ network.png ]]
  2. Set up the route tables at ports 2-7

    Ports   Next Hop
    2, 3    7
    4, 5    6
    6, 7    Default Forwarding

    Traffic from the sources at ports 2 and 3 should go to port 7, where it loops back into port 6. From port 6, traffic goes to the appropriate destination (port 4 or 5). Return TCP traffic (ACK packets) takes the reverse path. We create a single entry in the route tables at ports 2-3 to direct traffic to port 7 and a single entry in the route tables at ports 4-5 to direct return traffic to port 6. At ports 6 and 7, we create default entries.

    For port 2:
    [[ p2_route_table-menu.png ]]
    [[ p2_route_table-resize.png ]]

    For ports 3-5:
    [[ p2p5_route_tables-resize.png ]]

    For ports 6 and 7, we use default routing:
    [[ p6_route_table-resize.png ]]
  3. Install an egress GM filter at port 7 to direct all packets to a single queue
    We will put all packets arriving at the egress side of port 7 into queue 300, so packets from both UDP flows will be serviced from queue 300 in FIFO order. Because packets do not arrive in an alternating manner even when sent at the same rate, the bandwidth of each flow will vary over time. If we did not install this GM filter, the packets would go into one or two of the 64 datagram queues based on a hash function over some of the bits in their IP headers. By sending all packets to one queue, we make it easier to observe the queue length but harder to guarantee bandwidth.
  4. Set the link bandwidth at port 7
    We set the bandwidth of the port 7 link to 300 Mbps so that it acts as the bottleneck.

  5. Configure egress queue 300 at port 7
    Egress queues have two parameters. The threshold is the size of the queue in bytes; i.e., any packet that cannot fit into the queue is dropped. The quantum is the Deficit Round Robin (DRR) credit parameter, which determines the queue's share of the output link capacity. Since all packets will be put into queue 300, the quantum is irrelevant at this point.

    [[ p7_q300-resize.png ]]
  6. Create the bandwidth chart
    We will now create three charts: 1) Bandwidth, 2) Queue Length, and 3) Packet Drops. This step shows how to create the Bandwidth chart, which displays the traffic bandwidth coming from the two sources and the traffic bandwidth going to the two receivers by monitoring the counters inside the ingress side of the ATM switch.
  7. Create the queue length chart
    We monitor the congestion (queueing) at egress port 7 by monitoring the length of queue 300.

    [[ p7_egress_qlength-resize.png ]]
  8. Create the packet drop chart
    We monitor the number of packets arriving at the FPX on the egress side of port 7 that are dropped.

    [[ p7_egress_drops-menu-resize.png ]]
  9. Start the two UDP receivers
    We need to start iperf UDP servers at n1p4 and n1p5 to act as receivers. These receivers will run as background processes until we kill them. After starting the receivers, we can start the two iperf UDP clients to act as the traffic senders. We will use the urcvrs-1nsp shell script to start both receivers and the usndrs-1nsp shell script to start both senders. Furthermore, we will run these scripts on the onlusr host, which means that they will remotely launch the receivers and senders.
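    For example (mirroring the TCP receiver invocation in Part C; the log file name is our choice):

    	onlusr> urcvrs-1nsp >& log &
    	onlusr> cat log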
  10. Start the two UDP senders
    Now that the two receivers are running, we can start sending traffic from the two senders at n1p2 and n1p3. We will run the usndrs-1nsp script from onlusr:
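    	onlusr> usndrs-1nsp

    (Under the hood, the script presumably starts iperf UDP clients sending at 200 Mbps each, e.g., something like iperf -c <receiver> -u -b 200M; see the script itself for the exact options it uses.)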
  11. Stop the UDP senders
    The sender shell script periodically starts new UDP iperf senders and will continue indefinitely unless the user kills the sender script.
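    One way to kill it (a sketch, assuming the script is running in the background on onlusr) is to match it by command name with pkill -f:

    	onlusr> pkill -f usndrs-1nsp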

Part B: Two UDP flows using separate queues at port 7

This part repeats the experiment but with the two flows going to different egress queues at port 7 (queues 300 and 301). The packet scheduler divides the 300 Mbps link capacity at port 7 in proportion to the quantum values in the Queue Table. In the simplest case, where both queues have the same quantum, each flow gets 150 Mbps during congestion periods, with less variation than when both flows were placed in the same queue. (For example, if queue 300 had a quantum of 4,096 and queue 301 a quantum of 2,048, queue 300 would get roughly two-thirds of the link, about 200 Mbps, and queue 301 about 100 Mbps.)

Steps (In Brief)

  1. Direct the two flows into separate queues at port 7
  2. Configure egress queue 301 at port 7
  3. Add queue 301 to the Queue Length chart
  4. Start the two UDP senders
  5. Stop all UDP senders and receivers

Steps (In Detail)

Here are the steps in detail:

  1. Direct the two flows into separate queues at port 7
    We will direct packets from n1p2 to queue 300 and those from n1p3 to queue 301.

    [[ p7_gm_filter-2queues-resize.png ]]
  2. Configure egress queue 301 at port 7
    We configure queue 301 to have the same properties as queue 300: a threshold of 150,000 bytes and a quantum of 2,048.

    [[ p7_queue_table+q301-resize.png ]]
  3. Add queue 301 to the Queue Length chart
  4. Start the two UDP senders
  5. Stop all UDP senders and receivers
    In preparation for repeating the experiment with TCP instead of UDP traffic, we need to terminate all of our iperf processes. The following sequence of commands can be put into a shell script and run from onlusr:
    	onlusr> ssh $n1p2 pkill -n iperf
    	onlusr> ssh $n1p3 pkill -n iperf
    	onlusr> ssh $n1p4 pkill -n iperf
    	onlusr> ssh $n1p5 pkill -n iperf
Each of the above pkill -n commands kills your most recently started process named iperf on the corresponding host.
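Equivalently, the four commands can be written as a loop (a minimal sketch, assuming the $n1p2-$n1p5 host variables are set in your environment as elsewhere in this tutorial):

	for h in $n1p2 $n1p3 $n1p4 $n1p5; do
	    ssh $h pkill -n iperf
	done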

Part C: Two TCP flows using separate queues at port 7

We can use the same setup for TCP traffic. Our setup will use the same forward path and use the reverse path for the ACK packets sent by the receivers; i.e., ACK packets will go to port 6, then out the link back into port 7, and finally to the senders. The three charts will look different from those for the UDP flows because of TCP's slow-start and congestion avoidance phases.

Steps (In Brief)

  1. Start the two TCP receivers
  2. Start the two TCP senders
  3. Stop the two TCP senders

Steps (In Detail)

Here are the steps in detail:

  1. Start the two TCP receivers in the background

    	onlusr> trcvrs-1nsp >& log &
    	onlusr> cat log
  2. Start the two TCP senders

    	onlusr> tsndrs-1nsp

    [[ bw-tcp-2q-chart.png ]]
  3. Stop the two TCP senders
    As in Part A, the sender script runs until it is killed. Leave the receivers running; Part D reuses them.

Part D: Delay TCP ACK packets at port 6

We delay ACK packets by 50 milliseconds by installing a delay plugin at egress port 6. One thing to keep in mind: although the delay plugin accepts control messages that allow the user to change the amount of delay, we will use the default delay in this example.

Steps (In Brief)

  1. Install a delay plugin at egress port 6
  2. Install an egress GM filter to direct packets to the delay plugin at port 6
  3. Start the two TCP senders
  4. Stop all of the TCP senders and receivers

Steps (In Detail)

  1. Install a delay plugin at egress port 6

    [[ p6_plugin_table-menu-resize.png ]]
  2. Install an egress GM filter to direct packets to the delay plugin at port 6

    [[ p6_plugin_table-resize.png ]]
  3. Start the two TCP senders

    	onlusr> tsndrs-1nsp
  4. Stop all of the TCP senders and receivers

    	onlusr> ssh $n1p2 pkill -n iperf
    	onlusr> ssh $n1p3 pkill -n iperf
    	onlusr> ssh $n1p4 pkill -n iperf
    	onlusr> ssh $n1p5 pkill -n iperf

 Revised: Fri, Jan 26, 2007 
