We first demonstrate the interaction of two 200 Mbps UDP flows going through a 300 Mbps bottleneck link. Then we repeat the experiment with TCP flows, and finally with TCP flows that have 50 millisecond delays along the ACK path. In all cases, we use the one-NSP configuration shown below, with two traffic sources coming from n1p2 and n1p3, going through the bottleneck link at port 7, and finally on to n1p4 and n1p5 respectively. With TCP traffic, the ACK path is the reverse of the path taken by the data packets.
The table below shows the shell scripts used to start the traffic senders and receivers in this example. Before beginning, you should copy these files to the directory on the ONL user host where you store your executables and shell scripts (e.g., ~/bin/). You can do that either by selecting each link below to get the files and then copying them to the appropriate place, or by copying them directly from the directory /users/onl/export/Examples/Filters,_Queues_and_Bandwidth/ after SSHing to the ONL user host onl.arl.wustl.edu (a sketch of the latter approach appears after the table).
File | Link to File |
---|---|
1-NSP UDP Traffic Scripts | urcvrs-1nsp , usndrs-1nsp |
1-NSP TCP Traffic Scripts | trcvrs-1nsp , tsndrs-1nsp |
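For example, a minimal sketch of the copy-from-onlusr approach, assuming you keep your scripts in ~/bin/ and that ~/bin/ is on your PATH:
onlusr> mkdir -p ~/bin
onlusr> cp /users/onl/export/Examples/Filters,_Queues_and_Bandwidth/*-1nsp ~/bin/    # the four traffic scripts
onlusr> chmod +x ~/bin/*-1nsp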
This example has four parts. We begin with a basic experiment involving two UDP flows passing through a bottleneck link at port 7 and then show three variations involving packet scheduling, TCP traffic, and packet delay emulation. It assumes that you have already created the SSH tunnel required by the RLI and started the RLI on your client host.
Packets from UDP flows contend for the bottleneck output link in a FIFO manner.
Packets from the two UDP flows are placed in separate queues, where each flow gets its share of the output link capacity. The user can specify the shares through the Queue Table at port 7. Separating the flows also smooths out the bandwidth obtained by each UDP flow.
We switch to the TCP traffic scripts and show the bandwidth behavior that is characteristic of TCP's congestion avoidance algorithm.
We delay the TCP ACK packets by installing a delay plugin and an egress GM filter at port 6.
Here are the major steps in this example:
Create the 1-NSP dumbbell topology shown on The_Remote_Laboratory_Interface page.
When done, the links will become solid and the host icons will turn to a dark blue.
Ports | Next Hop |
---|---|
2, 3 | 7 |
4, 5 | 6 |
6, 7 | Default Forwarding |
Traffic from the sources at ports 2 and 3 should go to port 7 where it loops back into port 6. From port 6, traffic goes to the appropriate destination (port 4 or 5). Return TCP traffic (ACK packets) takes the reverse path. We create a single entry in the route tables at ports 2-3 to direct traffic to port 7 and a single entry in the route tables at ports 4-5 to direct return traffic to port 6. At ports 6 and 7, we create default entries.
For port 2:
This step can be postponed until all route tables have been specified.
For ports 3-5:
We will put all packets arriving at the egress side of port 7 into queue 300, so packets from both UDP flows will be serviced from queue 300 in FIFO order. Because packets do not arrive in an alternating manner even when sent at the same rate, the bandwidth of each flow will vary over time. If we did not install this GM filter, the packets would go into one or two of the 64 datagram queues based on a hash function over some of the bits in their IP headers. By sending all packets to one queue, we make it easier to observe the queue length but harder to guarantee bandwidth.
This CIDR address will match any IP address.
Although we know that we will be sending UDP packets, we can use this same filter if we send TCP packets or ICMP (ping) packets.
Egress queues have two parameters. The threshold is the size of the queue in bytes; any packet that cannot fit into the queue is dropped. The quantum is the Deficit Round Robin (DRR) credit parameter, which determines the queue's share of the output link capacity. Since all packets will be put into queue 300, this parameter is irrelevant at this point.
You can use this configuration file as a baseline for variations to the first experiment. Later, you can add monitoring charts to this same configuration file or save them in separate monitoring files which can be loaded as needed.
We will now create three charts: 1) Bandwidth, 2) Queue Length, and 3) Packet Drops. This step shows you how to create the Bandwidth chart which will show the traffic bandwidth coming from the two sources and the traffic bandwidth going to the two receivers by monitoring the counters inside the ingress side of the ATM switch.
The VOQ Bandwidth chart will appear.
This moves the monitoring parameter focus to this chart. Since there is only one chart, this step is not strictly necessary.
The Add Parameter dialogue box will appear.
We will accept the default 1 second polling rate (period). The VOQ Bandwidth chart will appear with one line showing no traffic and labeled IPP 2 BW to OPP 7. We will now change the label to the more meaningful name 1.2 to 1.7 indicating bandwidth from NSP 1, port 2 to NSP 1, port 7.
A dialogue box will appear.
This dialogue box indicates that a rate (bandwidth) is being computed from reading the number of ATM cells going from port 2 to port 7 of the ATM switch every 1.0 second.
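As a rough sanity check of the numbers to expect (a sketch only; it assumes the chart converts whole 53-byte ATM cells to bits, which may differ slightly from the RLI's exact computation), a 200 Mbps flow corresponds to roughly 470,000 cells per one-second polling interval:
onlusr> echo $(( 200000000 / (53 * 8) ))    # about 471698 cells per second corresponds to 200 Mbps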
This will save the monitoring configuration to a file separate from the topology file created earlier. This approach allows us to dynamically combine different monitoring options with the saved topology file. Alternatively, you could have incorporated the monitoring configuration into the topology file by selecting File => Save in the RLI window.
We monitor the congestion (queueing) at egress port 7 by tracking the length of queue 300.
The Queue Length chart will appear.
The P7Q300 plot will appear in the Queue Length chart.
We monitor the number of packets arriving at the FPX at the egress side of port 7 that are being dropped.
The P7Counter66 dialogue box will appear.
This dialogue box indicates that the rate (or absolute number) of packets being dropped at the egress side of the FPX at port 7 is read every 1.0 second.
We need to start iperf UDP servers at n1p4 and n1p5 to act as receivers. These receivers will run as background processes until we kill them. After starting the receivers, we can start the two iperf UDP clients to act as the traffic senders. We will use the urcvrs-1nsp shell script to start both receivers and the usndrs-1nsp shell script to start both senders. Furthermore, we will run these scripts on the onlusr host, which means that they will remotely launch the receivers and senders.
remote> ssh onl.arl.wustl.edu
onlusr> urcvrs-1nsp >& log &
onlusr> cat log
The UDP iperf servers start running and wait for UDP iperf traffic on the internal network interfaces n1p4 and n1p5 respectively. The output shows that both servers are listening on UDP port 5001.
onlusr> source ~onl/.topology    # use .topology.csh if running a c-shell
onlusr> ssh $n1p4 ps -lC iperf
onlusr> ssh $n1p5 ps -lC iperf
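For reference, a hypothetical sketch of roughly what a receiver script like urcvrs-1nsp does (assuming iperf's default UDP server behavior; the actual script may differ):
# start a UDP iperf server on each receiving host (default port 5001)
source ~onl/.topology
ssh $n1p4 'iperf -s -u' &
ssh $n1p5 'iperf -s -u' &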
Now that the two receivers are running, we can start sending traffic from the two senders at n1p2 and n1p3. We will run the usndrs-1nsp script from onlusr:
onlusr> usndrs-1nsp
The sender script staggers the starting time of the two senders to better distinguish between the two flows. The VOQ Bandwidth chart shows that the second sender starts about 8 seconds after the first one has started. The chart also shows that both senders are transmitting at about 200 Mbps (1.2 to 1.7 and 1.3 to 1.7). The 1.6 to 1.4 and 1.7 to 1.5 plots show that the combined flow rate is about 300 Mbps which is the bottleneck output rate that we set in the Queue Table. The output rate from port 7 of both flows varies during the contention period since the interpacket times for each flow are not constant. There will be short time periods when one flow gets more bandwidth than the other. The Queue Length chart shows that during the period that both flows are active, the queue length does reach its capacity of 150,000 bytes.
The UDP iperf clients start running on the two hosts $n1p2 and $n1p3. Their output appears (perhaps intermixed) in the same window since the two sender processes were launched by the same process (the shell reading the script).
The output shows that both clients have connected to their respective servers (n1p2 to n1p4 and n1p3 to n1p5) and have sent UDP traffic for 20 seconds at 200 Mbps. The output also shows that both servers have reported their final statistics back to their respective client processes. The first server reported that it received packets at a rate of 159 Mbps and observed that 22% of the expected packets were never received. The second server reported that it received packets at a rate of 180 Mbps and observed that 11% of the expected packets were never received. Both flows were able to transmit at higher than 150 Mbps (half of the link capacity) because each could transmit at 200 Mbps during the portion of its 20-second transmission period when the other flow was not active.
The sender shell script periodically starts new UDP iperf senders and will continue indefinitely unless the user kills the sender script.
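For reference, a hypothetical sketch of roughly what a sender script like usndrs-1nsp does (the staggering interval, rate, and duration are taken from the behavior described above; the actual script may differ):
# repeatedly start two staggered 200 Mbps, 20-second UDP iperf clients
source ~onl/.topology
while true; do
  ssh $n1p2 "iperf -c $n1p4 -u -b 200M -t 20" &
  sleep 8                                        # stagger the second sender by ~8 seconds
  ssh $n1p3 "iperf -c $n1p5 -u -b 200M -t 20" &
  wait
done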
The two UDP iperf receivers are still running on hosts $n1p4 and $n1p5.
This part repeats the experiment but with the two flows going to different egress queues at port 7 (queues 300 and 301). The packet scheduler will divide the 300 Mbps link capacity at port 7 according to the quantum values in the Queue Table. In the simplest case where both flows get an equal share of the link capacity, both flows will get 150 Mbps during congestion periods but with less variation than when both flows were placed into the same queue.
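A back-of-the-envelope check of that equal split (a sketch of the proportional-share arithmetic, assuming the quantum value of 2,048 configured below for both queues; DRR's exact behavior also depends on packet sizes):
onlusr> echo $(( 2048 * 300 / (2048 + 2048) ))    # each queue's share = quantum/sum * 300 Mbps = 150 Mbps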
Here are the major steps in this example:
We will direct packets from n1p2 to queue 300 and those from n1p3 to queue 301.
This CIDR address will match the IP address of n1p2.
This CIDR address will match the IP address of n1p3.
We configure queue 301 to have the same properties as queue 300: a threshold of 150,000 bytes and a quantum of 2,048.
The P7Q301 plot will appear in the Queue Length chart.
onlusr> usndrs-1nsp
The VOQ Bandwidth chart shows that the 1.6 to 1.4 and 1.7 to 1.5 plots have, as before, a combined flow rate of about 300 Mbps. But now, the flow rates are identical with no variation over time during the congestion period. The Queue Length chart shows that during the period that both flows are active, the queue length of both queues reaches their capacity of 150,000 bytes.
In preparation for repeating the experiment with TCP instead of UDP traffic, we need to terminate all of our iperf processes. The following sequence of commands can be put into a shell script and run from onlusr:
onlusr> ssh $n1p2 pkill -n iperf
onlusr> ssh $n1p3 pkill -n iperf
onlusr> ssh $n1p4 pkill -n iperf
onlusr> ssh $n1p5 pkill -n iperf
The above pkill commands will kill your most recently started process named iperf on each host.
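For example, a minimal sketch of such a kill script (hypothetical name kill-iperf-1nsp), run from onlusr:
#!/bin/bash
# kill the most recently started iperf process on each end host
source ~onl/.topology            # use .topology.csh if running a c-shell
for h in $n1p2 $n1p3 $n1p4 $n1p5; do
  ssh $h pkill -n iperf
done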
We can use the same setup for TCP traffic. Our setup will use the same forward path and use the reverse path for ACK packets sent by the receivers; i.e., ACK packets will go to port 6, then out the link back into port 7, and finally to the senders. The three charts will look different than those for UDP flows because of TCP's slow-start and congestion avoidance phases.
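As with the UDP scripts, a hypothetical sketch of what the TCP traffic scripts might contain (assuming default iperf TCP options; the actual scripts may differ):
# trcvrs-1nsp (sketch): TCP iperf servers on the receivers
source ~onl/.topology
ssh $n1p4 'iperf -s' &
ssh $n1p5 'iperf -s' &
# tsndrs-1nsp (sketch): staggered 20-second TCP iperf clients on the senders
ssh $n1p2 "iperf -c $n1p4 -t 20" &
sleep 8
ssh $n1p3 "iperf -c $n1p5 -t 20" &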
Here are the major steps in this example:
onlusr> trcvrs-1nsp >& log &
onlusr> cat log
onlusr> tsndrs-1nsp
The VOQ Bandwidth chart shows the flows jockeying back and forth for bandwidth as they go through their TCP congestion avoidance algorithm: as one flow backs off, the other flow increases its transmission rate. But on average, each flow gets about 150 Mbps during the congestion period.
The Queue Length chart can be divided into two types of periods: no congestion and congestion. In the middle is the congestion period, when both flows are contending for bandwidth. On either side of the congestion period, there is only one flow, so that flow will consume the entire link capacity at port 7 and back off only enough to keep the queue from overflowing. When packets begin arriving from n1p3 (the second flow), packets from the first flow will get dropped, since queue 300 now gets only half of its original service rate and the queue is already full. During this period, flow 2 will be increasing its transmission rate, thus building up the length of its queue.
The packet drop chart shows the initial spike of drops due to the first flow ramping up to the link rate during its slow-start phase and then backing off when it encounters packet drops. Then, when the second flow starts, it too ramps up rapidly but will detect packet drops earlier because its link share is 150 Mbps instead of 300 Mbps. Although the Queue Length chart doesn't show the queues hitting the limit of 150,000 bytes during this middle period, it would if the queue length counters were sampled more frequently.
The client report shows that both TCP flows received about 200 Mbps of bandwidth during the 20 second interval (206 Mbps and 195 Mbps).
The two TCP iperf receivers are still running on hosts $n1p4 and $n1p5.
We delay ACK packets by 50 milliseconds by installing a delay plugin at egress port 6. Here are a couple of things to keep in mind:
The plugin table for port 6 will appear.
A partial plugin description for the pdelay plugin will appear.
This number will be used as a tag to bind to a GM filter. An acceptable number must fall into the range 8-127 because these are the IDs associated with the queues going to the SPC.
The spc qid field value should be the same as the value in the 'linked to spc qid' field of the Plugins dialogue box. The spc qid field indicates the queue in which to place a matching packet. Packets in any queue numbered in the range 8-127 are sent to the SPC.
Alternatively, if we had entered plugin instead of 8, the RLI would automatically provide a QID starting at 8. Note that the qid field is automatically computed by the RLI and is always 128 more than the value in the spc qid field (e.g., an spc qid of 8 yields a qid of 136).
After the commit finishes, the instance field in the plugin dialogue box will contain a non-negative integer (starting from 0) indicating the plugin instance number.
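Before restarting the senders, you can optionally sanity-check the added delay with a quick round-trip measurement (a sketch; it assumes the GM filter at port 6 matches the ping reply traffic on the reverse path, so the RTT reported from a sender host should rise by roughly 50 ms once the plugin is bound):
onlusr> ssh $n1p2 ping -c 5 $n1p4    # RTT should be about 50 ms higher than before the plugin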
onlusr> tsndrs-1nsp
onlusr> ssh $n1p2 pkill -n iperf
onlusr> ssh $n1p3 pkill -n iperf
onlusr> ssh $n1p4 pkill -n iperf
onlusr> ssh $n1p5 pkill -n iperf
Revised: Fri, Jan 26, 2007