The ONL NPR Tutorial


Installing A Plugin

[[ 2tcp-flows-resize.png Figure ]]

This page describes how to install a predefined plugin, using the delay plugin as the exemplar. In the example, there are two TCP flows with senders shown on the left and receivers on the right, such that n1p2 sends packets to n2p2 and n1p3 sends to n2p3. All data packets are maximum length (1470 bytes). Output port 1.4 will be a 10 Mbps bottleneck, fed from two 150,000-byte queues (queue 64 for n1p2 traffic and queue 65 for n1p3 traffic). We will delay ACK packets arriving at NPR 2 (right) by 50 msec (milliseconds). The detailed steps for a similar example are given in Examples => TCP With Delay.

Recall that the steps involved in installing a plugin are:

1. Load the plugin code into one of the router's plugin microengines.
2. Create filters at the relevant input ports to direct the selected traffic to the plugin.
3. Test that the plugin behaves as expected.

We will load the delay plugin in NPR 2's plugin microengine 0. Then, we will install filters at ports 2.2 and 2.3 to direct the ACK packets to the delay plugin and then on to queue 64 at output port 2.1.

Below are two figures that schematically show the key components that govern the two TCP flows. The first figure shows the forward path taken by the data packets as they leave the two senders. The second figure shows the reverse path taken by the returning ACK packets as they leave the two receivers.

[[ 2tcp-data-path-resize.png Figure ]]

In the forward path, we install a filter F at each of the input ports 1.2 and 1.3 to direct packets to reserved queues 64 and 65, respectively, at output port 1.4 (the bottleneck). We will configure the two queues so that they get an equal share of the 10 Mbps output port capacity. Each of the two reserved queues will be configured to hold at most 150,000 bytes. When the packets from both flows reach port 2.1, the routing table R forwards them to their respective output ports (2.2 and 2.3), where they are queued in default datagram (non-reserved) queues for transmission to n2p2 and n2p3. All ports have a capacity of 1 Gbps except the bottleneck port.

[[ 2tcp-ack-path-resize.png Figure ]]

In the reverse path, we install a filter F at each of the input ports 2.2 and 2.3. The filters will be configured to direct all packets (TCP, ICMP, ARP, etc.) to the delay plugin D and then on to reserved queue 64 at output port 2.1. The delay plugin has been programmed with a default delay of 50 msec. For the returning ACK packets we use default datagram queues at all output ports except 2.1, and we accept the default port capacity of 1 Gbps at every output port.

In the explanation below, we show only the installation of the plugin and the associated filters and assume that you know how to install the other filters, configure the bottleneck port and generate the TCP traffic. We begin with the installation of the delay plugin.

 

Load The Pre-Defined Delay Plugin Into NPR 2, Microengine 0

There are pre-defined (standard) plugins available to the user. The code for these plugins is located in the subdirectories of the directory ~onl/npr-plugins/. For example, the delay plugin source code is located at ~onl/npr-plugins/delay/delay.c. Most of these plugins were written to demonstrate various capabilities of plugins. As of this writing, the pre-defined plugins are:

Details of these plugins are documented in README files stored in their source code directories (e.g., ~onl/npr-plugins/delay/README). They are also described later in this tutorial.
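To give a feel for what the delay plugin does, here is a minimal, simplified C sketch of the idea: hold each diverted packet for a fixed time and then send it on. This is not the actual plugin code (see ~onl/npr-plugins/delay/delay.c for that), and the names used here (plugin_handle_packet, plugin_timer_tick, send_to_queue, and the held_pkt_t type) are hypothetical placeholders, not the real NPR plugin API.

    /*
     * Illustrative sketch only; the real delay plugin lives in
     * ~onl/npr-plugins/delay/delay.c and uses the NPR plugin API.
     */
    #include <stdint.h>

    #define DELAY_USEC 50000   /* default delay: 50 msec */
    #define MAX_HELD   1024    /* capacity of the internal delay queue */

    typedef struct { void *pkt; uint64_t release_at; } held_pkt_t;

    static held_pkt_t held[MAX_HELD];
    static int head = 0, tail = 0, count = 0, max_count = 0;

    /* Called when a filter diverts a packet to the plugin:
     * stamp it with the time at which it may be released. */
    void plugin_handle_packet(void *pkt, uint64_t now_usec)
    {
        if (count == MAX_HELD)          /* delay queue full: drop the packet */
            return;
        held[tail].pkt = pkt;
        held[tail].release_at = now_usec + DELAY_USEC;
        tail = (tail + 1) % MAX_HELD;
        if (++count > max_count)        /* statistic: most packets ever held */
            max_count = count;
    }

    /* Called periodically: release every packet whose delay has expired,
     * sending it on toward the queue the filter specified
     * (queue 64 at output port 2.1 in this example). */
    void plugin_timer_tick(uint64_t now_usec, void (*send_to_queue)(void *pkt))
    {
        while (count > 0 && held[head].release_at <= now_usec) {
            send_to_queue(held[head].pkt);
            head = (head + 1) % MAX_HELD;
            count--;
        }
    }

The max_count variable in the sketch mirrors the kind of statistic mentioned at the end of this page (the maximum number of packets queued in the delay queue).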

Other plugins that should come on-line soon demonstrate how to: modify a packet; drop a packet; drop a packet and delay undropped packets; send packet headers to an endhost; emulate packet transmission delay; and transmit packets in priority order.

To load the pre-defined (standard) delay plugin, you must:

 

Create a Filter

Now we need to install filters at input ports 2.2 and 2.3 to direct packets to the delay plugin. To do this for input port 2.2, we need to:

 

Test The Delay Plugin

Now we test that packets sent from n1p2 to n2p2 and from n1p3 to n2p3 will be delayed by 50 msec by sending some ping packets from the senders to the receivers.

[[ ping-delay-resize.png Figure ]]
The figure (above) shows that we logged into n1p2 (onl032) and sent five ping packets to n2p2. The RTT of the first packet was 226 msec, much larger than the expected 50 msec. The extra time was because an ARP request-reply exchange had to complete before the first packet could be forwarded to its destination. But the RTT of the remaining four packets is 50.1 msec, very close to the desired delay considering that the RTT would be in the tenths of a millisecond without the delay plugin.

The next two figures show the bandwidth and queue length charts when we send TCP traffic using iperf. The second flow, from n1p3, starts about 4 seconds after the one from n1p2. The "1.2 rx" and "1.3 rx" curves are the bandwidths seen at input ports 1.2 and 1.3, and the "2.2 tx" and "2.3 tx" curves are the bandwidths seen at output ports 2.2 and 2.3, respectively. [[ delay50-bw.png Figure ]]

Both flows start by doubling the send window every RTT as they go through their slow-start phase. This phase ends when several packets are dropped. Then, fast retransmission/fast recovery begins and perhaps a timeout eventually occurs. Finally, another slow-start period starts but with an ssthresh value that limits the send window to something near the available transmission capacity. When the second flow contends for the 10 Mbps bottleneck capacity, each flow gets about 5 Mbps (an equal share of the transmission capacity). When the first flow finishes, the second flow gets the entire capacity of the bottleneck link.

Note that since our monitoring period is 1 second, the charts won't show the fine details of the slow-start phase. But a rough calculation will show that the length of the slow-start period is about right. Since the senders are sending maximum-sized packets (1470 bytes plus 40 bytes of headers), each packet is about 12,000 bits. That means the input rate in the first RTT is about (12,000 bits / 50 msec) = 0.24 Mbps. This rate doubles every RTT, so in round k the effective transmission rate of the sender will be about 0.24 x 2^k Mbps. Since the bottleneck link is 10 Mbps, we seek the smallest k such that:

	0.24 x 2^k Mbps >= 10 Mbps
So, we expect that it will take just under 6 RTTs, or about 300 msec, to experience the first packet drop. Since 300 msec is much less than our monitoring period of 1 second, we shouldn't be surprised that the bandwidth chart doesn't show the first flow reaching 10 Mbps during its first slow-start phase. But note that it does reach that rate in its second slow-start phase and remains there until the second flow contends for the bottleneck capacity.
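If you want to check this back-of-the-envelope estimate, the short C program below reproduces it using the same numbers (50 msec RTT, roughly 12,000-bit packets, 10 Mbps bottleneck). It is just a sketch of the arithmetic above, not part of the plugin.

    #include <stdio.h>

    int main(void)
    {
        const double rtt_ms = 50.0;          /* RTT imposed by the delay plugin */
        const double pkt_bits = 12000.0;     /* ~1470 bytes payload + ~40 bytes headers */
        const double bottleneck_mbps = 10.0; /* capacity of the bottleneck port */

        /* One packet per RTT in the first round: 12,000 bits / 50 msec = 0.24 Mbps. */
        double rate_mbps = pkt_bits / (rtt_ms / 1000.0) / 1e6;

        int k = 0;
        while (rate_mbps < bottleneck_mbps) {   /* slow start doubles the rate each RTT */
            rate_mbps *= 2.0;
            k++;
        }
        printf("bottleneck rate reached after about %d RTTs (~%.0f msec)\n",
               k, k * rtt_ms);
        return 0;
    }

Running it prints that the bottleneck rate is reached after about 6 RTTs, roughly 300 msec, matching the estimate above.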
[[ delay50-qlen.png Figure ]]

The queue length charts monitor the lengths of queues 64 and 65 at output port 1.4. Both queues were configured to have a capacity (threshold) of 150,000 bytes. We see the classic saw-tooth pattern in the queue lengths during the congestion avoidance phase, where the sender windows increase by one packet every RTT. When the senders detect a packet drop, they cut their window in half and begin probing for bandwidth again.
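The saw-tooth can be reproduced with a toy model: grow the congestion window by one packet per RTT and halve it whenever the bottleneck queue exceeds its threshold. The sketch below assumes round numbers (1500-byte packets, a 150,000-byte threshold, and about 62,500 bytes kept "in flight" by a 10 Mbps link with a 50 msec RTT) and ignores slow start and the second flow, so it only illustrates the shape of the pattern, not the exact chart.

    #include <stdio.h>

    int main(void)
    {
        const int pkt_bytes = 1500;          /* roughly one maximum-length packet */
        const int queue_threshold = 150000;  /* queue 64/65 capacity in bytes */
        const int pipe_bytes = 62500;        /* 10 Mbps x 50 msec carried "in flight" */

        int cwnd_pkts = 70;                  /* start partway through a cycle */
        for (int rtt = 0; rtt < 200; rtt++) {
            int queued = cwnd_pkts * pkt_bytes - pipe_bytes;  /* excess fills the queue */
            if (queued < 0)
                queued = 0;
            if (queued > queue_threshold) {                   /* drop detected */
                printf("RTT %3d: queue reached %d bytes, cwnd %d -> %d packets\n",
                       rtt, queued, cwnd_pkts, cwnd_pkts / 2);
                cwnd_pkts /= 2;              /* multiplicative decrease */
            } else {
                cwnd_pkts += 1;              /* additive increase: one packet per RTT */
            }
        }
        return 0;
    }

Each printed line marks a peak of the saw-tooth: the queue fills to its threshold, a drop occurs, the window is halved, and the slow climb begins again.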

If we had also monitored packet drops at port 1.4 as described in Monitoring Queue Length and Packet Drops, we would have seen drops correlated with the queues reaching their capacities of 150,000 bytes.

What if we want a delay different than 50 msec? The next page describes how to configure the delay plugin for a different delay value and how to get counts such as the maximum number of packets queued in the delay queue.


Revised: Thu, Sep 25, 2008