NPR Tutorial >> Router Plugins
Suppose that you want to chart the value of a plugin variable. The RLI makes this easy through its monitoring menus. For example, the nstats plugin keeps track of the number of ICMP, TCP and UDP packets it receives by recording these values in special registers called PluginCounters, and these values can be charted through the Monitoring => PluginCounter menu item available in each NPR. This section begins with a description of PluginCounters and then shows how to chart the values stored in them, using the ICMP, TCP and UDP counters of the nstats plugin as an example.
The nstats plugin counts the number of ICMP, TCP and UDP packets that it receives. These three counts are stored in the first three of the four register counters available in each plugin ME for storing general counts. These register counter values are labeled as PluginCounters in the RLI and are visible to the user through the Monitoring => PluginCounter menu item available with each NPR. To the RLI user, the three counters of interest are known as PluginCounter 0, PluginCounter 1, and PluginCounter 2. (Note: These same counters can also be accessed through the Edit => Add Plugin Data to Graph menu item in the Plugin Table.) Although the figure (right) shows the nstats plugin loaded into ME 0, it can be loaded into any of the five plugin MEs.
The general concept of register counters was briefly described in
the tutorial page
NPR Tutorial => Packet Processing.
The register counters available to the plugin MEs are listed in
NPR Tutorial => Summary Information => Counters
as registers 38-57.
The 20 counters are evenly divided among the five plugin MEs.
(Aside: A plugin programmer refers to these counters through names that
have the form ONL_ROUTER_PLUGIN_x_CNTR_y where x denotes the ME
number and y denotes the counter number (i.e., x is in the range 0:4,
and y is in the range 0:3).)
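Since the 20 counters are evenly divided among the five MEs, and assuming that
each ME's four counters occupy consecutive registers, the global register
number follows directly from x and y. A minimal sketch (the shell helper is
ours, for illustration only):

    # register = 38 + 4*x + y, with x in 0..4 and y in 0..3
    plugin_counter_register() {
      echo $(( 38 + 4 * $1 + $2 ))
    }
    plugin_counter_register 0 0   # 38: ME 0, counter 0 (PluginCounter 0)
    plugin_counter_register 4 3   # 57: ME 4, counter 3 (the last register)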
We will use the dumbbell network shown right. The hosts n1p1, n1p2 and n1p3 on the left send ICMP, UDP and TCP traffic, respectively, to their counterparts on the right (i.e., n1p1 sends ICMP packets using ping to n2p1; n1p2 sends UDP packets using iperf to n2p2; and n1p3 sends TCP packets using iperf to n2p3). All forward traffic goes through the 3.415 Mbps bottleneck at port 1.4 and shares a reserved queue with a capacity of 300,000 bytes. To direct the traffic into this shared queue, there is a filter at each of the input ports attached to the senders.
Furthermore, we stagger the starting times of the three senders
so that the ICMP traffic from n1p1 starts first.
After four seconds, the UDP traffic from n1p2 starts, and then after
another four seconds, the TCP traffic from n1p3 starts.
The nstats plugin is installed in microengine 0 of NPR 2
(the NPR on the right),
and auxiliary filters are installed at input port 2.0 to direct
copies of the incoming traffic from NPR 1 to the plugin.
Specifically, one auxiliary (sampling) filter sends a 12.5% sample of TCP
packets to the plugin, and another auxiliary filter sends
all ICMP and UDP packets to the plugin.
This second filter is actually configured to match all packets,
but with a lower priority than the first filter, which matches
only TCP packets, so TCP packets are handled by the sampling filter alone.
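To see why the two filters do not conflict, here is a toy shell sketch of
the priority resolution; it models only the matching logic, not the actual
RLI filter mechanism:

    # The highest-priority filter that matches a packet handles it.
    classify() {
      case "$1" in
        tcp) echo "filter 1 (higher priority): 12.5% sample to plugin" ;;
        *)   echo "filter 2 (lower priority): copy to plugin" ;;
      esac
    }
    classify tcp    # matches the sampling filter first
    classify udp    # falls through to the catch-all filter
    classify icmp   # falls through to the catch-all filter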
We would like to plot the values of the three nstats packet counters. Suppose that we have already created an empty nstats counts chart using the Monitoring => Add Monitoring Display menu item in the main RLI window. Below is an example of how to plot the number of ICMP packets sent to the nstats plugin:
1. Select the Monitoring => PluginCounter menu item in NPR 2. An Add Parameter dialogue window appears.
2. Enter the plugin ME number, 0 in this case; we loaded the nstats plugin into ME 0 earlier.
3. Enter the counter number, 0; the ICMP packet count is maintained in PluginCounter 0 by the plugin. This field should be 1 for the TCP plot and 2 for the UDP plot.
4. Enter a chart label; it will appear in the nstats counts window.
The result is shown to the right.
The two figures below show the result of sending the traffic described earlier. We started the traffic by running the following script from the onlusr host:
    source /users/onl/.topology                              # get defs of external interfaces
    ssh $n1p1 ping -c 120 -s 1400 -i 0.2 n2p1 > /dev/null &  # icmp
    sleep 4
    ssh $n1p2 /usr/local/bin/iperf -c n2p2 -u -w 1m -t 20 &  # udp
    sleep 4
    ssh $n1p3 /usr/local/bin/iperf -c n2p3 -w 16m -t 0.01 &  # tcp
The chart on the left is what we got from running the traffic script. The chart on the right is the result of zooming in on the chart on the left (View => Zoom In From Selection).
The chart on the right shows the four-second stagger of the three flows' starting times and that the ICMP traffic lasts for 24 seconds, as expected (120 pings at 0.2-second intervals). The chart on the left shows that about 1705 UDP packets were sent in about 20 seconds and that the nstats plugin saw about 350 TCP packets. Furthermore, the ICMP and UDP plots are nearly linear, indicating that the traffic was sent at a near-constant rate.
The iperf UDP window (not shown) shows that 2.39 MB of UDP traffic was received and that 81 out of 1785 packets were lost. A rough calculation shows that the UDP packet count makes sense. Since the packet length is 1,470 bytes, the 2.39 MB received corresponds to 2.39 x 1024 x 1024 / 1470 bytes/pkt, or 1,704 packets. Also, if we send maximum-sized packets (about 1,500 bytes) at 1 Mbps (the iperf default UDP rate) for 20 seconds, the number of packets sent is roughly 1 Mbps x 20 sec / 12,000 bits/pkt, or about 1,700 packets.
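These two estimates are easy to reproduce; for instance, with bc (using the figures reported above):

    echo "2.39 * 1024 * 1024 / 1470" | bc    # ~1704 packets received
    echo "1000000 * 20 / 12000" | bc         # ~1666 packets sent at 1 Mbps for 20 sec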
The TCP packet count is not as straightforward to verify. The iperf TCP window (not shown) and the bandwidth chart of the sending traffic rates give us some idea of whether the TCP packet count makes sense.
The iperf TCP window showed that 3.99 MB of TCP traffic was received at an average rate of 2.32 Mbps. This 2.32 Mbps is close to the 2.415 Mbps of output port capacity left over after the 1 Mbps of UDP traffic (3.415 - 1 = 2.415 Mbps). But the 3.99 MB received by the server might seem strange, since we sent TCP traffic for only 0.1 second. The bandwidth chart suggests a partial explanation.
The chart on the left shows the bandwidths of the traffic coming from the senders (i.e., Monitoring => RXBYTE values), and the chart on the right shows a close-up of the region where the traffic began.
The TCP sending rate corresponds to the 1.3 rx curve; i.e., it is the traffic from n1p3. It appears that during the slow-start phase, the sending rate reaches about 5 Mbps even though the bottleneck is 3.415 Mbps. The sending rate is then throttled twice before it settles in at around 2.4 Mbps (i.e., the residual capacity of the bottleneck). About 2.4 Mbps of traffic is sent for about 15 seconds, or roughly 3,000 packets. The 3.99 MB received corresponds to about 2,846 maximum-sized packets. Since we sampled only 12.5% of these packets, we would expect about 356 TCP packets at the nstats plugin, which is what we saw earlier. Only an examination of tcpdump output would reveal that the sender did indeed send out 2,846 packets in the first 0.1 second and then spent the remaining 15 seconds retransmitting some of these packets.
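The same back-of-the-envelope check for the TCP numbers, again with bc:

    echo "3.99 * 1024 * 1024 / 1470" | bc    # ~2846 maximum-sized packets
    echo "2846 / 8" | bc                     # ~355-356 packets after the 12.5% (1/8) sample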
Revised: Fri, Oct 3, 2008