NPR Tutorial >> Filters, Queues and Bandwidth | TOC |
We give an overview of some of the basic features of iperf, a traffic generation tool that allows the user to experiment with different TCP and UDP parameters to see how they affect network performance. Full documentation can be found on the iperf web site.
A typical way to use iperf is to first start one iperf process
running in server mode as the traffic receiver, and then start another
iperf process running in client mode on another host as the
traffic sender.
In order to send a single UDP stream from n1p3 to n2p3,
we run iperf in server mode on n1p3 and iperf in
client mode on n2p3.
The iperf executable is located at /usr/local/bin/iperf
on every ONL host.
Since the directory /usr/local/bin is in every user's PATH
environment variable by default, you can run iperf
simply by entering the iperf command followed by any command-line arguments.
For example, to send UDP traffic at 350 Mbps for 10 seconds from n1p3 to n2p3 in the dumbbell setup above we would enter the following:
| Window | Command | Description |
|---|---|---|
| Window 1 | source /users/onl/.topology | Define environment variables such as $n1p3. Use /users/onl/.topology.csh if using a C-shell. (See The /users/onl/.topology File.) |
| | ssh $n2p3 | ssh to the iperf server host |
| | iperf -s -u | Run iperf as a UDP server: (-s) run as server, (-u) UDP |
| Window 2 | source /users/onl/.topology | Define host environment variables |
| | ssh $n1p3 | ssh to the client host |
| | iperf -c n2p3 -u -b 350m -t 10 | Run iperf as a UDP client: (-c n2p3) run as client with the server on n2p3, (-u) UDP, (-b 350m) 350 Mbps bandwidth, (-t 10) for 10 seconds |
Suppose that we use the same dumbbell configuration that we have been using.
At the end of a UDP traffic session, the iperf client sends out a special application-layer FIN packet signalling the end of transmission. The server responds to the FIN with a reply containing the statistics for the session. The client continues to resend this FIN packet every 250 milliseconds until either the server responds with its statistics or a total of 10 FIN packets have been sent. Since the server never closes its receive socket, it is possible for the server to receive FIN packets from a preceding session and mistake them for part of a new session. This situation leads to a "received out-of-order" message from the server.
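One way to avoid this stale-FIN confusion between runs (a workaround we suggest here, not something the tutorial prescribes) is to stop the old iperf server and start a fresh one before each new session, for example:

```shell
# Restart the iperf UDP server on the receiver host between sessions,
# so leftover FIN retransmissions from the previous run cannot be
# mistaken for part of a new session. The host variable and binary
# path assume the ONL setup described in this tutorial.
ssh $n2p3 pkill iperf
ssh $n2p3 /usr/local/bin/iperf -s -u &
```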
The figure (right) shows the resulting traffic chart. The 1.3 line is the bandwidth into port 1.3 (from n1p3), and the 2.3 line is the bandwidth out of port 2.3 (to n2p3). The 2.1 line is hidden behind the 2.3 line, and is the bandwidth from port 1.4 going into port 2.1. The chart clearly shows the effect of the bottleneck on the 350 Mbps traffic coming from n1p3.
The table below shows examples of useful variants of iperf.
In UDP bandwidth specifications, "10m" represents 10 Mbps;
'M' can also be used to signify 1,000,000, and 'K' and 'k'
both indicate 1,000.
The -n argument (total number of bytes) in conjunction with -l (length of
each datagram) sends a fixed number of datagrams of a
specified length, and is often used with small values for debugging.
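As a concrete check of these conventions, the suffix arithmetic can be sketched in shell (to_bps is a hypothetical helper for illustration, not part of iperf):

```shell
# Hypothetical helper mirroring the suffix convention described above:
# 'k'/'K' multiply by 1,000 and 'm'/'M' by 1,000,000.
to_bps() {
  case "$1" in
    *[kK]) echo $(( ${1%?} * 1000 )) ;;
    *[mM]) echo $(( ${1%?} * 1000000 )) ;;
    *)     echo "$1" ;;
  esac
}

to_bps 350m              # prints 350000000, i.e. 350 Mbps
echo $(( 8000 / 1000 ))  # -n 8000 with -l 1000 yields 8 datagrams
```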
| Command | Description |
|---|---|
| iperf -h | Help: show usage information |
| iperf -s -u -w 1m | Run an iperf UDP server with a 1 MB socket buffer (default is 108 KB) |
| iperf -c n1p3 -u -b 10m -t 10 | Run an iperf UDP client with n1p3 as the server. Send at 10 Mbps to n1p3 for 10 sec (1470-byte packets + 28 bytes of header) |
| iperf -c n1p3 -u -b 10m -l 1000 -n 8000 | Run an iperf UDP client with n1p3 as the server. Send at 10 Mbps to n1p3, a total of 8000 bytes, 1000 bytes per packet |
| iperf -s -w 16m | Run an iperf TCP server with a 16 MB receiver socket buffer |
| iperf -c n1p3 -t 10 | Run an iperf TCP client with n1p3 as the server for 10 sec |
Shell scripts can be used to generate the two UDP streams (n1p2 to n2p2 and n1p3 to n2p3) shown at right without having to open two windows and manually enter the commands to start the iperf servers and clients. Furthermore, we can start the clients at approximately the same time, avoiding the seconds of delay it would take to switch from one window to the other.
We can produce coordinated UDP streams by writing two shell scripts:
one launches multiple iperf servers, and the other launches the
corresponding iperf clients.
For example, the following Bourne shell script 2urcvrs-2npr
will launch two iperf UDP servers on the hosts $n2p2 and $n2p3
with 1 MB socket buffers:

```shell
#!/bin/sh
#
. /users/onl/.topology   # define env vars $n1p2, ... ('.' is the portable form of 'source' under /bin/sh)
ssh $n2p2 /usr/local/bin/iperf -s -u -w 1m &
ssh $n2p3 /usr/local/bin/iperf -s -u -w 1m &
```
The script is then run from the command line:

```shell
onlusr> 2urcvrs-2npr
```
After the iperf servers have been started, the clients can be started with another script. For example, the following Bourne shell script 2usndrs-2npr will launch two iperf UDP clients sending maximum-sized packets at 200 Mbps for 20 seconds each:
```shell
#!/bin/sh
#
. /users/onl/.topology   # define env vars $n2p2, ...
ssh $n1p2 /usr/local/bin/iperf -c n2p2 -u -b 200m -t 20 &
sleep 8
ssh $n1p3 /usr/local/bin/iperf -c n2p3 -u -b 200m -t 20 &
```
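Because both ssh commands are backgrounded with &, the client script itself exits immediately. If you want it to block until both clients finish (a variation we suggest here, not part of the tutorial's scripts), the standard pattern is to record each background PID and wait on them; in this sketch, sleep stands in for the ssh/iperf commands so it can run anywhere:

```shell
# Background two stand-in commands (sleep replaces 'ssh ... iperf -c ...'),
# record their PIDs, and block until both have finished.
sleep 1 &
pid1=$!
sleep 1 &
pid2=$!
wait "$pid1" "$pid2" && all_done=yes
echo "$all_done"   # prints: yes
```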
The figure (right) shows the resulting bandwidth chart. There are five lines:

| Line | Mbps |
|---|---|
| 1.3 (black) | in port 1.3 |
| 1.2 (blue) | in port 1.2 |
| 2.1 (green) | in port 2.1 |
| 2.2 (red) | out port 2.2 |
| 2.3 (fuchsia) | out port 2.3 |
We used the same configuration as the one we used earlier to chart the bandwidth of ping commands. The chart demonstrates the effect of traffic overload on the bottleneck link between ports 1.4 and 2.1, and confirms that the scripts are working properly.
Revised: Tue, Aug 26, 2008