Open Network Laboratory

A Resource for Networking Researchers and Educators


When citing ONL, please use the following citation:
A Remotely Accessible Network Processor-Based Router for Network Experimentation, by Charlie Wiseman, Jonathan Turner, Michela Becchi, Patrick Crowley, John DeHart, Mart Haitjema, Shakir James, Fred Kuhns, Jing Lu, Jyoti Parwatikar, Ritun Patney, Michael Wilson, Ken Wong and David Zar. In Proceedings of ANCS, November 2008.

We are always accepting new users.
Click on the Get an account link in the sidebar and follow the instructions.

The Open Network Laboratory is a resource for the networking research and educational communities, designed to enable experimental evaluation of advanced networking concepts in a realistic working environment. The National Science Foundation has supported ONL through grants CNS 0230826, CNS 05551651 and MRI 12-29553. The laboratory is built around open-source, extensible, high performance routers that have been developed at Washington University, and which can be accessed by remote users through a Remote Laboratory Interface (RLI). The RLI allows users to configure the testbed network, run applications and monitor those running applications using the built-in data gathering mechanisms that the routers provide. The RLI provides support for data visualization and real-time remote displays, allowing users to develop the insights needed to understand the behavior of new capabilities within a complex operating environment.

The testbed contains two types of routers: Network Processor-based Routers (NPRs) and Software Routers (SWRs). The NPRs are built around high-performance Network Processor (NP) server blades in an ATCA chassis (ATCA stands for Advanced Telecommunications Computing Architecture and is a widely supported industry standard for advanced networking subsystems). The SWRs are hosted on standard Dell servers running the Ubuntu operating system and come in a variety of configurations. These technologies enable researchers working in this environment to evaluate their ideas in a much more realistic context than can be provided by PC-based routers using commodity hardware and operating systems tailored to the needs of desktop computing. Researchers seeking to transfer their ideas to commercial practice need to be able to demonstrate those ideas in a realistic setting. The Open Network Laboratory provides such a setting, allowing systems researchers to evaluate and refine their ideas, and then to demonstrate them to those interested in moving the technology into new products and services.

The organization of the ONL is shown to the right. The facility has 36 extensible gigabit routers: 14 NPRs and 22 SWRs. An NPR has five ports, and the SWRs come in a variety of configurations (for example: 5 1G interfaces, 2 10G interfaces, 16 1G interfaces, or 8 1G interfaces). These routers can be linked together in a variety of network topologies, using a central Virtual Network Switch (VNS), which serves as an electronic patch panel. The facility also includes over 100 computers, which serve as end systems. Some of these are connected to their routers through gigabit Ethernet subnetworks and others are connected to the VNS, to allow flexible connection of hosts to routers. ONL also includes multi-core servers which support Virtual Machines (VMs) for end-user use.

The figure below shows the user interface. New configurations can be built by instantiating routers and hosts and connecting them together graphically. Routing tables can be configured through pull-down menus accessed by clicking on router ports. Packet filters and custom software plugins can be specified in much the same way. Once a configuration has been created, it can be saved to a file for later use. When a user is conducting an experiment, the configuration is submitted to the ONL management server, which generates the low-level control messages to configure the various system components to realize the specified configuration.
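For a sense of what the management server works from, a configuration can be thought of as a small graph of nodes and links. The C sketch below illustrates that idea only; it is not the RLI's actual file format or control-message protocol, and all names in it are illustrative.

    /* Conceptual sketch only: an experiment configuration viewed as a graph
     * of nodes and links.  This is not the RLI's on-disk format or the
     * management server's control protocol; names are illustrative. */
    #include <stdio.h>

    enum node_type { NPR_NODE, SWR_NODE, HOST_NODE };

    struct node { enum node_type type; const char *name; };
    struct link { int a, b; };              /* indices into the node table */

    int main(void)
    {
        struct node nodes[] = {
            { NPR_NODE,  "npr0"  },
            { HOST_NODE, "host0" },
            { HOST_NODE, "host1" },
        };
        struct link links[] = { { 0, 1 }, { 0, 2 } };  /* both hosts attach to npr0 */

        /* The management server walks a description like this and emits the
         * low-level control messages that realize it on the testbed. */
        for (unsigned i = 0; i < sizeof links / sizeof links[0]; i++)
            printf("link: %s <-> %s\n",
                   nodes[links[i].a].name, nodes[links[i].b].name);
        return 0;
    }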

The graphical interface also serves as a control mechanism, allowing access to various hardware and software control variables and traffic counters. The counters can be used to generate charts of traffic rates or queue lengths as a function of time, to allow users to observe what happens at various points in the network during their experiment and to allow them to document the results of the experiment for presentation and publication. An example of such a chart is shown to the right.
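As a rough illustration of how such a chart is produced, the C sketch below periodically samples a counter and turns successive deltas into a per-second rate. The read_counter() function is a hypothetical stand-in, not part of the ONL software; here it just simulates a growing counter so the sketch runs on its own.

    /* Conceptual sketch (not the RLI's actual code): converting periodic
     * counter samples into a rate for a real-time chart. */
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in for whatever call fetches a port's packet
     * counter; here it simply simulates a counter that grows over time. */
    static uint64_t read_counter(int port)
    {
        static uint64_t fake = 0;
        (void)port;
        return fake += 12345;
    }

    int main(void)
    {
        uint64_t prev = read_counter(0);
        for (;;) {
            sleep(1);                              /* one-second polling interval */
            uint64_t cur = read_counter(0);
            printf("port 0: %llu pkts/s\n",
                   (unsigned long long)(cur - prev)); /* chart point = delta per interval */
            prev = cur;
        }
    }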

The NPR routers are built around Intel's IXP 2800 network processor. A single NPR takes care of standard router tasks such as route lookup, packet classification, queue management and traffic monitoring. A plugin environment (with significant processing and memory resources) and API makes it possible to extend the NPR's basic functionality with custom features running at wire-speed.
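To make the route-lookup task concrete, here is a minimal longest-prefix-match sketch in ordinary C. The NPR itself offloads this lookup to a TCAM (described below), so the linear scan here is for illustration only and the table contents are made up.

    /* Conceptual sketch of longest-prefix-match route lookup: among all
     * entries whose masked prefix matches the destination, the one with
     * the longest mask wins.  Not the NPR's actual implementation. */
    #include <stdint.h>
    #include <stdio.h>

    struct route {
        uint32_t prefix;    /* network address (host byte order)      */
        uint32_t mask;      /* prefix mask, e.g. 0xFFFFFF00 for a /24 */
        int      out_port;  /* next-hop port on the router            */
    };

    static int lookup(const struct route *tbl, int n, uint32_t dst)
    {
        int best = -1, best_len = -1;
        for (int i = 0; i < n; i++) {
            if ((dst & tbl[i].mask) == tbl[i].prefix) {
                int len = __builtin_popcount(tbl[i].mask);  /* prefix length */
                if (len > best_len) { best_len = len; best = tbl[i].out_port; }
            }
        }
        return best;        /* -1 means no matching route */
    }

    int main(void)
    {
        struct route tbl[] = {
            { 0xC0A80000, 0xFFFF0000, 1 },   /* 192.168.0.0/16 -> port 1 */
            { 0xC0A80100, 0xFFFFFF00, 2 },   /* 192.168.1.0/24 -> port 2 */
        };
        printf("out port: %d\n", lookup(tbl, 2, 0xC0A80105)); /* 192.168.1.5 -> 2 */
        return 0;
    }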

The IXP 2800 is designed specifically for rapid development of high-performance applications and systems. There are 16 multi-threaded MicroEngine (ME) cores which are responsible for packet processing. Five of these MEs are reserved for plugins. Each ME has eight hardware thread contexts (i.e., eight distinct sets of registers) which facilitate the hiding of memory latencies. Each ME also has a small hardware FIFO (next neighbor ring) connected to one other ME, enabling fast pipelined activity. A block diagram of the IXP 2800 appears at right.
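The next-neighbor rings are what make the pipelined organization cheap: each stage simply pushes packet handles into a small FIFO read by its downstream neighbor. The fragment below simulates that pattern in plain C; it is not microengine code, and the ring size and names are illustrative.

    /* Conceptual sketch of a next-neighbor-style ring: one stage hands
     * packet handles to the next through a small FIFO. */
    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 8                   /* next-neighbor rings are small */

    struct nn_ring {
        uint32_t slot[RING_SIZE];
        unsigned head, tail;              /* producer writes head, consumer reads tail */
    };

    static int nn_put(struct nn_ring *r, uint32_t pkt_handle)
    {
        if (r->head - r->tail == RING_SIZE) return 0;  /* ring full: producer stalls */
        r->slot[r->head++ % RING_SIZE] = pkt_handle;
        return 1;
    }

    static int nn_get(struct nn_ring *r, uint32_t *pkt_handle)
    {
        if (r->head == r->tail) return 0;              /* ring empty */
        *pkt_handle = r->slot[r->tail++ % RING_SIZE];
        return 1;
    }

    int main(void)
    {
        struct nn_ring ring = { {0}, 0, 0 };
        for (uint32_t h = 1; h <= 4; h++)  /* "upstream" stage enqueues packet handles */
            nn_put(&ring, h);
        uint32_t h;
        while (nn_get(&ring, &h))          /* "downstream" stage dequeues and processes */
            printf("processing packet handle %u\n", (unsigned)h);
        return 0;
    }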

The IXP 2800 has a memory hierarchy consisting of: 1) 3 banks of high performance Rambus DRAM; 2) 4 banks of Quad Data Rate SRAM; 3) a small, shared on-chip scratchpad memory; 4) a small memory local to each ME along with a dedicated program store; and 5) various types of registers including 256 general-purpose registers on each ME. The DRAM is used for packet buffers, and the SRAM contains packet meta-data, ring buffers for inter-block communication, and large system tables. One of the SRAM channels also supports a TCAM (Ternary Content-Addressable Memory) which we use for IP route lookup and packet classification. The scratchpad memory is used for smaller ring buffers and tables.
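The TCAM's role is easiest to see in miniature: each entry pairs a value with a mask of "care" bits, entries are stored in priority order, and the first masked match wins, which is what makes a single device suitable for both route lookup and packet classification. The C sketch below imitates that behavior in software; the NPR uses an actual TCAM device, and these structures and names are illustrative.

    /* Conceptual sketch of TCAM matching: mask off "don't care" bits and
     * return the result of the first (highest-priority) matching entry. */
    #include <stdint.h>
    #include <stdio.h>

    struct tcam_entry {
        uint32_t value;     /* bits that must match after masking     */
        uint32_t mask;      /* 1 = care about this bit, 0 = wildcard  */
        int      result;    /* e.g. an output port or a filter action */
    };

    static int tcam_lookup(const struct tcam_entry *t, int n, uint32_t key)
    {
        for (int i = 0; i < n; i++)                   /* entries are priority ordered */
            if ((key & t[i].mask) == (t[i].value & t[i].mask))
                return t[i].result;                   /* first match wins */
        return -1;                                    /* no entry matched */
    }

    int main(void)
    {
        struct tcam_entry table[] = {
            { 0xC0A80100, 0xFFFFFF00, 2 },  /* 192.168.1.0/24 (more specific, first) */
            { 0xC0A80000, 0xFFFF0000, 1 },  /* 192.168.0.0/16                        */
        };
        printf("result: %d\n", tcam_lookup(table, 2, 0xC0A80105));  /* prints 2 */
        return 0;
    }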

There is also an (ARM-based) XScale core (not shown) that is used for overall system control and unusual event handling. The XScale runs Linux, and libraries provide applications with direct access to memory and the MEs.

The main packet path through the IXP 2800's MEs appears in the highlighted portion of the figure above. The main data flow proceeds in a pipelined fashion starting with the Receive block (Rx) and ending with the Transmit block (Tx).

Except for the Rx, PLC and Plugins blocks, each block is implemented using one ME. The block labeled Plugins consists of five MEs allowing users to load up to five different code blocks.

The photo below left shows the seven Radisys 7010 network processor blades housed in an ATCA chassis. Each blade contains two IXP 2800s and implements two 5-port NPRs.



The photo (right) shows one of the boards with its two IXP 2800 NPs, shared TCAM, and 10x1 Gigabit Ethernet card.


Supported by: National Science Foundation under grants CNS 0230826, CNS 05551651 and MRI 12-29553. Disclaimer: Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).