LANforge Virtual Appliance

We just uploaded a LANforge virtual appliance in OVA format. Please give it a try; we would appreciate feedback on how well it works for evaluating our product.

The guest is a Fedora 21 system running LANforge version 5.3.4.


LANforge with Netperf-Wrapper

LANforge-ICE can be set up as a controlled emulated network with impairments, both to demonstrate the effect of buffer sizes on network traffic and to model the types of networks seen in the real world. Here is an example using netperf-wrapper as the traffic source.

Below is the network under test, created with LANforge-ICE. The two physical interfaces eth2 and eth3 are connected together through an emulated network consisting of a T1 link to a series of three routers with OC3 links between them. Each T1 wanlink is configured with 40ms of round-trip latency and each OC3 wanlink with 20ms, so a ping from eth2 to eth3 on the traffic-source machine shows about 122ms of round-trip latency through the emulated network.
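The measured latency follows directly from the configured wanlink latencies along the path (T1, OC3, OC3, T1). A quick sanity check of the round-trip budget, our own arithmetic rather than anything from the post:

```shell
# Round-trip latency budget for the eth2 -> eth3 path:
# two T1 wanlinks at 40ms each and two OC3 wanlinks at 20ms each.
t1_ms=40
oc3_ms=20
rtt_ms=$(( t1_ms + oc3_ms + oc3_ms + t1_ms ))
echo "expected round-trip latency: ${rtt_ms} ms"
```

That gives 120ms; the remaining couple of milliseconds in the measured 122ms is presumably serialization and forwarding overhead.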

[Figure: Netsmith configuration for Resource: ice-si-dmz(1.2), Version: 5.2.13_026]

traceroute -n -i eth2
traceroute to (, 30 hops max, 60 byte packets
 1 41.100 ms 41.102 ms 41.089 ms
 2 61.094 ms 61.090 ms 61.081 ms
 3 81.117 ms 81.128 ms 81.118 ms
 4 122.190 ms 122.190 ms 122.181 ms

ping -I eth2
PING ( from eth2: 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=61 time=122 ms
64 bytes from icmp_req=2 ttl=61 time=122 ms
64 bytes from icmp_req=3 ttl=61 time=122 ms
64 bytes from icmp_req=4 ttl=61 time=122 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 122.358/122.426/122.495/0.252 ms

Netserver and Netperf-Wrapper are running on a separate machine that is physically connected to eth2 and eth3 on the emulation machine. Netserver is running with the following command:

netserver -L

Netperf-Wrapper is run several times with the following command:

netperf-wrapper -H --local-bind -t test-tcpbi-<backlogbuffer> tcp_bidirectional -f plot

Between each netperf-wrapper run, the backlog buffers on the T1 wanlinks were modified to demonstrate the effect of buffer size in the network. All interfaces in this example have the default tx queue length set to 1000 packets.
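The sequence of runs could be scripted along these lines. This is our own sketch, not from the original post: the buffer labels, the placeholder addresses SERVER_IP and LOCAL_IP, and the pause between runs are all assumptions, and the netperf-wrapper command is printed rather than executed so the sketch runs without the tool installed.

```shell
# Hypothetical sketch: one tcp_bidirectional run per backlog-buffer setting.
SERVER_IP=192.0.2.10    # placeholder: the machine running netserver
LOCAL_IP=192.0.2.20     # placeholder: local address to bind traffic to

for buf in 8KB 32KB AUTO; do
    echo "Set the T1 wanlink backlog buffers to ${buf} in LANforge, then continue."
    # Dry run: remove the leading 'echo' to actually start the test.
    echo netperf-wrapper -H "$SERVER_IP" --local-bind "$LOCAL_IP" \
        -t "test-tcpbi-${buf}" tcp_bidirectional -f plot
done
```

Each run is tagged with the buffer setting via `-t`, so the resulting json.gz files can be told apart afterward.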

The buffer sizes used were: small (8KB), large (32KB), and auto-sized, which resulted in a backlog buffer of about 80KB.
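For context, the bandwidth-delay product of this path works out to roughly 23KB. This is our own back-of-envelope figure, assuming the standard T1 rate of 1.544Mbit/s as the bottleneck and the ~122ms measured RTT:

```shell
# BDP = bottleneck rate * RTT
# Assumes a 1.544 Mbit/s T1 bottleneck and the ~122 ms RTT measured above.
bdp_bytes=$(( 1544000 * 122 / 1000 / 8 ))
echo "bandwidth-delay product: ${bdp_bytes} bytes"
```

By that estimate the 8KB buffer sits well below the path BDP, while 32KB and the ~80KB auto-sized buffer sit above it, which is what makes the three settings interesting to compare.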

Results are shown as combined box totals and combined totals for all three test runs. Links to json.gz files are at the bottom of the post.



Now the same three network conditions are exercised with the rrul test, using the following netperf-wrapper command:

netperf-wrapper -H --local-bind -t test-rrul-<backlogbuffer> rrul -f plot

Results with small 8KB buffers.


Results with large 32KB buffers.


Results with auto-sized buffers.


Results as combined box totals.


Links to json.gz files:

rrul-2014-09-22T113549.828810.test_rrul_AUTO.json.gz

See the LANforge documentation for more information on LANforge features, and the netperf-wrapper project page for more information on Netperf-Wrapper.
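Saved json.gz result files like the one above can be re-plotted later with netperf-wrapper. A hedged sketch: the input/plot flags and the `totals` plot name are assumptions based on netperf-wrapper's usual options, and the command is printed rather than executed so this runs without the tool installed.

```shell
# Dry run: remove the leading 'echo' on a machine with netperf-wrapper installed.
datafile=rrul-2014-09-22T113549.828810.test_rrul_AUTO.json.gz
echo netperf-wrapper -i "$datafile" -f plot -p totals
```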