LANforge 5.3.1 Released with WiFi Testing Emphasis

LANforge 5.3.1 has been a serious piece of work to put out. We were dogged by lots of trouble with the QCA firmware for the 802.11ac chipset. Ben put in countless hours combing through stack traces and kernel archives looking for fixes to DMA errors. Amazingly, he fixed so many that we can now keep Layer-3 connections alive indefinitely.

This was also an interesting development period because we got our first pair of Octobox anechoic chambers. These are RF-isolating boxes with RF-filtered power and network interfaces at their edges. This was the first time we saw ideal 802.11n throughput at 5 GHz. Like…theoretical ideal. Checked off the list. Ben picked his jaw up off the floor when he saw Isaac’s graphs and asked him to “do that again!”

Jed spent much time on 802.11 station roaming experiments. The trick to roaming effectively is to shuffle the roaming requests between radios, not to send sequential (stampeding) roaming requests from a single radio. This makes sense from a CSMA/CA point of view: a WiFi environment wants stations to actively listen for activity rather than constantly broadcast. Jed was able to get 30 WiFi roaming events a second between two LANforge CT523s running 164 virtual stations.

This led to a great LANforge 5.3.1 release. Lots of work. More planned. Stay tuned.

LANforge with Netperf-Wrapper

LANforge-ICE can be set up as a controlled emulated network with impairments to demonstrate the effect of buffer sizes on network traffic and to model the types of networks seen in the real world. Here is an example using netperf-wrapper as the traffic source.

Below is the network under test, created with LANforge-ICE. The two physical interfaces, eth2 and eth3, are connected through an emulated network: a T1 link at each end, leading to a series of three routers joined by OC3 links. Each T1 wanlink is configured with 40ms of round-trip latency and each OC3 wanlink with 20ms, so a ping from eth2 to eth3 on the traffic source machine shows about 122ms of round-trip latency through the emulated network (2 × 40ms + 2 × 20ms = 120ms of configured delay, plus a little router processing time).
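As a sanity check, the per-hop latencies that traceroute reports below line up with the configured wanlink delays. A quick sketch of the arithmetic (link order and per-link RTTs taken from the description above):

```python
# Emulated path from eth2 to eth3: T1 -> OC3 -> OC3 -> T1,
# with 40 ms RTT per T1 wanlink and 20 ms per OC3 wanlink.
links = [("T1", 40), ("OC3", 20), ("OC3", 20), ("T1", 40)]

total = 0
for name, rtt_ms in links:
    total += rtt_ms
    print(f"after {name:>3} hop: {total} ms")

# Configured total is 120 ms; the observed ~122 ms ping adds
# a couple of milliseconds of router processing overhead.
print(f"configured total: {total} ms")
```

The cumulative totals (40, 60, 80, 120 ms) match the four traceroute hops shown below, each a millisecond or two higher in practice.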

Netsmith configuration for Resource:  ice-si-dmz(1.2)  Version: 5.2.13_026

traceroute -n -i eth2
traceroute to (, 30 hops max, 60 byte packets
 1 41.100 ms 41.102 ms 41.089 ms
 2 61.094 ms 61.090 ms 61.081 ms
 3 81.117 ms 81.128 ms 81.118 ms
 4 122.190 ms 122.190 ms 122.181 ms

ping -I eth2
PING ( from eth2: 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=61 time=122 ms
64 bytes from icmp_req=2 ttl=61 time=122 ms
64 bytes from icmp_req=3 ttl=61 time=122 ms
64 bytes from icmp_req=4 ttl=61 time=122 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 122.358/122.426/122.495/0.252 ms

Netserver and Netperf-Wrapper are running on a separate machine that is physically connected to eth2 and eth3 on the emulation machine. Netserver is running with the following command:

netserver -L

Netperf-Wrapper is run several times with the following command:

netperf-wrapper -H --local-bind -t test-tcpbi-<backlogbuffer> tcp_bidirectional -f plot

Between each netperf-wrapper run, the backlog buffers on the T1 wanlinks were modified to demonstrate the effect of buffer size in the network. All interfaces in this example have the default tx queue length set to 1000 packets.

The buffer sizes used were: small (8KB), large (32KB), and auto-sized, which resulted in a backlog buffer of about 80KB.
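For context on why 8KB counts as "small" here, it helps to compare the buffer sizes against the bandwidth-delay product (BDP) of the bottleneck path. This is a rough illustrative calculation, not part of the original test setup; it assumes the standard 1.544 Mbps T1 line rate:

```python
# Rough bandwidth-delay product of the bottleneck T1 path:
# a T1 carries 1.544 Mbps, and the measured end-to-end RTT is ~122 ms.
t1_bps = 1.544e6          # T1 line rate in bits/second (assumed standard rate)
rtt_s = 0.122             # measured round-trip time in seconds

bdp_bytes = t1_bps / 8 * rtt_s
print(f"BDP = {bdp_bytes / 1024:.1f} KB")  # about 23 KB

# The 8KB buffer is well under one BDP, so it drops packets before
# the pipe is full; the 32KB and ~80KB buffers exceed the BDP and
# can queue more than a full RTT of data.
```

This is why the small-buffer runs show loss-limited throughput while the larger buffers trade throughput for added queueing delay.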

Results are shown as combined box totals and combined totals for all three test runs. Links to json.gz files are at the bottom of the post.



Now the same three network conditions with the rrul test with the following netperf-wrapper command:

netperf-wrapper -H --local-bind -t test-rrul-<backlogbuffer> rrul -f plot

Results with small 8KB buffers.


Results with large 32KB buffers.


Results with auto-sized buffers.


Results as combined box totals.


Links to json.gz files:






rrul-2014-09-22T113549.828810.test_rrul_AUTO.json.gz

See the LANforge product pages for more information on LANforge features, and the Netperf-Wrapper project page for more information on Netperf-Wrapper.

Emulate 64 /ac WiFi stations with LANforge

CT525 2U chassis, front

Ben has done impressive work on the Atheros 10K driver. He has really re-organized a lot of code to save every possible bit and byte. We can now emulate 64 wifi stations on the ath10k hardware. If you wanted to test your /ac roll-out, you could do that with 384 virtual stations with a 2U chassis. Or, if you wanted to pack some /n stations in there, you could emulate 792 stations in a similar form factor. (And of course this is all creeping up on the 1200 station pure /n product we presently have.)