Adjacent Wifi Channel Throughput Testing

What effect do wifi clients on adjacent channels have on each other when performing over-the-air wifi throughput testing?

We decided to investigate this question to show why the throughput degradation happens and to determine whether the physical proximity of the wifi NICs could also be to blame.

After running the tests described below, we found that adjacent-channel interference occurs between Access Points operating on adjacent channels, and that additional degradation occurs when LANforge uses physically adjacent wifi NICs. LANforge test results show better on-air throughput when the PCI-E wifi NICs are split between two systems rather than sitting side by side in a single platform.

The test setup consists of two LANforge systems, a CT523c and a CT523b, and two identical test APs. The CT523c, labeled LF1, has two 4×4 wifi NICs with clients wlan0 and wlan2. The CT523b, labeled LF2, also has two 4×4 wifi NICs, but only one client, wlan2, is in use. The test APs are each set up with a single unique SSID using open authentication and are physically placed about 1m from each LANforge system. Each AP operates on its specified channel at 80MHz channel width.

Each test iteration consists of a 30-second download test followed by a 30-second upload test, first for each individual wifi client and then for combinations of clients.

Test 1: AP-A on ch100 and AP-B on ch116 – here is what it looked like on inSSIDer:
ch100-ch116-ht80

The baseline results for each individual client show that the test setup has ideal conditions, with all clients reporting similar results:
adjacent-channel-tput1

The combination tests show that the clients’ throughput results suffer from the adjacent channel interference:
adjacent-channel-tput1b

If we look at a spectrum analyzer while testing, we can see that the interference comes from the spectral overlap near 5.57GHz, where the top edge of AP-A's 80MHz block (channels 100-112) meets the bottom edge of AP-B's 80MHz block (channels 116-128).
ch100-ch116-spec
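
As a quick sanity check of where these two 80MHz blocks meet (using the standard 5GHz channel numbering, center frequency = 5000 + 5*channel MHz; the loop below is purely illustrative):

for c in 106 122; do  # 80MHz block centers for primary channels 100 and 116
  echo "ch$c: center $((5000 + 5*c)) MHz, edges $((5000 + 5*c - 40))-$((5000 + 5*c + 40)) MHz"
done
# ch106: center 5530 MHz, edges 5490-5570 MHz
# ch122: center 5610 MHz, edges 5570-5650 MHz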

The next test is a repeat of the first, but on channels 36 and 52, which are also adjacent to each other.

Test 2: AP-A on ch36 and AP-B on ch52
ch36-ch52-ht80

Test 2 results:
adjacent-channel-tput2

As we move AP-B to channels farther away from AP-A, the combination tests show that the clients experience less interference and therefore achieve better throughput:

Test 3: AP-A on ch36 and AP-B on ch100
ch36-ch100-ht80

Test 3 results:
adjacent-channel-tput3

Test 4: AP-A on ch36 and AP-B on ch149
ch36-ch149-ht80

Test 4 results:
adjacent-channel-tput4

Another important conclusion is that throughput is better with a single active NIC in each LANforge system than with both NICs active in a single LANforge system. We can show the difference by comparing the total on-air throughput.

In Test 1, channels 100 and 116:
The all-in-one on-air download throughput is 371Mbps + 674Mbps = 1045Mbps.
The separated on-air download throughput is 467Mbps + 963Mbps = 1430Mbps.
1430/1045 = 1.37 (separating the NICs shows a 37% increase).

In Test 2, channels 36 and 52:
The all-in-one on-air download throughput is 351Mbps + 747Mbps = 1098Mbps.
The separated on-air download throughput is 487Mbps + 974Mbps = 1461Mbps.
1461/1098 = 1.33 (separating the NICs shows a 33% increase).

In Test 3, channels 36 and 100:
The all-in-one on-air download throughput is 948Mbps + 966Mbps = 1914Mbps.
The separated on-air download throughput is 982Mbps + 974Mbps = 1956Mbps.
1956/1914 = 1.02 (separating the NICs shows a 2% increase).

In Test 4, channels 36 and 149:
The all-in-one on-air download throughput is 982Mbps + 980Mbps = 1962Mbps.
The separated on-air download throughput is 981Mbps + 967Mbps = 1948Mbps.
1948/1962 = 0.993 (separating the NICs shows a 0.7% decrease).
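
The same arithmetic can be reproduced with bc (shown here with the Test 1 numbers):

echo "(467 + 963) / (371 + 674)" | bc -l
# => 1.3684..., roughly a 37% increase when the NICs are separated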

We hope that this post helps remind you that wifi is subject to RF interference from a variety of sources, including active clients on a neighboring channel as well as an active wifi NIC in the next PCI-E slot.

If you would like more info on using LANforge for wireless testing, please contact us at sales@candelatech.com or call +1-360-380-1618.

Measuring Jitter

While verifying our latest LANforge software release, I wanted an independent way to measure the jitter imposed on a wanlink. There are many ways to accomplish this, but here is a summary of my procedure.

  • Synchronize clocks
  • Set up a wanlink with delay and jitter
  • Start captures on sender and receiver
  • Start UDP traffic in one direction
  • Run the captures through a script to estimate the jitter
  • Compare to LANforge jitter measurement

Several online references on calculating jitter from packet captures were helpful in creating the bash script below.

The network topology I used was two computers each with a network interface connected to the other through a switch. Each computer is running the LANforge software, so I was able to create a wanlink on one interface so that the latency and jitter impairments would affect packets leaving that interface.

netsmith-wanlink

To synchronize the clocks, I used ntpd on one system and ntpdate on the other.
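
A minimal sketch of that step (the IP address below is just a placeholder for the system running ntpd):

# on the second system, step its clock to match the system running ntpd
ntpdate 192.168.1.10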

The wanlink was set to 1.544Mbps with 10ms of delay and 100ms of jitter at a frequency of 1%. As I was capturing traffic, I varied the jitter frequency from 1% to 10% to 50% then 0%. See the LANforge User Guide for a detailed explanation of how jitter is set:

https://www.candelatech.com/lfgui_ug.php#wl_mod

wanlink

I initiated the wireshark captures from the LANforge Port Manager, but could have also used tshark to do this.
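
For reference, the tshark equivalent would look something like this (eth1 is the sending interface described below; the receiving interface name depends on the setup):

tshark -i eth1 -w sender.pcap &
tshark -i <receiving-interface> -w receiver.pcap &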

The UDP traffic was a one-way 56Kbps connection from eth1 on one system to the virtual interface behind the wanlink on the other system. The connection ran for about 180 seconds while I varied the jitter frequency on the wanlink.

I stopped the captures, filtered out all other traffic such as ARPs and ICMPs so that only LANforge protocol traffic remained, and then ran the captures through the getjitter.bash script, which also generated this image:

1-10-50-0-jitter
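
The filtering step above can be done with a tshark display filter, for example (filenames are placeholders):

tshark -r capture.pcap -Y "not arp and not icmp" -w filtered.pcap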

The script's jitter estimate matched what the LANforge Dynamic Report tool displayed for the connection:

1-10-50-0-jitter-dyn-rpt

The bash script below is used as follows:

./getjitter.bash sender.pcap receiver.pcap

It uses tshark, bc, gnuplot and qiv. It could probably be cleaned up, but the jitter estimate appears to be accurate.

#!/bin/bash
# Usage: ./getjitter.bash sender.pcap receiver.pcap

# Extract per-packet timestamps from the sender ($1) and receiver ($2) captures,
# plus the relative capture time and UDP payload length from the sender capture.
tshark -r "$1" -T fields -e frame.time_epoch > frametimeS
tshark -r "$2" -T fields -e frame.time_epoch > frametimeR
tshark -r "$1" |awk '{print $2}' > ticks
tshark -r "$1" -T fields -e udp.length > rates

declare -a S=(`cat frametimeS`)
declare -a R=(`cat frametimeR`)
declare -a T=(`cat ticks`)
declare -a Rate=(`cat rates`)
jitter=0
i=0
rm -f myjitter  # start with a clean data file

j=${#S[@]}
let j=$j-2

for ((k=0; k<=$j; k++)); do
 # D = |(receive spacing) - (send spacing)| for this consecutive packet pair
 D=`echo "(${R[i+1]} - ${R[$i]}) - (${S[i+1]} - ${S[$i]})" |bc |tr -d -`
 # instantaneous send rate in bits per second for this packet pair
 rate=`echo "((${Rate[$i]}*8) / (${S[i+1]} - ${S[$i]}))" |bc |tr -d -`
 # smoothed estimate: J = J + (|D| - J)/16, the RFC 3550 interarrival jitter formula
 jitter=`echo "$jitter + ($D - $jitter)/16" |bc -l`
 echo "${T[$i]} $jitter $rate" >> myjitter
 i=$((i+1))
done

echo 'set style data points' > jitter.gp
echo 'set nogrid' >> jitter.gp

echo 'set style line 1 lt 1 lw 2' >> jitter.gp
echo 'set style line 2 lt 2 lw 2' >> jitter.gp
echo 'set style line 3 lt 3 lw 5' >> jitter.gp
echo 'set style line 4 lt 3 lw 1' >> jitter.gp
echo 'set style line 5 lt 3 lw 2' >> jitter.gp
echo 'set style line 6 lt 3 lw 1' >> jitter.gp
echo 'set style line 7 lt 17 lw 2' >> jitter.gp
echo 'set style line 8 lt 17 lw 4' >> jitter.gp

echo 'set xlabel "Time (sec)"' >> jitter.gp
echo 'set ylabel "Jitter (sec)"' >> jitter.gp

echo 'plot "myjitter" using 1:($2/1) title "jitter" with impulses ls 6' >> jitter.gp

echo 'set term png' >> jitter.gp
echo 'set output "jitter.png"' >> jitter.gp
echo 'replot' >> jitter.gp

gnuplot jitter.gp >/dev/null 2>&1
qiv jitter.png &

The next steps for improving this jitter measurement would be to incorporate a sequence number check to look for drops and a latency measurement to get a more detailed evaluation of the impairments set up on the wanlink.

For the latency measurement, I came up with this script used as follows:

./getlatency.bash sender.pcap receiver.pcap

It makes use of the LANforge protocol as decoded by wireshark to acquire and verify sequence numbers.

#!/bin/bash
# Usage: ./getlatency.bash sender.pcap receiver.pcap

# Extract LANforge sequence numbers and timestamps from the sender ($1) and
# receiver ($2) captures, plus the relative capture time from the sender capture.
tshark -r "$1" -T fields -e lanforge.seqno > seqno1
tshark -r "$2" -T fields -e lanforge.seqno > seqno2
tshark -r "$1" -T fields -e frame.time_epoch > frame1
tshark -r "$2" -T fields -e frame.time_epoch > frame2
tshark -r "$1" |awk '{print $2}' > ticks

declare -a array1=(`cat seqno1`)
declare -a array2=(`cat seqno2`)
declare -a frame1=(`cat frame1`)
declare -a frame2=(`cat frame2`)
declare -a T=(`cat ticks`)
latency=0
rm -f mylatency  # start with a clean data file

for key in "${!array1[@]}";
do
 # Verify that the packet at this position in the receiver capture carries the
 # expected sequence number (this assumes no drops or reordering); if so, replace
 # the sequence numbers with the corresponding send and receive timestamps.
 if [ "$key" == "${array2[$key]}" ]; then
  array1[$key]=${frame1[$key]}
  array2[$key]=${frame2[$key]}
 else
  unset array1[$key]
  unset array2[$key]
  continue  # no matching packet, skip the latency calculation
 fi

 # one-way latency in milliseconds: receive time minus send time
 latency=`echo "(${array2[$key]} - ${array1[$key]})*1000" |bc -l`
 # crude console filter: print samples that land roughly in the 111-199ms range,
 # i.e. packets that were hit by the 100ms jitter
 if [[ $latency == 1[1-9][1-9]* ]]; then
  echo "${T[$key]} $latency"
 fi
 echo "${T[$key]} $latency" >> mylatency
done

echo 'set style data points' > latency.gp
echo 'set nogrid' >> latency.gp

echo 'set style line 1 lt 1 lw 2' >> latency.gp
echo 'set style line 2 lt 2 lw 2' >> latency.gp
echo 'set style line 3 lt 3 lw 5' >> latency.gp
echo 'set style line 4 lt 3 lw 1' >> latency.gp
echo 'set style line 5 lt 3 lw 2' >> latency.gp
echo 'set style line 6 lt 3 lw 1' >> latency.gp
echo 'set style line 7 lt 17 lw 2' >> latency.gp
echo 'set style line 8 lt 17 lw 4' >> latency.gp

echo 'set xlabel "Time (sec)"' >> latency.gp
echo 'set ylabel "Latency (ms)"' >> latency.gp

echo 'plot "mylatency" using 1:($2/1) title "latency" with impulses ls 6' >> latency.gp

echo 'set term png' >> latency.gp
echo 'set output "latency.png"' >> latency.gp
echo 'replot' >> latency.gp

gnuplot latency.gp >/dev/null 2>&1
qiv latency.png &

This produced the following image:

1-10-50-0-latency.png

This shows that the capture files confirm a baseline one-way delay of about 17ms: roughly 7ms of serialization delay on the 1.544Mbps wanlink plus the 10ms of additional delay set up on the wanlink. As I increased the jitter frequency, more packets experienced the 100ms jitter.
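
As a rough check on the serialization figure (assuming approximately 1400-byte frames, which is an assumption rather than a measured value):

echo "1400 * 8 * 1000 / 1544000" | bc -l
# => about 7.25 ms to clock one frame onto the 1.544Mbps wanlink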

LANforge with Netperf-Wrapper

LANforge-ICE can be set up as a controlled emulated network with impairments to demonstrate the effect of buffer sizes on network traffic and to model the types of networks seen in the real world. Here is an example using netperf-wrapper as the traffic source.

Below is the network under test, created with LANforge-ICE. The two physical interfaces eth2 and eth3 are connected through an emulated network: each interface connects over a T1 link to a chain of three routers joined by OC3 links. Each T1 wanlink is configured with 40ms of round-trip latency and each OC3 wanlink with 20ms, so the path adds 40 + 20 + 20 + 40 = 120ms, and a ping from eth2 to eth3 on the traffic source machine shows about 122ms round-trip latency through the emulated network once serialization delay is included.

Netsmith configuration for Resource:  ice-si-dmz(1.2)  Version: 5.2.13_026

traceroute -n -i eth2 2.2.2.103
traceroute to 2.2.2.103 (2.2.2.103), 30 hops max, 60 byte packets
 1 1.1.1.1 41.100 ms 41.102 ms 41.089 ms
 2 10.0.0.2 61.094 ms 61.090 ms 61.081 ms
 3 10.0.0.6 81.117 ms 81.128 ms 81.118 ms
 4 2.2.2.103 122.190 ms 122.190 ms 122.181 ms

ping -I eth2 2.2.2.103
PING 2.2.2.103 (2.2.2.103) from 1.1.1.102 eth2: 56(84) bytes of data.
64 bytes from 2.2.2.103: icmp_req=1 ttl=61 time=122 ms
64 bytes from 2.2.2.103: icmp_req=2 ttl=61 time=122 ms
64 bytes from 2.2.2.103: icmp_req=3 ttl=61 time=122 ms
64 bytes from 2.2.2.103: icmp_req=4 ttl=61 time=122 ms
^C
--- 2.2.2.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 122.358/122.426/122.495/0.252 ms

Netserver and Netperf-Wrapper are running on a separate machine that is physically connected to eth2 and eth3 on the emulation machine. Netserver is running with the following command:

netserver -L 2.2.2.103

Netperf-Wrapper is run several times with the following command:

netperf-wrapper -H 2.2.2.103 --local-bind 1.1.1.102 -t test-tcpbi-<backlogbuffer> tcp_bidirectional -f plot
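
For example, the 8KB run would look like this:

netperf-wrapper -H 2.2.2.103 --local-bind 1.1.1.102 -t test-tcpbi-8KB tcp_bidirectional -f plot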

Between each netperf-wrapper run, the backlog buffers on the T1 wanlinks were modified to demonstrate the effect of buffer size in the network. All interfaces in this example have the default tx queue length set to 1000 packets.
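
The tx queue length can be confirmed on the traffic source machine with, for example:

ip link show eth2
# the "qlen 1000" field in the output is the transmit queue length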

The buffer sizes used were small (8KB), large (32KB), and auto-sized, which resulted in about an 80KB backlog buffer.

Results are shown as combined box totals and combined totals for all three test runs. Links to json.gz files are at the bottom of the post.

tcpbi-combined-box

tcpbi-combined-totals

Now the same three network conditions are tested with the rrul test, using the following netperf-wrapper command:

netperf-wrapper -H 2.2.2.103 --local-bind 1.1.1.102 -t test-rrul-<backlogbuffer> rrul -f plot

Results with small 8KB buffers.

rrul-all-8KB

Results with large 32KB buffers.

rrul-all-32KB

Results with auto-sized buffers.

rrul-all-AUTO

Results as combined box totals.

rrul-combined-box

Links to json.gz files:

tcp_bidirectional-2014-09-22T111311.925620.test_tcpbi_8KB.json.gz

tcp_bidirectional-2014-09-22T111536.825484.test_tcpbi_32KB.json.gz

tcp_bidirectional-2014-09-22T111757.856991.test_tcpbi_AUTO.json.gz

rrul-2014-09-22T113207.197838.test_rrul_8KB.json.gz

rrul-2014-09-22T113358.991878.test_rrul_32KB.json.gz

rrul-2014-09-22T113549.828810.test_rrul_AUTO.json.gz

See www.candelatech.com for more information on LANforge features.

See github.com/tohojo/netperf-wrapper for more information on Netperf-Wrapper.