LANforge Virtual Appliance

We just uploaded a LANforge virtual appliance in OVA format. Please give it a try; we would like feedback on how well it works for trying out our product.

The appliance is a Fedora 21 guest running LANforge version 5.3.4.


LANforge with Netperf-Wrapper

LANforge-ICE can be set up as a controlled emulated network with impairments, to demonstrate the effect of buffer sizes on network traffic and to model the types of networks seen in the real world. Here is an example using netperf-wrapper as the traffic source.

Below is the network under test, created with LANforge-ICE. The two physical interfaces, eth2 and eth3, are connected through an emulated network: a T1 link at each end leading to a series of three routers with OC3 links between them. Each T1 wanlink is configured with 40ms of round-trip latency and each OC3 wanlink with 20ms, so a ping from eth2 to eth3 on the traffic source machine shows about 122ms of round-trip latency through the emulated network (2 × 40ms + 2 × 20ms = 120ms, plus per-hop processing overhead).

[Screenshot: Netsmith configuration for Resource ice-si-dmz(1.2), Version 5.2.13_026]
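For readers without LANforge, roughly equivalent per-link delay can be sketched with the standard Linux netem qdisc. This is not how the network above was built (the wanlinks here were configured through Netsmith), and eth0/eth1 are placeholder interface names:

# netem applies one-way egress delay, so a 40ms round-trip hop
# needs about 20ms in each direction:
tc qdisc add dev eth0 root netem delay 20ms
tc qdisc add dev eth1 root netem delay 20ms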

traceroute -n -i eth2 2.2.2.103
traceroute to 2.2.2.103 (2.2.2.103), 30 hops max, 60 byte packets
 1 1.1.1.1 41.100 ms 41.102 ms 41.089 ms
 2 10.0.0.2 61.094 ms 61.090 ms 61.081 ms
 3 10.0.0.6 81.117 ms 81.128 ms 81.118 ms
 4 2.2.2.103 122.190 ms 122.190 ms 122.181 ms

ping -I eth2 2.2.2.103
PING 2.2.2.103 (2.2.2.103) from 1.1.1.102 eth2: 56(84) bytes of data.
64 bytes from 2.2.2.103: icmp_req=1 ttl=61 time=122 ms
64 bytes from 2.2.2.103: icmp_req=2 ttl=61 time=122 ms
64 bytes from 2.2.2.103: icmp_req=3 ttl=61 time=122 ms
64 bytes from 2.2.2.103: icmp_req=4 ttl=61 time=122 ms
^C
--- 2.2.2.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 122.358/122.426/122.495/0.252 ms

Netserver and Netperf-Wrapper are running on a separate machine that is physically connected to eth2 and eth3 on the emulation machine. Netserver is running with the following command:

netserver -L 2.2.2.103
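A quick sanity check (not from the original post) is to confirm that netserver is listening on its default control port, 12865, at the bound address:

# list listening TCP sockets and look for the netperf control port
ss -tln | grep 12865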

Netperf-Wrapper is run several times with the following command:

netperf-wrapper -H 2.2.2.103 --local-bind 1.1.1.102 -t test-tcpbi-<backlogbuffer> tcp_bidirectional -f plot
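The three runs can be scripted as a simple loop; the labels match the buffer sizes used in this post, and the pause is where the T1 wanlink backlog buffers were changed in LANforge between runs (a sketch, not the exact commands used):

for buf in 8KB 32KB AUTO; do
    echo "Set the T1 wanlink backlog buffers to $buf in LANforge, then press Enter."
    read -r
    netperf-wrapper -H 2.2.2.103 --local-bind 1.1.1.102 \
        -t test-tcpbi-$buf tcp_bidirectional -f plot
done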

Between each netperf-wrapper run, the backlog buffers on the T1 wanlinks were modified to demonstrate the effect of buffer size in the network. All interfaces in this example have the default tx queue length set to 1000 packets.
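The tx queue length can be checked and set with standard iproute2 commands, shown here for reference:

ip link show dev eth2                  # the qlen field shows the current tx queue length
ip link set dev eth2 txqueuelen 1000   # restore the default of 1000 packets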

The buffer sizes used were: small (8KB), large (32KB), and auto-sized, which resulted in a backlog buffer of about 80KB.

Results are shown as combined box totals and combined totals for all three test runs. Links to json.gz files are at the bottom of the post.

[Plot: tcpbi-combined-box]

[Plot: tcpbi-combined-totals]

Now the same three network conditions are run with the rrul test, using the following netperf-wrapper command:

netperf-wrapper -H 2.2.2.103 --local-bind 1.1.1.102 -t test-rrul-<backlogbuffer> rrul -f plot
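netperf-wrapper ships with a number of other tests besides tcp_bidirectional and rrul; they can be listed with the following (check --help if the option differs in your version):

netperf-wrapper --list-tests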

Results with small 8KB buffers.

[Plot: rrul-all-8KB]

Results with large 32KB buffers.

[Plot: rrul-all-32KB]

Results with auto-sized buffers.

[Plot: rrul-all-AUTO]

Results as combined box totals.

[Plot: rrul-combined-box]

Links to json.gz files:

tcp_bidirectional-2014-09-22T111311.925620.test_tcpbi_8KB.json.gz

tcp_bidirectional-2014-09-22T111536.825484.test_tcpbi_32KB.json.gz

tcp_bidirectional-2014-09-22T111757.856991.test_tcpbi_AUTO.json.gz

rrul-2014-09-22T113207.197838.test_rrul_8KB.json.gz

rrul-2014-09-22T113358.991878.test_rrul_32KB.json.gz

rrul-2014-09-22T113549.828810.test_rrul_AUTO.json.gz
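The saved json.gz files can be re-plotted later without re-running the tests, using netperf-wrapper's -i option. The plot name below is only an example; --list-plots shows the plots available for a given test:

netperf-wrapper -i rrul-2014-09-22T113549.828810.test_rrul_AUTO.json.gz -p totals -f plot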

Visit www.candelatech.com for more information on LANforge features.

Visit github.com/tohojo/netperf-wrapper for more information on Netperf-Wrapper.