How Fast is Your Gigabit?

Started by Bloody Jack Kidd, February 12, 2010, 12:24:20 PM

Bloody Jack Kidd

We have a tendency to monitor almost everything around here, and I also tend to benchmark and torture test when time permits... and one of my favourite test subjects is the network fabric.  In the good old days of FastEthernet, I came to the conclusion (or perhaps agreement) that real-world throughput tops out around 80Mbps, regardless of the speed of drives, CPUs, colour of cabling, etc. - 80Mbps is pretty much the top end due to protocol overhead and such.

With the move to Gigabit, I admit, I was expecting some astounding numbers, surely not 1000Mbps, or even 800Mbps, but something big.  I don't see that though.  I rarely see much more than 100Mbps.

As it would happen... there are many things that have to fall into place to get the most out of your Gigabit network.


  • jumbo frames need to be supported and enabled on all the NICs and switches with an agreeable MTU size, something above 1500
  • switches need good backplanes - bargain switches usually don't have the guts
  • cabling needs to be good quality and punched down properly; CAT6 isn't absolutely required, but it doesn't hurt
  • offload engines should be enabled if possible, since moving bits has become quite CPU intensive (rough commands for the MTU and offload settings are sketched after this list)
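For what it's worth, the host-side settings look roughly like this - a sketch only, the interface names (em0 / eth0) are placeholders for whatever your NICs are actually called, and the driver has to support the features:

# FreeBSD: bump the MTU for jumbo frames
ifconfig em0 mtu 9000

# Linux: same idea, plus segmentation offload if the NIC supports it
ip link set dev eth0 mtu 9000
ethtool -K eth0 tso on gso on

The switches in the path need jumbo support enabled as well, and everything has to agree on the size, otherwise oversized frames tend to just get dropped.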
The problem I find is that the bigger networks (like ours) have so much legacy equipment still attached that making all the gigabit tweaks might leave certain hosts orphaned from the network.

Anyway - if any of you have done any real world tests, let us know what kind of throughput you see.
Sysadmin - Parallel42

Hans Manhave

How would one go about taking measurements?  Is there an app for that?  In the Windows environment, if possible.
Fantasy is more important than knowledge, because knowledge has its boundaries - Albert Einstein

Bloody Jack Kidd

Mostly for outright network benchmarking I have used netperf and iperf; both are similar, though I don't know offhand whether they've been ported to Windows.
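The basic pattern is just a server on one end and a client on the other - stock iperf 2 usage, the host name is made up:

receiver# iperf -s
sender# iperf -c receiver -t 30

-s listens, -c points the client at the server, and -t sets the test length in seconds (the default is 10).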

I'd really like to get my hands on something for outright Windows CIFS testing, because I suspect a network I/O issue at one of our locations and it's driving me nuts.
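A crude stopgap is just timing a big copy across the share - robocopy prints a speed figure in its job summary (the paths and share name below are made up):

C:\utils>robocopy C:\testdata \\fileserver\testshare bigfile.bin

Not exactly a benchmark, but it at least exercises CIFS rather than raw TCP.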
Sysadmin - Parallel42

admin

between a couple of hosts at parallel42:

sauropod# iperf -c pterosaur
------------------------------------------------------------
Client connecting to pterosaur, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.11 port 51083 connected with 192.168.1.250 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    108 MBytes  90.9 Mbits/sec
sauropod# iperf -c pterosaur
------------------------------------------------------------
Client connecting to pterosaur, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.11 port 64849 connected with 192.168.1.250 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    112 MBytes  94.3 Mbits/sec


Parallel42 is currently using FastEthernet switching despite the hosts mostly being GigE.
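A quick back-of-the-envelope on why those runs land where they do: with a standard 1500-byte MTU, each full frame carries 1460 bytes of TCP payload but occupies 1538 bytes on the wire (Ethernet header, FCS, preamble and inter-frame gap), so the best case over FastEthernet works out to about:

$ echo "scale=3; 1460 * 100 / 1538" | bc
94.928

...call it ~94.9 Mbits/sec, which is right where the runs above top out.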
The Management

...you've got to ask yourself one question: "Do I feel lucky?" Well, do ya, punk?

Bloody Jack Kidd

Now with iperf between a Windows box (acting as server) and a BSD box (client), I'm seeing:


C:\utils>iperf -s --mss 1460
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[768] local 10.39.0.122 port 5001 connected with 10.39.0.111 port 63033
[ ID] Interval       Transfer     Bandwidth
[768]  0.0-30.0 sec   700 MBytes   196 Mbits/sec
[736] local 10.39.0.122 port 5001 connected with 10.39.0.111 port 62251
[ ID] Interval       Transfer     Bandwidth
[736]  0.0-30.0 sec   739 MBytes   207 Mbits/sec
[748] local 10.39.0.122 port 5001 connected with 10.39.0.111 port 64364
[ ID] Interval       Transfer     Bandwidth
[748]  0.0-30.0 sec   688 MBytes   192 Mbits/sec
[768] local 10.39.0.122 port 5001 connected with 10.39.0.111 port 62240
[ ID] Interval       Transfer     Bandwidth
[768]  0.0-30.0 sec   717 MBytes   201 Mbits/sec


which according to Task Manager is ~21-23% of network capacity... now the interesting part is that I never see numbers even approaching this when the server is sending data to clients; sometimes benchmarks can be misleading.

This is full end-to-end Gigabit and high-end servers.
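If you want to check the reverse direction without shuffling which box runs the server, stock iperf 2 can drive it from the client end - just a sketch, the host name is made up:

# run each direction one after the other
iperf -c fileserver -r -t 30

# or both directions at once
iperf -c fileserver -d -t 30

That at least makes it easy to compare the two directions with the same tool.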
Sysadmin - Parallel42

Bloody Jack Kidd

Did some more benchmarks over the weekend; adjusted iperf to use a 64KB TCP window size and saw a significant increase in throughput, more than doubling the previous scores.  But as with most benchmarks, it's synthetic and not real world.  Nevertheless, it was nice to see 500Mbps+ through the link.
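For reference, the window tweak is just the -w flag on both ends (iperf 2 again; the size and host name are only examples):

server# iperf -s -w 64K
client# iperf -c server -w 64K -t 30

iperf reports the TCP window size it actually got at the top of its output, so it's easy to confirm the OS honoured the request.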

The testing also hammered the daylights out of the vCPU on the BSD VM... it's not easy pushing that many packets.
Sysadmin - Parallel42