Current-Users archive


Re: Why is my gigabit ethernet so slow?



    Date:        Tue, 26 Jan 2010 22:20:54 +0200
    From:        Martti Kuparinen <martti.kuparinen%iki.fi@localhost>
    Message-ID:  <4B5F4EA6.7080007%iki.fi@localhost>

  | Data    Type    #1      #2      #3      (Mbit/s)
  | 
  |   2 GB   TCP     574.12  570.93  TBD
  |   2 GB   UDP     869.14  867.37  867.34

Ignoring the TSO problems (which could be almost anything), that result,
and the similar one for the longer test, tell you immediately that the
"problem" is just the TCP window size.

TCP maximum throughput is set by a combination of the window size and the
RTT: making the window bigger increases throughput, as does making the
RTT smaller.  Since altering the RTT significantly is generally hard (you
could try much shorter cables...) the window size is the one that you can
generally alter to get better performance.

The basic rule for TCP is that the maximum throughput is 1 window of
data for each RTT -- there's no way to go faster than that, regardless
of the link speed (though obviously the link speed sets an upper bound on
the available throughput).

So, if you have a 500us RTT (0.5ms) you get to do 2000 RTTs / second.
If the window size is 32KB, then the max throughput from TCP is
32*1024*8 * 2000  (32KB converted to bits, 2000 times a second).
That's 524288000 (524 Mbps).   Make the window 64KB and you could get about
1.05Gbps, but clearly not on a 1Gbps link...   (Those are data rates; the
bps on the wire will be higher once you add in TCP, IP and link layer
headers, which all consume bandwidth.)   From your results, I'd guess that
ttcp is using a 32KB window by default, and that your RTT is probably
about 440 us (plus or minus a little depending upon whether you have
hardware checksums enabled or not - it is probably about 470us in the
all software case).
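
In case the arithmetic is easier to see as code, here is the same
calculation as a quick Python sketch (the window and RTT values are the
guesses from above, not measurements):

    # TCP's ceiling: one full window of data delivered per round trip.
    def tcp_max_bps(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds

    # 32KB window, 500us RTT (2000 round trips / second)
    print(tcp_max_bps(32 * 1024, 500e-6))   # 524288000.0 -> ~524 Mbps
    # Same RTT with a 64KB window: ~1.05 Gbps, more than a 1Gbps link
    # can actually carry.
    print(tcp_max_bps(64 * 1024, 500e-6))   # 1048576000.0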

  | Now, I wonder what kind of speed would I get with 10 Gbps cards... 

If you change nothing else, UDP will go faster (it has no built-in
flow control), and TCP will be the same as above, more or less - a faster
link will mean a minor reduction in the RTT, but the RTT at this kind of
data rate is almost all speed-of-light delay and processing at the other
end (turnaround time).  Transmission delays are a negligible factor: a
1500 byte packet at 1Gbps takes (if I calculated it correctly) about 12us
to transmit, hence about 25us of the total RTT is transmission delay
(packet into the switch and out again to the server - we can ignore the
transmission delay on the ACK packet coming back, that's in the noise).
Change to 10Gbps and that 25us becomes 2.5us - a saving of about 22us,
which compared with the 400-500us overall RTT is barely noticeable.

If you're using an expensive switch that does early forwarding (that is,
it starts sending the incoming packet as soon as it has received the
destination ethernet address), then the transmission delay would be
just half that (it occurs just once per packet, rather than twice),
and so the saving from going to a faster link speed is also halved.
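
The delay numbers work out like this (a quick sketch; it assumes the
packet is serialized twice through a store-and-forward switch, and only
once through an early-forwarding one):

    # Time to clock one packet onto the wire at a given link speed.
    def tx_delay_us(packet_bytes, link_bps):
        return packet_bytes * 8 / link_bps * 1e6

    d1g  = tx_delay_us(1500, 1e9)     # 12.0 us at 1Gbps
    d10g = tx_delay_us(1500, 10e9)    #  1.2 us at 10Gbps

    # Store-and-forward: host -> switch, then switch -> server.
    print(2 * d1g, 2 * d10g)   # ~24 us vs ~2.4 us: saves ~22 us of RTT
    # Early forwarding: the delay counts only once per packet.
    print(d1g, d10g)           #  12 us vs  1.2 us: saves ~11 us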

To over-fill a 10Gbps link with a single TCP stream you need to set the
window size to about 630KB (a little less would do probably, but not much),
assuming the same RTTs as above.  That is, assuming your CPUs can keep up.
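
That window size is just the bandwidth-delay product, and the same
calculation is easy to sketch (again assuming the roughly 500us RTT
guessed above; a longer real RTT pushes the number up towards the
630KB figure):

    # Window needed to keep a link full: bandwidth times round-trip time.
    def window_needed_kb(link_bps, rtt_seconds):
        return link_bps * rtt_seconds / 8 / 1024

    print(window_needed_kb(10e9, 500e-6))   # ~610 KB for a 10Gbps link
    print(window_needed_kb(1e9, 500e-6))    #  ~61 KB for a 1Gbps link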

You can mostly ignore the recommendations to turn on hardware checksums,
etc; those are going to make a noticeable difference only when CPU
processing time is the bottleneck, which you already determined is not
the case here (they might make a bigger difference at 10Gbps speeds
though).  For production use you probably want those enabled, so your
systems have more CPU to use for other work, but for benchmarks like
this, as you detected, you're barely going to see a difference.  The
turnaround time can be fractionally less, which will decrease the RTT by
a few microseconds, which will get you just slightly faster - as you
measured - but nothing like the factor of two that doubling the window
size achieves, or would achieve if the available link bandwidth permitted
it.  The UDP rates show that you're going to hit a wall around 870Mbps,
which is where the link is full: add TCP, IP, and link level headers
(don't forget the TCP options needed for window scaling) and inter-packet
gaps, and an 870Mbps data rate will be consuming close enough to the
entire 1Gbps that is available.
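
To put a rough number on that overhead, here's a sketch; it assumes
full-size 1472-byte UDP datagrams and the standard per-packet costs
(8 byte preamble, 14 byte ethernet header, 4 byte FCS, 12 byte
inter-frame gap, 20 byte IP header, 8 byte UDP header); smaller packets
or fragmentation make the overhead proportionally worse:

    # Wire bandwidth consumed by a given UDP data rate.
    payload = 1472                             # data bytes per packet
    wire = payload + 8 + 20 + 14 + 4 + 8 + 12  # bytes on the wire
    print(870e6 * wire / payload / 1e6)        # ~909 Mbps of the 1Gbps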

kre


