Re: [SOLVED]Re: Only 8 MB/sec write throughput with NetBSD 5.1 AMD64
On 10/9/2011 2:12 PM, Thor Lancelot Simon wrote:
On Sun, Oct 09, 2011 at 12:19:59PM -0700, Tony Bourke wrote:
changing MTU won't help really.
Do you have data on which to base this conclusion?
It's long been assumed that the way to get better throughput is to enable
jumbo frames (some switches support frames up to 12,000 bytes, although
jumbo frames are normally limited to 9,000 bytes). While that may have been
true in the past (to tell the truth, I'd always parroted the assumption
myself, never having tested it or seen any evidence), it doesn't seem to
be true today.
Here are a couple of benchmarks to support that conclusion:
Jason Boche tested 9,000-byte frames with iSCSI, NFS, and vMotion. The
results were mixed, but none of them showed jumbo frames having a
huge impact (the best improvement was 7% on one metric; the rest
were nearly even).
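Those benchmark numbers line up with simple framing arithmetic. A quick sketch (my own back-of-the-envelope calculation, not from either report) of how much on-wire efficiency jumbo frames can possibly buy for bulk TCP traffic:

```python
# Per-frame on-wire overhead for Ethernet: 8 bytes preamble + 14 bytes
# header + 4 bytes FCS + 12 bytes inter-frame gap = 38 bytes, plus
# 20 bytes IPv4 and 20 bytes TCP headers inside each frame.
ETH_WIRE_OVERHEAD = 8 + 14 + 4 + 12
IP_TCP_HEADERS = 20 + 20

def tcp_efficiency(mtu):
    """Fraction of on-wire bytes that are TCP payload at a given MTU."""
    payload = mtu - IP_TCP_HEADERS
    wire = mtu + ETH_WIRE_OVERHEAD
    return payload / wire

std = tcp_efficiency(1500)    # ~94.9% payload
jumbo = tcp_efficiency(9000)  # ~99.1% payload
print(f"1500 MTU: {std:.1%}, 9000 MTU: {jumbo:.1%}, "
      f"ceiling on gain: {jumbo / std - 1:.1%}")
```

The wire-efficiency ceiling works out to roughly 4-5%, which is consistent with the mixed, single-digit results above; any gain beyond that would have to come from reduced per-packet CPU overhead, not the wire itself.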
NetApp and VMware also publish joint performance reports comparing various
protocols. They tested NFS and iSCSI with and without jumbo frames
(jumbo frames actually hurt iSCSI performance a bit). Overall, not a
huge difference.
it would also cause lots of other problems (like fragmentation over your network).
No. That is what path MTU discovery is for.
Path MTU discovery doesn't work reliably, though. All it takes is one site
or admin along the path disabling ICMP.
Given that there's no real performance gain, and the problems that
MTU mismatches can potentially cause, more and more network
administrators are choosing to leave jumbo frames off these days.
and every device on your deity would need to be set for the same MTU.
I am not sure what deity is involved with this, except perhaps the god
of unclear thinking (how many times have I tried to get him out of my
personal pantheon? It's not working).
Sorry, I typed that on my phone while in a cab. Should have said "device".
Plus you should be able to push way more bandwidth without it.
With a laptop disk drive as the data sink?
He'd been able to get higher throughput on Linux and Windows 7.
Higher MTUs used to be a way to increase performance for things like
iSCSI, but that's not really the case anymore, even with 10 Gigabit
Ethernet. So even 10 Gigabit environments often choose not to do jumbo frames.
You're right, my math was off. Still, PCI 32-bit can't max out a 1
Gigabit link, especially considering it's a shared bus.
If your network card is PCI, your bandwidth will be limited to about
200-250 megabits per second (which is about 25-35 MBytes/s) because
of the bus speed.
The claim is false. Even giving a conservative 80MByte/sec estimate for
33MHz 32-bit PCI (the theoretical maximum is 132MByte/sec), and allowing
for some adapter overhead, that's well north of 500 megabits per second
-- and, in fact, 400-500 megabits per second can easily be achieved with
such adapters, under NetBSD. Clearly, either 64-bit or 66MHz PCI (never
mind 64-bit, 133MHz PCI-X) is sufficient to fill a gigabit link or come
very, very close (and indeed it is easy to reproduce this result with
real hardware under NetBSD as well).
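The arithmetic behind that rebuttal is easy to check (a quick sketch of the same numbers; treating 80 MB/s as a conservative achievable figure, per the message above):

```python
# 32-bit PCI at 33 MHz: theoretical peak bus bandwidth.
pci_peak_bytes = 32 // 8 * 33_000_000      # 4 bytes/cycle * 33 MHz = 132 MB/s

# Conservative real-world estimate from the message above: 80 MB/s.
conservative_bytes = 80_000_000
megabits = conservative_bytes * 8 / 1e6    # 640 Mbit/s
print(f"{megabits:.0f} Mbit/s")            # well north of 500 Mbit/s
```

So even the conservative figure is more than double the 200-250 Mbit/s the original claim asserted.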
PCIe will handle full duplex 1 gigabit no problem.
The claim is true.
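For comparison, the PCIe numbers (my own figures, not from the thread): even a single first-generation PCIe lane carries 250 MB/s in each direction independently, so full-duplex gigabit fits with room to spare:

```python
# PCIe 1.0: 2.5 GT/s per lane with 8b/10b encoding, so 2 Gbit/s of
# usable bandwidth per direction per lane -- and each direction is
# independent (full duplex), unlike the shared PCI bus.
lane_bytes_per_dir = int(2.5e9 * 8 / 10 / 8)   # 250 MB/s per direction
gigabit_bytes = 1_000_000_000 // 8             # 125 MB/s per direction
print(lane_bytes_per_dir >= gigabit_bytes)     # one x1 lane already suffices
```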