On Sat, Dec 07, 2013 at 12:38:42AM +0100, Johnny Billquist wrote:
> You know, you might also hit a different problem, which I have had on
> many occasions.
> NFS using 8k transfers saturating the ethernet on the server, making the
> server drop IP fragments. That in turn forces a resend of the whole 8k
> after an nfs timeout. That will totally kill your nfs performance.
> (Obviously, even larger nfs buffers make the problem even worse.)

That wasn't the problem in this case since I could see the very delayed
responses.
That is a big problem; I've NFI why i386 defaults to very large transfers.
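The fragment-drop failure mode above is easy to see with some back-of-the-envelope arithmetic. This sketch assumes a standard 1500-byte Ethernet MTU and UDP transport, and ignores UDP/RPC header bytes; the drop rates are invented for illustration, not from this thread:

```python
# Illustrative fragmentation arithmetic for UDP NFS transfers.
# Assumes a 1500-byte Ethernet MTU and a 20-byte IPv4 header (no options);
# UDP/RPC header overhead is ignored to keep the numbers simple.

MTU = 1500
IP_HDR = 20
FRAG_PAYLOAD = MTU - IP_HDR   # 1480 bytes of IP payload per fragment

def fragments_for(rpc_bytes):
    """IP fragments needed to carry one UDP datagram (ceiling division)."""
    return -(-rpc_bytes // FRAG_PAYLOAD)

def p_datagram_lost(p_frag_drop, rpc_bytes):
    """Losing ANY one fragment loses the whole datagram, so the client
    must resend the entire RPC after an nfs timeout."""
    n = fragments_for(rpc_bytes)
    return 1 - (1 - p_frag_drop) ** n

print(fragments_for(8192))                      # 6 fragments per 8k transfer
print(round(p_datagram_lost(0.01, 8192), 3))    # 0.059: ~6% of RPCs lost at 1% frame drop
print(round(p_datagram_lost(0.01, 32768), 3))   # 0.206: far worse with 32k transfers
```

This is why larger transfer sizes amplify the problem: every extra fragment is another chance to throw away, and retransmit, the whole datagram.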
> Even with an elevator scan algorithm and four concurrent nfs clients,
> your disk operations will complete within a few hundred ms at most.

This was all from one client. I'm not sure how many concurrent NFS
requests were actually outstanding - it was quite a few.
I remember that the operation was copying a large file to the nfs server;
the process might have been doing a very large write of an mmapped file.
So the client could easily have a few MB of data to transfer - and be
trying to do them all at once.
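A quick sketch of that arithmetic: at 8k per write RPC, a few MB of dirty pages turns into hundreds of outstanding requests. The 4 MiB dirty size and 10 ms per-op disk time below are assumptions for illustration:

```python
# Rough count of write RPCs a client queues when flushing a large
# mmapped file over NFS. The 8k wsize matches the transfer size under
# discussion; the 4 MiB dirty size and 10 ms disk time are assumed.

WSIZE = 8 * 1024          # bytes carried by one NFS write RPC
DISK_MS = 10              # assumed per-op service time if writes hit the disk

def write_rpcs(dirty_bytes):
    """RPCs needed to flush dirty_bytes (ceiling division)."""
    return -(-dirty_bytes // WSIZE)

n = write_rpcs(4 * 1024 * 1024)
print(n)                      # 512 write RPCs offered to the server at once
print(n * DISK_MS / 1000.0)   # ~5 s if they serialize on the disk
```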
Thinking further, multiple nfsds probably help when there are a lot more
reads than writes - reads can be serviced from the server's cache.
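That asymmetry can be illustrated with a toy model of nfsd workers draining a request queue. The service times here are invented (1 ms for a cached read, 50 ms for a write that hits the disk), purely to show the shape of the effect:

```python
# Toy model: k nfsd workers draining a FIFO of requests. Cached reads
# are fast; a write ties up a worker for a full disk service time.
# Service times are invented for illustration, not measured.

READ_MS, WRITE_MS = 1, 50

def drain_time_ms(requests, workers):
    """Time for `workers` nfsd threads to drain `requests`, assigning
    each request to the earliest-free worker in arrival order."""
    free_at = [0] * workers
    for req in requests:
        i = free_at.index(min(free_at))
        free_at[i] += WRITE_MS if req == "w" else READ_MS
    return max(free_at)

workload = ["w"] + ["r"] * 20            # one write ahead of 20 cached reads
print(drain_time_ms(workload, 1))        # 70: every read waits behind the write
print(drain_time_ms(workload, 4))        # 50: extra nfsds let reads overtake it
```

With a single daemon, one slow write stalls every cached read queued behind it; with several, the reads complete from cache while one worker is stuck on the disk.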