Subject: Re: Raising NFS parameters for higher bandwidth or "long fat pipe"
To: Manuel Bouyer <bouyer@antioche.eu.org>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-net
Date: 12/07/2003 17:17:33
In message <20031207133149.GA1640@antioche.eu.org>, Manuel Bouyer writes:

>I'd like to see this. I suspect solaris is doing something similar.

I haven't measured Solaris. I know Linux can fire off up to 1 Mbyte
(256 * 4k pages) of writes before it blocks waiting for the server
to respond.
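
If you want to watch that write-behind for yourself, one way is to run
tcpdump on the NFS port while a large write is in flight, and count the
WRITE calls that go out before the first reply comes back. A minimal
sketch (the interface name is a placeholder; use whatever the client has):

	# fxp0 is a placeholder interface name
	tcpdump -n -i fxp0 port 2049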


>An Ultra/5 running solaris7 can read/write at full 100Mb/s to a NetBSD NFS
>server. NetBSD 1.6.x boxes (i386 and sparc64) can't do more than 70Mb/s
>reading and even less writing (I didn't look if current is better in this
>area recently).

If the clients do large I/Os, and you have enough nfsio threads on the
clients, I would expect write throughput to be higher than read
throughput.  Were your read results for the first read of a file after
a mount, or for a re-read? I'm sure you know client-side caching means
a re-read often doesn't touch the server, but it's easy to miss.  I
usually unmount, remount, then do

	time dd bs=128k if=... of=/dev/null

to estimate first-read throughput.
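
Spelled out, a minimal first-read sequence; the server, export, and
mount-point names here are made up:

	# server:/export and /mnt/nfs are placeholder names
	umount /mnt/nfs
	mount server:/export /mnt/nfs
	time dd bs=128k if=/mnt/nfs/bigfile of=/dev/null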

You could also try iozone:

  On the client, do: sysctl -w vfs.nfs.iothreads=32
  On the server, make sure there are at least 32 nfsd threads
        (fire off more copies of "nfsd -tun 20" if necessary)

  Then do: iozone -Q -Racb 

This will take a long time, and produce lots of output; you probably
want to redirect stdout/stderr and peruse them later.  Again, for NFS, the
read data may reflect NFS-client-side caching, not actual I/O.
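
Putting those steps together, a minimal end-to-end run on the client,
reusing the iozone invocation above (the output file name is arbitrary):

	# bump the async NFS I/O threads on the client
	sysctl -w vfs.nfs.iothreads=32
	# run the full matrix, capturing stdout/stderr for later
	iozone -Q -Racb > iozone.out 2>&1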