Subject: Raising NFS parameters for higher bandwidth or "long fat pipe"
To: None <tech-net@netbsd.org>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 11/25/2003 17:37:13
[bcc'ed to tech-kern for the nfs-as-kthread discussion]

I frequently overload NetBSD-current NFS servers in configurations
where (for example) single servers have several FC controllers, dozens
of 10,000 RPM FC disks, and tens-to-hundreds of client threads banging
on the disks.  (Think SPECsfs, for those of you familiar with that.)


The NFS parameters in -current aren't well-suited to that workload: in
particular, the compiled-in upper bounds on nfsd threads, and on
(client-side) readahead, are too low.  I find 64 nfsds are not nearly
sufficient; I usually run 128 or 256.  I'd also like the amount of
read-ahead to be sufficient for latencies in the 10ms to 20ms range.
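
For concreteness, the read-ahead needed to keep a pipe full is
roughly the bandwidth-delay product divided by the read size.  At the
high end of that latency range, plugging in my numbers (a rough
back-of-the-envelope figure, not a measurement):

	readahead >= (bandwidth * latency) / read size
	           = (100 Mbyte/sec * 20 ms) / 32 Kbyte
	           = 2 Mbyte / 32 Kbyte
	           = 64 outstanding reads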

I don't propose to change any of the default values, but I would like
to raise some compiled-in upper bounds (a sketch of the proposed
edits follows the list):

usr.sbin/nfsd/nfsd.c: raise MAXNFSDCNT to 128
	      (still on the low side for SPECsfs runs)

sys/nfs/nfs.h: raise NFS_MAXRAHEAD from 4 to 32
	      (32 requests * 32k reads comes to 1 Mbyte, barely
	      enough to fill a 10ms latency at 100 Mbyte/sec)
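
Concretely, the proposed edits would amount to something like the
following (a sketch only; the existing MAXNFSDCNT value of 64 is
inferred from the discussion above):

	/* usr.sbin/nfsd/nfsd.c */
	#define	MAXNFSDCNT	128	/* was 64 */

	/* sys/nfs/nfs.h */
	#define	NFS_MAXRAHEAD	32	/* was 4 */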


I'm also wondering how much we could gain by turning worker nfsds
(i.e., not the master process which listens for inbound TCP
connections) into kthreads.  Seems like the same approach used for
nfsiods would also work here; and (on some arches at least) switching
from one kthread to another should not require an MMU context switch.
Given a hundred-odd nfsds and a machine not doing much else, the
savings would add up.
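
To make that concrete, here's a minimal sketch of what spawning the
workers as kthreads might look like, assuming the kthread_create1()
interface the nfsiods use; nfsd_worker() and nfsd_spawn_workers() are
made-up names for illustration, and the worker body is elided:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/proc.h>
	#include <sys/kthread.h>

	/* Hypothetical per-worker loop; stands in for the service
	 * loop that currently runs in each forked nfsd process. */
	static void
	nfsd_worker(void *arg)
	{

		for (;;) {
			/* dequeue an RPC from the shared socket(s),
			 * service it, sleep when idle */
		}
	}

	/* Spawn n worker kthreads, much as nfsiods are started. */
	void
	nfsd_spawn_workers(int n)
	{
		struct proc *p;
		int i, error;

		for (i = 0; i < n; i++) {
			error = kthread_create1(nfsd_worker, NULL, &p,
			    "nfsd%d", i);
			if (error)
				printf("nfsd: kthread_create1: %d\n",
				    error);
		}
	}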

Any comments on that?  (Jason?)