Subject: Re: Raising NFS parameters for higher bandwidth or "long fat pipe"
To: None <tech-net@NetBSD.org>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-net
Date: 12/06/2003 11:18:38
In message <C6A36295-245A-11D8-A3C9-000A957650EC@wasabisystems.com>,
Jason Thorpe  writes:

[Jonathan proposes more NFS client-side read-ahead]

The real gotcha is that, as far as I can tell, NetBSD's NFS client code
does at most one read-ahead operation.  I tried an (old) iozone over
an NFS mount from a NetBSD 1.6.1 client to a NetBSD-current 1.6GZF
server.  Both machines were 500MHz-class P3s with Tulip interfaces.
For both TCP and UDP mounts, I observed about a 3:1 ratio of
write:read throughput.  Using "top" on the server showed only one or
two nfsd's busy.

Contrast sys/nfs/nfs_bio.c:nfs_bioread() between NetBSD and FreeBSD-4:
the FreeBSD version has a loop that fires off up to nmp->nm_readahead
extra reads.  (The code fragment later in nfs_bioread() that fires off
a single read-ahead is common to NetBSD and FreeBSD 4.x.)

The performance penalty leaves me very unhappy.  What's the collective
opinion on incorporating a FreeBSD-style NFS read-ahead loop?

[kthread "process pools" for nfsd/nfsiod...]

It's almost trivial to set up code to create kthreads running nfssvc.
The real work is in adding a sysctl knob that can _reduce_ the number
of kthreads (you need a list, or some other way, to find the threads
and kill them).