Subject: Re: NFS questions ..
To: Stephen M. Jones <smj@cirr.com>
From: John Franklin <franklin@elfie.org>
List: port-alpha
Date: 01/08/2003 20:25:19
On Wednesday, Jan 8, 2003, at 02:52 US/Eastern, Stephen M. Jones wrote:
> Hi .. Besides MP hangs I've experienced, another that I've run into
> that just about everyone else has is NFS hangs.  Maybe this one could
> be quick and easy to sort out.
>
> I have a few questions ..
>
> 1) Would an NFS server running on a machine built on 1.5.2 (w/ a 1.5.4
>    kernel) have compatibility issues with a client running 1.6?

It may get swamped, but there are no compatibility issues.  See below.

> 2) How do you decide how many servers to spawn?  number of exported
>    filesystems or number of clients?

Number of independent I/O sessions, which is roughly the number of
simultaneous user tasks across all clients that you expect to be using
the machine.  For example, you can keep all of your NFS servers busy if
you NFS mount /usr/pkgsrc, have built but never cleaned a handful of
packages, and then run this on the client:

foreach i (/usr/pkgsrc/*/*/work)
    rm -rf $i &
end

This is effectively a parallel make clean for all of pkgsrc and an NFS
server endurance test all in one.  Run top on the server while doing
this and you'll see all of your nfsd processes getting some chip time.

If this is a dedicated NFS server for multiple clients, there's little
reason not to spawn off the maximum number of servers, which is 20.
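
If you start nfsd from rc.conf, something along these lines should do it
(the flag string is just an example; check nfsd(8) on your release for
the exact options):

nfs_server=YES
nfsd_flags="-tun 20"    # serve TCP and UDP with 20 server processes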

> 3) are there any parameters that can be tweaked?  I know of
>    vfs.nfs.iothreads (should this correspond to the number of nfsd
>    servers running?)

On the server, what do you have "options BUFCACHE" set to in the kernel
config file?  In 1.5.x this controls both the number of network buffers
and the number of metadata buffers available in the kernel.  Run
"systat bufcache" and you'll probably see over half of the buffers
assigned to / and most of the rest to your busiest export.  Doing an
ls -lR /myremotemount on one of the clients will show the /myremotemount
volume trying to get more metadata buffers, but they keep getting
swallowed back by /.  Those buffers charged to / are NFS (and other?)
network buffers.

By default BUFCACHE is 5%.  Try setting options BUFCACHE=30.  This sets
the BUFCACHE to use 30% of available memory.  The kernel can only use
128M of "BUFCACHE", so if 30% of your memory exceeds that, crank the
value down so it falls under 128M.
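
In the config file that's a one-line change, e.g. for a box where 30% of
RAM stays under the 128M cap (the file is whatever config you build your
kernel from):

options         BUFCACHE=30     # percent of RAM used for the buffer cache

then rebuild with config(8) and make as usual.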

Using softdeps (if you don't already) on the server can also improve
system performance.
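
Softdeps are enabled per filesystem with the softdep mount option (plus
"options SOFTDEP" in the kernel config), so a hypothetical line in the
server's /etc/fstab would look like:

/dev/sd0e   /export   ffs   rw,softdep   1 2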

> 4) On the client side, I'm not sure if this is documented clearly, but
>    how can the options in mount_nfs be used?  (fstab's manpage gives no
>    clues)  If these are settable somewhere, I would guess that's why
>    setting the server to only -t would cause the clients to scratch
>    their heads (default behaviour is to use UDP).

UDP is fine for local switched networks such as yours.  TCP has too much
overhead when packet loss is low.  I use it here on a 100Base-T network
with no problems.
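
As for the fstab question: the mount_nfs flags go straight into the
options field of the entry, so a plain UDP mount from your server might
look like this (the mount point and sizes here are just examples; leave
out -T and you get the UDP default):

sdf1:/sys   /sys   nfs   rw,-i,-r=8192,-w=8192   0 0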

>    fstab options?  rw,-D5,-T,-b,-i,-r=8192,-w-8192
>
> The problem I'm seeing is sometimes an NFS server will not be responding
> and then come back .. sometimes the 1.6 client hangs after 3 or so days
> of uptime while the 1.5.x client keeps going .. sometimes I see nfsd
> send error 55 on the server.. many times I see
> nfs server sdf1:/sys: not responding
> nfs server sdf1:/sys: is alive again

This is a symptom of BUFCACHE being too low.  The server is being
starved for metadata buffers and so it drops requests.  To the client
this appears as "not responding" then nearly immediately "is alive
again" once some data has been written to disk and the buffers for it
are freed.

> uvn_attach: blocked at 0x0xfffffc0020b41ba8 flags 0x4
> uvn_attach: blocked at 0x0xfffffc000b926de0 flags 0x4

This I've never seen before.

jf
-- 
John Franklin
franklin@elfie.org
ICBM: 35°43'56"N 78°53'27"W