Subject: vnode usage and reclamation - feels like deadlocking
To: None <tech-kern@netbsd.org>
From: Stephen M. Jones <smj@cirr.com>
List: netbsd-users
Date: 01/16/2004 13:21:06
I've found some interesting numbers on vnode usage on four separate
machines with different tasks.  The sampling was done over the course
of 30 minutes.  (A sketch of the kind of sampler that could produce
these numbers appears after the per-machine figures below.)
First machine is a fileserver with 5 active filesystems, 407 websites
and 289 active users in the past 16 hours.  (It was rebooted 16 hours
ago, not crash related, and can easily do 40 days of uptime without
any problems.)
kern.maxvnodes = 10764 (unmodified)
low: Fri Jan 16 18:27:17 UTC 2004 10755 active vnodes
high: Fri Jan 16 18:33:18 UTC 2004 10761 active vnodes
Second machine has an uptime of 91 days with 1 active filesystem, 63
websites and 1,428 ftp sessions in the past 4 days.
kern.maxvnodes = 17638
low: Fri Jan 16 18:27:03 UTC 2004 17630 active vnodes
high: Fri Jan 16 18:16:02 UTC 2004 17636 active vnodes
Third machine is an NFS client mounting 10 remote filesystems and one
local; it serves 1,678 websites and has 60 active users with ~500 login
sessions in the past 2 hours.  It has only been up for 2 hours.
kern.maxvnodes = 21589
low: Fri Jan 16 18:10:30 UTC 2004 13174 active vnodes
high: Fri Jan 16 18:39:33 UTC 2004 18953 active vnodes
now: Fri Jan 16 18:53:41 UTC 2004 21453 active vnodes
The usage on the NFS client is considerably higher than on the other two
machines.  What I usually see is a huge rush from the moment it boots,
going from a handful of vnodes in use to being right at the edge of the
kern.maxvnodes limit within 2 to 3 hours.
Here is a fourth machine that has been up for 11 hours, with 55 active
users, 737 login sessions, 1,304 websites and the same 10 file systems
mounted.
kern.maxvnodes = 21589
21563 active vnodes
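For reference, here is a minimal sketch of the kind of sampler that could
produce the low/high numbers above (not the exact tool that was run).  It
reads kern.maxvnodes through the documented sysctl(3) interface and the
active count through kvm(3); the kernel symbol name "_numvnodes" and its
"long" type are assumptions on my part about vfs_subr.c, so treat it as a
sketch only.  It needs -lkvm and read access to /dev/kmem.

    /*
     * Sketch of a vnode-usage sampler.  kern.maxvnodes comes from the
     * documented sysctl(3) interface; the active count is read out of
     * the running kernel with kvm(3).  The symbol name "_numvnodes"
     * and its "long" type are assumptions -- adjust for your kernel.
     * Build with -lkvm; needs read access to /dev/kmem.
     */
    #include <sys/param.h>
    #include <sys/sysctl.h>

    #include <err.h>
    #include <fcntl.h>
    #include <kvm.h>
    #include <limits.h>
    #include <nlist.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    int
    main(void)
    {
        struct nlist nl[] = { { "_numvnodes" }, { NULL } };
        int mib[2] = { CTL_KERN, KERN_MAXVNODES };
        char errbuf[_POSIX2_LINE_MAX], *ts;
        int maxvnodes;
        long numvnodes;
        size_t len = sizeof(maxvnodes);
        kvm_t *kd;
        time_t now;

        /* The configured limit, i.e. what sysctl kern.maxvnodes prints. */
        if (sysctl(mib, 2, &maxvnodes, &len, NULL, 0) == -1)
            err(1, "sysctl kern.maxvnodes");

        /* The current number of allocated vnodes, read from the kernel. */
        kd = kvm_openfiles(NULL, NULL, NULL, O_RDONLY, errbuf);
        if (kd == NULL)
            errx(1, "kvm_openfiles: %s", errbuf);
        if (kvm_nlist(kd, nl) != 0)
            errx(1, "numvnodes symbol not found");
        if (kvm_read(kd, nl[0].n_value, &numvnodes, sizeof(numvnodes))
            != sizeof(numvnodes))
            errx(1, "kvm_read: %s", kvm_geterr(kd));
        kvm_close(kd);

        time(&now);
        ts = ctime(&now);
        ts[strlen(ts) - 1] = '\0';  /* strip ctime(3)'s trailing newline */
        printf("%s  %ld of %d vnodes in use\n", ts, numvnodes, maxvnodes);
        return 0;
    }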
The symptoms of the vnode deadlock are similar to those seen when maxuproc
is reached: the user cannot fork a new process and usually cannot even log
in until a vnode is reclaimed and made available.  This wait can range from
a few seconds to what seems like forever as the requests for new vnodes
pile up.
On the less heavily used hosts above, the number of vnodes never seems to
climb as high.  In a previous message I stated that I had tried different
values for maxvnodes, but the outcome always seems to be the same: they are
used up to the point of exhaustion, at which point the seeming deadlock
pops up.
I do know that once vnodes are put into use, they are *only* reclaimed
when the system determines the resource is no longer needed .. otherwise
they remain in use for the duration of uptime.
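To make that concrete, here is a toy model (plain userland C, not kernel
code) of how I understand the allocate-versus-recycle decision: below
kern.maxvnodes a fresh vnode is created, at the limit only an inactive
vnode can be recycled, and if everything is still referenced the requester
just has to wait.  The names and numbers in it are made up purely for
illustration.

    /*
     * Toy model (not kernel code) of the allocate-vs-recycle behaviour:
     * vnodes are created until kern.maxvnodes is reached, after which a
     * request can only be satisfied by recycling an *inactive* vnode;
     * if every vnode is still referenced, the requester is stuck.
     */
    #include <stdio.h>

    #define MAXVNODES 8             /* stand-in for kern.maxvnodes */

    static int numvnodes;           /* vnodes allocated so far */
    static int inactive;            /* allocated but no longer referenced */

    static const char *
    get_vnode(void)
    {
        if (numvnodes < MAXVNODES) {    /* below the limit: allocate fresh */
            numvnodes++;
            return "allocated new vnode";
        }
        if (inactive > 0) {             /* at the limit: recycle an idle one */
            inactive--;
            return "recycled inactive vnode";
        }
        return "stuck: every vnode still referenced, must wait";
    }

    int
    main(void)
    {
        int i;

        for (i = 0; i < 10; i++)        /* ten requests against a table of 8 */
            printf("request %d: %s\n", i, get_vnode());
        inactive = 2;                   /* pretend two vnodes went inactive */
        printf("after releases: %s\n", get_vnode());
        return 0;
    }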
Another interesting thing to note is that this only seems to be an issue
with the NFS clients and not the server.
The server:
kern.maxvnodes = 65536
65524 active vnodes
I do have /etc/sysctl.conf setting kern.maxvnodes to 65536.  However, the
server seems to have no problem managing its vnodes and in fact just
finished 37 days of uptime on a 1.6.2rc3 kernel (please note that I had
kern.maxvnodes set to 32768 during that 37-day run).
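For completeness, the /etc/sysctl.conf entry just sets the value at boot;
the same thing can be done at run time through the documented sysctl(3)
call.  A minimal sketch, assuming root privileges:

    /*
     * Sketch: raise kern.maxvnodes at run time, which is all the
     * kern.maxvnodes=65536 line in /etc/sysctl.conf does at boot.
     * Uses only the documented sysctl(3) interface; run as root.
     */
    #include <sys/param.h>
    #include <sys/sysctl.h>

    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        int mib[2] = { CTL_KERN, KERN_MAXVNODES };
        int oldval, newval = 65536;
        size_t len = sizeof(oldval);

        /* Fetch the old value and install the new one in a single call. */
        if (sysctl(mib, 2, &oldval, &len, &newval, sizeof(newval)) == -1)
            err(1, "sysctl kern.maxvnodes");
        printf("kern.maxvnodes: %d -> %d\n", oldval, newval);
        return 0;
    }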
Some questions .. if an NFS client has a decent amount of memory (this is
the third host listed above):
load averages: 2.91, 3.30, 3.40 19:09:51
274 processes: 1 runnable, 272 sleeping, 1 on processor
Memory: 562M Act, 287M Inact, 2696K Wired, 8896K Exec, 612M File, 32M Free
Swap: 2048M Total, 2048M Free
What would be a good value for kern.maxvnodes?  Is that even an issue?
Can it go beyond 65536?  (Should I even be setting it?)  Does it have a
maximum value?  I realise a vnode represents all sorts of objects (files,
sockets, symlinks .. fifos), so usage will vary if a machine is totally
pigged out; in that case, should you set kern.maxvnodes high enough to
cover all of your potential resources?  Is reclamation done any
differently on an NFS client, or for vnodes that map to NFS objects?
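To put those limits next to the memory on the box, something like the
quick sketch below could be used.  It relies only on the documented
sysctl(3) MIBs; the int-sized hw.physmem is, I believe, only trustworthy
below 2 GB of RAM, and the vnodes-per-MB figure it prints is just for
orientation, not a tuning recommendation.

    /*
     * Quick sketch to put kern.maxvnodes next to physical memory and
     * see how many vnodes per MB of RAM the current setting allows.
     * Uses the classic int-sized hw.physmem (only good below 2 GB).
     */
    #include <sys/param.h>
    #include <sys/sysctl.h>

    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        int mib[2], maxvnodes, physmem, mb;
        size_t len;

        mib[0] = CTL_KERN; mib[1] = KERN_MAXVNODES;
        len = sizeof(maxvnodes);
        if (sysctl(mib, 2, &maxvnodes, &len, NULL, 0) == -1)
            err(1, "kern.maxvnodes");

        mib[0] = CTL_HW; mib[1] = HW_PHYSMEM;
        len = sizeof(physmem);
        if (sysctl(mib, 2, &physmem, &len, NULL, 0) == -1)
            err(1, "hw.physmem");

        mb = physmem / (1024 * 1024);
        printf("%d vnodes allowed, %d MB RAM, %.1f vnodes/MB\n",
            maxvnodes, mb, (double)maxvnodes / mb);
        return 0;
    }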