Subject: Re: Simple thought...
To: Andrew Gillham <email@example.com>
From: Frank Kardel <Frank.Kardel@Acrys.COM>
Date: 06/11/2002 07:05:25
Andrew Gillham <firstname.lastname@example.org> said:
> On Mon, Jun 10, 2002 at 03:46:55PM -0000, Frank Kardel wrote:
> > With UBC there is also another interesting parameter that generates much
> > joy when increased:
> > sysctl -w kern.maxvnodes=64000 (for bigger machines, around 700MB RAM)
> > The data pages now hang off the vnodes in UBC, if I read the hints
> > right. The more vnodes you have, the more data can be tacked onto them.
> > Increasing the buffer cache only helps metadata and takes memory away from
> > vnode data caching. Try looking at the "vmstat buf" statistics page and
> > at the vnode data pages. My machine (768MB) usually caches around 500-600MB
> > of file system data. And more: lingering vnodes do not need to be filled
> > from the buffer cache. The effect is that kernel ld runs come completely
> > from memory (just-written .o files) and the buffer cache is only used for
> > moving metadata. Set me straight if I overlooked something here.
> What is a reasonable max for this? I've cranked it up repeatedly on a busy
> system and even at 700,000 (yes 700k) I ended up with 690k+ vnodes in use.
I haven't looked too deeply into that stuff so far, so some of the vnode
guys might be able to help out here.
Nevertheless I have seen some effects with large vnode caches. At least with
null-layer mounts, final unlinks seem to take ages before space is freed
(probably until the vnode is actively reclaimed). This can be quite annoying.
Maybe that should be fixed. The hint that this is the behavior: space freed
via a final unlink in the null-layer fs only shows up when someone does
something like a find ... -ls > /dev/null.
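As a sketch of the workaround hinted at above (the mount point here is a hypothetical placeholder, and whether the walk actually triggers reclamation depends on the null-layer behavior described):

```shell
# Walking the tree looks up every vnode in the null-layer mount, which can
# trigger reclamation of lingering vnodes so that the space from final
# unlinks finally becomes visible.
# DIR is a placeholder for the null-layer mount point (an assumption here;
# it defaults to /tmp only so the snippet runs anywhere).
DIR=${DIR:-/tmp}
find "$DIR" -ls > /dev/null
df "$DIR"    # check whether the freed space now shows up
```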
As for sizing that parameter: vnodes take up some memory (152 bytes for 1.6A -
compile the kernel with
options KMEMSTATS, run vmstat -m, and look for something like vnodepl).
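A back-of-the-envelope calculation of what the vnode cache itself costs, using the 152-byte figure above (that size is specific to 1.6A and is an assumption elsewhere; check the vnodepl line in vmstat -m on your own kernel):

```shell
# Rough wired-memory cost of the vnode pool at Andrew's setting.
# 152 bytes/vnode is the 1.6A figure quoted above (verify with vmstat -m).
maxvnodes=700000
vnode_size=152
echo "$((maxvnodes * vnode_size / 1024 / 1024)) MB for $maxvnodes vnodes"
```

That is roughly 100 MB of kernel memory before counting any data pages hanging off the vnodes, which is part of why 700k looks excessive.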
Making that parameter exceptionally big somewhat defeats the purpose of the
cache, as rarely used vnodes linger in memory without providing any
real benefit. That memory could be put to better use by other subsystems. So I'd
assume that 700k vnodes is a tad too much 8-).
I think that parameter is sized about right when the vnode page count reaches
maximum memory use in normal operation, and that depends a little on the
average file size of the commonly used files. So news servers could benefit
from a few more vnodes, as could src-file servers.
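To make the average-file-size point concrete, here is a rough sizing sketch. It assumes one vnode per cached file, which is a simplification on my part, not something the kernel guarantees:

```shell
# How many vnodes does it take to keep ~600 MB of file data cached?
# Assumption: one vnode per cached file; the 16 KB average file size is
# a made-up example value, measure your own workload.
cache_mb=600
avg_file_kb=16
echo "$((cache_mb * 1024 / avg_file_kb)) vnodes needed"
```

With small files (news spools, src trees) the count grows quickly, which is why those workloads benefit from a larger kern.maxvnodes; with large files a modest setting already covers the whole cache.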
I hope this gives some information about these parameters - maybe some of the
vfs guys could provide a mini tuning guide here (and add it to the system
documentation).
Acrys Consult GmbH