Subject: Re: LFS performance and kern.maxvnodes
To: Adam Hamsik <email@example.com>
From: Simon Burge <simonb@NetBSD.org>
Date: 09/11/2007 11:41:03
Adam Hamsik wrote:
> Today I found that it's a good idea to increase this value when the
> system is under heavier filesystem activity, e.g. a cvs update. With
> cvs update running, the system is slow in fs operations such as
> listing a directory. After increasing kern.maxvnodes to 65000 (I know
> this is probably too much ;) the system worked smoothly again.
> We should increase the default kern.maxvnodes to something bigger than
> 45692, or at least document this somewhere so people can tune their
> NetBSD better. Is the tuning chapter in our guide the correct place?
> If yes, I will write a patch.
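(For reference, the change described above can be made at runtime with
sysctl(8) and persisted via /etc/sysctl.conf; a sketch, where 65000 is
just the value from the quoted mail, not a recommendation:)

```shell
# Runtime change, as root on NetBSD:
#   sysctl -w kern.maxvnodes=65000
# Persistent across reboots, as a line in /etc/sysctl.conf:
kern.maxvnodes=65000
```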
I generally use "RAM MB * 128" for sizing maxvnodes, although on 512MB
RAM machines I tend to use 128k as a minimum as well.
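That rule of thumb ("RAM MB * 128", with a 128k floor) can be sketched
as a small shell calculation; ram_mb here is an assumed figure, adjust
it for the actual machine:

```shell
#!/bin/sh
# Rule of thumb from above: kern.maxvnodes = RAM_MB * 128,
# but never below 128k (131072).
ram_mb=1024                       # assumed: a 1GB RAM machine
vnodes=$((ram_mb * 128))          # 1024 * 128 = 131072
min=$((128 * 1024))               # 128k floor
[ "$vnodes" -lt "$min" ] && vnodes=$min
echo "suggested kern.maxvnodes: $vnodes"
# Apply (as root, on NetBSD) with: sysctl -w kern.maxvnodes=$vnodes
```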
Note that configuring extra vnodes will have an impact on memory usage.
For example, on my 1GB RAM laptop, "vmstat -mW" shows that the 4 pools
in use that are affected by maxvnodes (ncachepl, dino1pl, ffsinopl,
vnodepl - and this is dependent on which filesystems you use) are using
80MB of RAM (which is actually more than I thought it would be using!).
So there is some sort of trade-off in going too large...
Certainly I think it's worth an entry in the tuning chapter.