Subject: Re: Non-stupid measurements (Re: current got 'swappier'/slower.)
To: None <>
From: Paul Kranenburg <>
List: tech-kern
Date: 01/09/2004 12:16:38
> These are, of course, 4K/512 filesystems.  But until the frag size reaches
> the page size, the new code should do no _worse_ than the old code for this
> metadata-intensive workload, right?  That suggests to me that 15% is a
> pretty reasonable upper bound for systems like this; do you have systems
> that want more?

This is fine with me. I'll change the default vm.bufcache to 15.
Using this value will still leave the default maxvnodes setting (currently
computed to allocate 0.5% of physmem) too low.

Currently, I'm using 3KB as an estimate of the average buffer size, based on
observations on my machines. With that estimate, 0.8% of physmem should be
used to approximately maintain `maxvnodes > nbuf'.

Maybe it's better to couple these parameters directly, and just do

	`maxvnodes = <some_factor> * <estimated nbuf>'

in main(), and also come up with a way to adjust it dynamically when necessary.