Subject: Non-stupid measurements (Re: current got 'swappier'/slower.)
To: Paul Kranenburg <pk@cs.few.eur.nl>
From: Thor Lancelot Simon <tls@rek.tjls.com>
List: tech-kern
Date: 01/07/2004 15:37:17
[I sent Paul some very stupid, late-night commentary on this subject
 a couple of days ago, thus the message subject.] :-)

On Tue, Jan 06, 2004 at 03:31:03PM +0100, Paul Kranenburg wrote:
>
> The effect of the new buffer cache code on the vnode cache has already been
> noted. I would expect the drag you're experiencing to be largely eliminated
> if you'd revert the maximum memory use for the buffer to historic levels,
> i.e. to around 5% of physical memory.  To verify that this is the case,
> could you do
> 
> 	`sysctl -w vm.bufcache=5'
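
(For the archives: the knob and the related read-only nodes can be
checked before and after the change, assuming the vm.bufmem* sysctls
that went in with the new code are present:)

	sysctl vm.bufcache vm.bufmem vm.bufmem_hiwater	# current cap and usage
	sysctl -w vm.bufcache=5				# historic ~5% figure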

So, I did some measurements on anoncvs.netbsd.org (which is still running
the 1.6 branch).  It has 4K/512 filesystems, 12K MAXBSIZE, and 3.5GB of
memory, which is just enough to cache the file data for the repository.
Its load is currently artificially limited to 50 simultaneous users
because directory creation in /tmp by cvs diff thrashes the metadata cache.

Overall buffer memory utilization for the machine is 22%.  In the /
filesystem, which holds only the system executables, utilization is 99%
-- I assume that
the vnodes for the directories used here remain in cache, and the buffers
in question hold other FS metadata.  In /anon-root, which holds the
repository, utilization is 35%.  Using that 35% figure as a reasonable
upper bound for this sort of workload, I get (0.35 * 655MB) = 229MB used,
which works out to about 6.4% of the total 3.5GB of memory in the
machine, most of which is used to cache the file data in question.
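
(Back-of-the-envelope, with the figures above hardcoded:)

	echo '0.35 * 655' | bc -l		# 229.25MB of buffers in use
	echo '229.25 / 3584 * 100' | bc -l	# ~6.4% of physical memory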

The utilization in /anon-root/tmp is basically all single-frag directories,
so I would expect the new code to win even _more_ for this part of the
workload; it's probably the case that the new 15% default is plenty of
memory to let the current machine support many more users than it does
with the old code.
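
(A crude way to poke at that part of the workload, if anyone cares to;
the directory count is made up, but each mkdir should cost exactly one
single-frag directory buffer:)

	i=0
	while [ $i -lt 1000 ]; do
		mkdir /tmp/cvsdiff.$i
		i=$(($i + 1))
	done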

These are, of course, 4K/512 filesystems.  But until the frag size reaches
the page size, the new code should do no _worse_ than the old code for this
metadata-intensive workload, right?  That suggests to me that 15% is a
pretty reasonable upper bound for systems like this; do you have systems
that want more?
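
(dumpfs shows where a given filesystem sits relative to that limit; the
device name here is only an example:)

	# compare fsize to the page size; 4K/512 shows up as bsize/fsize
	dumpfs /dev/rwd0a | egrep 'bsize|fsize'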

-- 
 Thor Lancelot Simon	                                      tls@rek.tjls.com
   But as he knew no bad language, he had called him all the names of common
 objects that he could think of, and had screamed: "You lamp!  You towel!  You
 plate!" and so on.              --Sigmund Freud