Subject: CVS commit: src/sys/kern
To: None <source-changes@NetBSD.org>
From: Thor Lancelot Simon <tls@netbsd.org>
List: source-changes
Date: 01/08/2004 23:41:14
Module Name:	src
Committed By:	tls
Date:		Thu Jan  8 23:41:14 UTC 2004

Modified Files:
	src/sys/kern: vfs_bio.c

Log Message:
Add a pool_reclaim() call on the pool to which we just pool_put() a
buffer in buf_mrelease().  Without this, the pages are returned to the
relevant *pool*, but they are never made available for any other use
in the system.
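
The change itself is essentially a one-liner.  A minimal sketch of the
idea (the pool-array and index-helper names below are meant to match
the buffer memory pools in vfs_bio.c, but treat them as illustrative):

	void
	buf_mrelease(caddr_t addr, size_t size)
	{

		pool_put(&bmempools[buf_mempoolidx(size)], addr);

		/*
		 * New: push any now-idle, completely free pages in
		 * this pool back to the VM system, instead of leaving
		 * them cached in the pool indefinitely.
		 */
		pool_reclaim(&bmempools[buf_mempoolidx(size)]);
	}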

Now the backpressure on the physical size of the buffer cache through
the buf_drain() call in the pagedaemon works correctly.  If anything,
it may be a bit more aggressive than intended.  On my 256MB system,
with vm.bufcache set to the default 30% of physmem, a kernel with this
fix can do 5 simultaneous config/makedep/builds of different NetBSD
kernels in 1313 seconds; with the "traditional" buffer cache code it
requires 1320 seconds.  Running "find / -type d -exec ls -l {} \;" while
the build is running demonstrates that the backpressure is working
correctly: free memory oscillates slowly between close to none and
the UVM free-page target, and vmstat -m shows a large number of
releases for the buffer pools.
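
For reference, the backpressure path this completes runs from the
pagedaemon down into the buffer cache.  Roughly (simplified; the exact
bookkeeping in uvm_pdaemon.c differs):

	/*
	 * If we are below the free-page target, ask the buffer cache
	 * to shrink by (at least) the shortfall, in bytes.
	 */
	bufcnt = uvmexp.freetarg - uvmexp.free;
	if (bufcnt < 0)
		bufcnt = 0;
	buf_drain(bufcnt << PAGE_SHIFT);

Before this fix, buf_drain() shrank the buffers but the freed pages
stopped at the pools; now pool_reclaim() hands them back to UVM.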

For future work: how is "bufpl" memory returned to the system?  This
is not obvious to me (I must be looking in the wrong place).  Note that
buf_mrelease() is also called from brelse() in some cases.  Would it
be better to add a pool flag causing automatic release of full pages
as they become available (not fragmented)?  Jason Thorpe proposed this
and it seems more elegant than cleaning the _entire_ pool only upon
memory pressure.
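
A hypothetical shape for such a flag (PR_IMMEDRELEASE is an invented
name here; no such flag exists in pool(9) as of this commit):

	/* Opt in at pool creation time: */
	pool_init(&bmempools[i], size, 0, 0, PR_IMMEDRELEASE,
	    wchan, &pool_allocator_nointr);

	/*
	 * pool_put() would then return any page that becomes
	 * completely free straight to the backing allocator, rather
	 * than keeping it on the pool's idle-page list until
	 * pool_reclaim() or the drain hook runs.
	 */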

Greg Oster did a lot of the work of figuring this out.  Jason proposed
the use of pool_reclaim() as a way to fix it.


To generate a diff of this commit:
cvs rdiff -r1.104 -r1.105 src/sys/kern/vfs_bio.c

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.