Subject: Re: Page daemon behavior part N
To: Charles M. Hannum <root@ihack.net>
From: Jason R Thorpe <thorpej@zembu.com>
List: tech-kern
Date: 01/25/2001 11:26:02
On Thu, Jan 25, 2001 at 06:41:28PM +0000, Charles M. Hannum wrote:

 > It strikes me as highly improbable that will help much.  The real
 > problem is simply that too many writes are being cached, and forcing
 > other data out of memory.  We need to be more aggressive about
 > scheduling writebacks, especially when memory is low, and giving some
 > preference to writes over reads.  Merely inactivating pages slightly
 > faster isn't going to help much, because they're *already* getting
 > inactivated pretty quickly.
 > 
 > Interestingly, if you go further back, there was a hack to explicitly
 > lower the caching priority of full blocks written by FFS, using B_AGE.
 > This would have had the effect of causing writebacks to happen faster,
 > as well as being a slightly different way of accomplishing the
 > `immediately inactivate it' hack.

Immediately deactivating the pages does have the effect of causing them
to be cleaned sooner (if you don't, they have to be scanned on the active
list twice -- once to clear the reference bit, and once again to move
them to the inactive list).

There is definitely a problem with the async flushing in the file
system write calls, though.

If you look at ufs_readwrite.c:300, there is a block of code that cleans
the object.  For sync I/O, we sync each UBC_WINSIZE block inside the
loop.  For async I/O, a clean is only started once you've *crossed* a
64k boundary in the file.

Seems like we should move the clean block outside the loop, remember
the original offset, and then clean the entire range.  That may allow
us to build larger clusters for large writes, and also reduce the number
of sync I/Os for large sync writes.

-- 
        -- Jason R. Thorpe <thorpej@zembu.com>