Subject: Re: Limitations of current buffer cache on 32-bit ports
To: Chuck Silvers <chuq@chuq.com>
From: Thor Lancelot Simon <tls@rek.tjls.com>
List: tech-kern
Date: 07/24/2002 00:47:49
On Tue, Jul 23, 2002 at 09:41:21PM -0700, Chuck Silvers wrote:
> On Wed, Jul 24, 2002 at 12:33:06AM -0400, Thor Lancelot Simon wrote:
> > Okay, but if we continue to cache directories and other metadata by caching
> > the filesystem blocks they came from, on modern disks we will continue to
> > need on the order of 32K per.  Not ideal, no?
> 
> we'd only need to do that if the directory actually contains 32k of data.
> the buffers for directory data aren't always whole blocks... if the
> directory ends in a fragment, then the last buffer only need be large
> enough to contain the fragment.  so for 1k frag size, if the directory
> contains just 1k of data, then the buffer need only use 1k of virtual space.
> we'll probably want to round to a page, but that's still a vast improvement.

You can't have an FFS filesystem with a 32K blocksize and a 1K frag size;
FFS allows at most eight fragments per block.  The best you can do is a 4K
frag size, if the disk geometry forces you to use 32K blocks.

The buffer cache statistics on nbanoncvs are pretty instructive here.  We
reduced MAXBSIZE to 32K, but those buffers are running about 77% utilized.
We know there's not much but directories in them; and though I get confused
when trying to read the code, the statistics certainly seem to show that, at
present, we do in fact cache directories in full blocks (since otherwise, how
would the vast majority of our 400MB of buffer cache actually be in use?).

Couldn't the caching of directory data be decoupled from the physical
structure in which it lives on the disk?  To me, that would seem to offer
the best hope for efficient use of the cache, even in the presence of
stupid filesystems.

Thor