Subject: Re: Unified Buffer Cache 1st snapshot
To: None <>
From: Chuck Silvers <>
List: tech-kern
Date: 09/21/1998 07:10:37
Stefan Grefen writes:
> In message <>  Chuck Silvers wrote:
> > hi folks,
> > 
> [...]
> > the interfaces for the kernel to access the page cache are ubc_alloc()
> > and ubc_release() (ala segmap_getmap() and segmap_release()).
> > you get a mapping onto the part of the file the user wants to change,
> > do a uiomove() to copy the user's buffer in, and then release the mapping.
> > mappings are cached in an LRU fashion.
> Why copy and not map?  Mapping eliminates a copy and conserves memory.

yeah, that's another optimization that I forgot to mention.
if the user's write is properly aligned in address, size and file offset,
then we could transfer the page to the vnode and make the user's
page copy-on-write.  note that the required alignment may be greater
than the page size (e.g. on virtually-addressed cache architectures).

also, it'd be good to see whether, in the typical case, the process
just writes over the page again right away.  yes, this totally depends
on the application, but we have to pick a default somehow.

> > async i/o, readahead, clustering, partial-page stuff, or dynamic
> > 	buffer-cache resizing.  I have ideas, but haven't had time to
> > 	do anything about these yet.  most of this should be fairly
> > 	straightforward (except maybe partial pages).
> If we want to do async IO onto files the 'correct' way, we should
>     wire the region
>     set it COW (for the process only, if it was a write; else it stays
> 	as it is)
>     create a list of physical pageIDs
>     initiate the IO ...
>     at the end of the IO, unwire the page; if it was copied, free it
>     if not mapped elsewhere
> This scheme can be used for general async IO. (AIX has such a feature,
> as_att and friends.)
> I need that feature in the foreseeable future, and would implement it
> if there is consensus on how to do it.
> I did it on top of an SVR4 vm, and it's a pain if you can't change the source.

right, that's for async i/o to user space.  that'll be great to have too.
what I meant was that at this point I've disabled aio in general
since I didn't want to deal with it while I was getting the other parts
working.  once kernel aio is working again, implementing user-space aio
will mostly be a matter of writing that "set it COW" part...
that's a little different from most of the COW variations uvm has already.

it's not clear to me how much of async i/o and related activities
can be filesystem-independent (currently it's entirely within the fs).
I suspect that it'll mostly have to stay within the fs because nfs and
other distributed filesystems will require different behaviour than
local-disk-based filesystems, but it'll be interesting to see
how much we can push it towards filesystem-independence.