Subject: Re: Unified Buffer Cache 1st snapshot
To: None <email@example.com>
From: Stefan Grefen <firstname.lastname@example.org>
Date: 09/22/1998 10:47:06
In message <19980921202733.A22062@panix.com> Thor Lancelot Simon wrote:
> On Mon, Sep 21, 1998 at 11:25:33AM +0200, Stefan Grefen wrote:
> > In message <email@example.com> Chuck Silvers wrote:
> > > hi folks,
> > >
> > [...]
> > > the interfaces for the kernel to access the page cache are ubc_alloc()
> > > and ubc_release() (ala segmap_getmap() and segmap_release()).
> > > you get a mapping onto the part of the file the user wants to change,
> > > do a uiomove() to copy the user's buffer in, and then release the mapping.
> > > mappings are cached in an LRU fashion.
> > Why copy and not map, mapping eliminates a copy and conserves memory ???
> Mapping is only a win _sometimes_, on _some processors_. Not all MMUs are
> "fast"; many I/O's are "too small".
> Heck, why not just have uiomove() map instead of copy? Clearly it's not
> always appropriate.
In this case you have to page in all the pages anyway (either for copying or
for giving the page to the buffer cache), and you have to have all those
pages wired (either the memory from the buffer cache or the pages lent to
the buffer cache).
So we're talking about minimal overhead in the VM system and no additional
memory use.
On most architectures the COW should be cheap compared to a copy (if the page
is not touched by the process), and for sanity reasons you do that only
for aligned writes of at least max(pagesize, fs-blocksize).
This eliminates all those cases where the MMU operation would be too expensive.
If for a given port this is still too expensive, then always do the copy.
A port specific macro IO_MAP_OR_COPY(addr, size) can determine the
best way to do it.
The real challenge is the unwire/un-COW in I/O completion ...
Stefan Grefen Tandem Computers Europe Inc.
firstname.lastname@example.org High Performance Research Center
--- Hacking's just another word for nothing left to kludge. ---