Subject: Re: UBC status
To: None <email@example.com>
From: Chuck Silvers <firstname.lastname@example.org>
Date: 10/02/1999 10:31:52
On Fri, Oct 01, 1999 at 09:22:47AM -0700, Eduardo E. Horvath wrote:
> On Thu, 30 Sep 1999, Chuck Silvers wrote:
> > a page being on the inactive list implies that it has no pmap mappings,
> > which isn't really what you want for cached regular file data. or by
> > "buffer-cache pages" do you mean just metadata, which is all that will
> > be in the buffer cache in the post-UBC world?
> I was mostly thinking about the data pages. What would those pages be
> mapped into? The way file I/Os currently work is that a page is read into
> the buffer cache and then the requested section of data (probably not page
> aligned) is copied in/out of the requesting process' data space. Has that
> changed with UBC?
yes. with UBC, regular file data is not stored in the buffer cache at all.
rather it is stored in the "page cache", which refers to memory managed
by the vm system rather than by the vfs_bio routines. some important points
to note are:
1. buffer cache pages are always mapped and always wired, whereas page cache
pages are usually not wired and do not have to be mapped to retain their
contents.
2. buffer cache pages are grouped into "struct buf"s and managed together,
whereas page cache pages are managed individually as "struct vm_page"s.
3. the buffer cache and the page cache do not compete for memory at all,
ie. a page belonging to the buffer cache will never be stolen for use
in the page cache, and vice versa.
there are no doubt more important bits but that's all I can think of right now.
note that mmap() file mappings (of which executable images are an especially
interesting kind) need for their pages to remain mapped long enough for
the application to access the data, and preferably until the application
is entirely done accessing that page, which may be a long long time in the
future.
> I would expect you still want to do the copy and map both the destination
> and target into the kernel for simplicity. But once the copy is complete
> there is no reason to leave the page mapped in. In fact, it would be best
> to unmap it immediately and put it on the inactive list at that point.
> Otherwise you rely on the page scanner, which will not distinguish between
> buffer cache pages and process pages.
why would we want to reuse a vnode page that was just accessed before an
anonymous page which happens to still be on the active list but hasn't
been accessed for hours? that's an implication of deactivating vnode
pages as soon as the copy completes.
> > furthermore, the inactive list wants to contain at most 1/3 of RAM,
> > but we'd like to allow using nearly all of RAM for cached file data
> > in the case where the demand for other types of pages is low.
> You want "to allow using nearly all of RAM for cached file data" only in
> the case where there is no other good use for that RAM.
right, but how do we determine whether or not there is a better use
for the memory?
> The problem seen in the past with unified buffer caches is that heavy
> use of the buffer
> cache will cause sleeping processes to be paged or swapped out. Then
> these processes take a long time to fault their working sets back in.
> This causes significant performance problems, especially on interactive
> systems. Consider what happens if you run a `find' command to hunt for
> some file on the machine which causes your browser and your X server and
> your window manager to get paged out and replaced with filesystem data. You
> really don't want this to happen.
yes, I understand this problem. consider the flip side of this:
you want to repeatedly access the same file data in an application that doesn't
use much anonymous memory, such as grep'ing for a string in a large file.
if possible you'd like the entire file to be cached in memory, otherwise
each time you read thru the file it will have to be read from disk.
the goal is to find a way to accommodate both of these behaviours.