Subject: Re: I/O priorities
To: None <tech-kern@netbsd.org>
From: David Laight <david@l8s.co.uk>
List: tech-kern
Date: 06/21/2002 21:43:51
On Fri, Jun 21, 2002 at 12:56:18PM -0700, Jason R Thorpe wrote:
> On Fri, Jun 21, 2002 at 03:44:37PM -0400, Gary Thorpe wrote:
> 
>  > Then why doesn't 1.5.2 exhibit the same problem???
> 
> Because pre-UBC systems won't ever have as much outstanding I/O
> because the buffer cache is a fixed size (and also much smaller).

and probably because programs reading/writing large files will
only displace file data from memory - not program code and data.

One thing that might be happening is that pages are (probably)
invalidated on a 'least recently used' basis.  Now if a single
page of your X server (say) gets paged out or discarded, then when
it is wanted again it is quite likely that a different page of the
X server will be discarded to make room for it.
It is thus possible that, instead of keeping most of the (oversized)
working set in memory and letting a few infrequently used pages
'swap' with each other, all the pages get invalidated in turn.

After all, if there is a long queue for disk I/O then the working
set of a process waiting for a page-in will quickly become the
'least recently used' pages.
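
To make the failure mode concrete, here is a small user-space sketch
(not the actual UVM code - the frame and page counts are invented for
illustration) of strict LRU replacement with a working set one page
larger than the frames available: every access misses, because each
fault evicts exactly the page that will be wanted next.

/*
 * Toy illustration of LRU thrash: WORKSET pages cycled round-robin
 * through NFRAMES frames, with NFRAMES < WORKSET.  Each fault evicts
 * the least recently used page, which is the very next one needed,
 * so the pages displace each other in turn on every access.
 */
#include <stdio.h>

#define NFRAMES   4         /* resident page frames available      */
#define WORKSET   5         /* working set: one page too many      */
#define NACCESSES 20

int main(void)
{
	int frame[NFRAMES];     /* which page each frame holds         */
	int age[NFRAMES];       /* last access time, for the LRU pick  */
	int i, t, misses = 0;

	for (i = 0; i < NFRAMES; i++) {
		frame[i] = -1;
		age[i] = -1;
	}

	for (t = 0; t < NACCESSES; t++) {
		int page = t % WORKSET;   /* round-robin through working set */
		int hit = -1, victim = 0;

		for (i = 0; i < NFRAMES; i++) {
			if (frame[i] == page)
				hit = i;
			if (age[i] < age[victim])
				victim = i;       /* least recently used frame */
		}

		if (hit >= 0) {
			age[hit] = t;
		} else {
			misses++;
			frame[victim] = page; /* evict LRU page, fault this one in */
			age[victim] = t;
		}
	}

	printf("%d misses in %d accesses\n", misses, NACCESSES);
	return 0;
}

With the numbers above it prints "20 misses in 20 accesses"; a
replacement policy that kept most of the working set resident would
miss only on the odd cold page.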

Unfortunately I don't have a system on which I'm willing to fiddle
with the VM (or filesystem) code :-(

	David

-- 
David Laight: david@l8s.co.uk