Subject: Re: buffer priority [Re: unified buffers and responsibility]
To: Manuel Bouyer <bouyer@antioche.lip6.fr>
From: Milos Urbanek <urbanek@openbsd.cz>
List: tech-kern
Date: 06/13/2002 17:22:59
On Thu, Jun 13, 2002 at 04:52:21PM +0200, Manuel Bouyer wrote:
> On Thu, Jun 13, 2002 at 05:38:24AM +0000, Wojciech Puchar wrote:
> > >
> > > Now I don't have much idea on what algorithm to use, neither
> > > how to implement it. Probably something like the process scheduler, but
> > > for I/O, processes doing a lot of I/O having their I/O priority lowered.
> > 
> > that would be nice but another idea:
> > 
> > 1)add something like page priority (if it doesn't already exist). lower
> > priority are freed first.
> 
> The pagedaemon probably already does this

I have looked through the sources in /sys/uvm and I think the page daemon
only handles priorities in terms of free/inactive/active pages.
There is no special handling for whether a page belongs to a uobj of a
running program (e.g. its text or data section) or just to a regular
file, like one copied by 'cp'. There are only simple limits (set by
sysctl) that tell the page daemon whether it should reactivate
anon/file/exec pages.

Should those not be prioritized? E.g. prefer to swap out inactive pages
that belong to a regular file's buffers rather than pages that belong to
executables or to the anon objects of running programs?

I think that is the part of the code that needs tuning.

Milos

> 
> > 
> > 2) keep I/O priority for each FILE HANDLE, set it to maximum at file
> > open/create
> > 
> > 3) lseek should reset it to maximum
> > 
> > 4) every read/write should lower it proportionally to request size, unless
> > it's already lowest possible.
> > 
> > 
> > 
> > this way we would get random accessed large files (like databases) and
> > lots of small files well cached, while huge linearly accessed files
> > uncached.
> 
> A tar xvf can actually cause the same problem as a single write of a large file.
> 
> > 
> > 
> > priority should be counted by file handle, not process ID, as the same process
> > could use both random access for one thing and linear for another (like
> > sendmail with its qf and df files).
> 
> The problem is not linear vs random, it's the I/O load, and the size of the
> request queue generated.
> 
> --
> Manuel Bouyer, LIP6, Universite Paris VI.           Manuel.Bouyer@lip6.fr
> --
> 

--