Subject: Re: buffer priority [Re: unified buffers and responsibility]
To: Wojciech Puchar <wojtek@chylonia.3miasto.net>
From: Manuel Bouyer <bouyer@antioche.lip6.fr>
List: tech-kern
Date: 06/13/2002 16:52:21
On Thu, Jun 13, 2002 at 05:38:24AM +0000, Wojciech Puchar wrote:
> >
> > Now I don't have much idea on what algorithm to use, neither
> > how to implement it. Probably something like the process scheduler, but
> > for I/O, processes doing a lot of I/O having their I/O priority lowered.
> 
> that would be nice but another idea:
> 
> 1) add something like page priority (if it doesn't already exist). Lower
> priority pages are freed first.

The pagedaemon probably already does this.

> 
> 2) keep I/O priority for each FILE HANDLE, set it to maximum at file
> open/create
> 
> 3) lseek should reset it to maximum
> 
> 4) every read/write should lower it proportionally to the request size,
> unless it's already at the lowest possible value.
> 
> 
> 
> this way we would get randomly accessed large files (like databases) and
> lots of small files well cached, while huge linearly accessed files stay
> uncached.
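
Just to make the discussion concrete, here is a minimal user-level sketch of
rules (2)-(4) above; struct iofile, IOPRIO_MAX and IOPRIO_DIVISOR are invented
names for illustration only, not anything that exists in the tree:

#include <stdio.h>

#define IOPRIO_MAX     100
#define IOPRIO_MIN       0
#define IOPRIO_DIVISOR 4096	/* bytes of I/O per priority point lost */

struct iofile {
	int  prio;	/* current I/O priority for this handle */
	long offset;	/* current file offset */
};

/* (2) open/create: start at maximum priority */
static void
iofile_open(struct iofile *f)
{
	f->prio = IOPRIO_MAX;
	f->offset = 0;
}

/* (3) lseek: a seek suggests random access, reset to maximum */
static void
iofile_seek(struct iofile *f, long newoff)
{
	f->offset = newoff;
	f->prio = IOPRIO_MAX;
}

/* (4) read/write: lower priority proportionally to the request size */
static void
iofile_io(struct iofile *f, long nbytes)
{
	int drop = (int)(nbytes / IOPRIO_DIVISOR);

	f->prio = (f->prio - drop < IOPRIO_MIN) ? IOPRIO_MIN : f->prio - drop;
	f->offset += nbytes;
}

int
main(void)
{
	struct iofile f;
	int i;

	iofile_open(&f);
	for (i = 0; i < 64; i++)	/* big sequential write... */
		iofile_io(&f, 64 * 1024);
	printf("sequential writer: prio %d\n", f.prio);	/* ...decays to 0 */

	iofile_open(&f);
	iofile_seek(&f, 12345);		/* seeking keeps priority high */
	iofile_io(&f, 4096);
	printf("random reader:     prio %d\n", f.prio);
	return 0;
}

With this, a streaming writer quickly decays to the lowest priority while a
seek-heavy opener stays near the top, which is the behaviour the proposal is
after.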

A tar xvf can actually cause the same problem as a single write of a large
file: every extracted file is a fresh handle at maximum priority, yet the
aggregate I/O load is the same.

> 
> 
> priority should be counted by file handle, not process ID, as the same
> process could use random access for one thing and linear access for another
> (like sendmail with its qf and df files).

The problem is not linear vs. random access, it's the I/O load and the size
of the request queue it generates.

--
Manuel Bouyer, LIP6, Universite Paris VI.           Manuel.Bouyer@lip6.fr