Subject: Re: Bad response...
To: Michael van Elst <firstname.lastname@example.org>
From: Eric Haszlakiewicz <email@example.com>
Date: 08/30/2004 18:23:32
On Mon, Aug 30, 2004 at 10:49:53PM +0000, Michael van Elst wrote:
> firstname.lastname@example.org ("Steven M. Bellovin") writes:
> >follow a strategy similar to TCP's congestion control -- when there are
> >competing requests, it needs to *seriously* cut back the percentage
> >allocatable to such needs -- say, an additive increase/multiplicative
> >decrease scheme, just like TCP uses.
> Wouldn't a simple constant limit to the number of dirty pages
> in the filecache be sufficient ? This will put a limit on the
> time needed to reuse those pages.
> Currently a simple untar of the source tree will fill the filecache
> (default ~50% of main memory) and the system is kept busy for a
> few ten seconds to flush the data to disk.
It _isn't_ a problem for the system to be writing stuff out to disk
while your interactive program is running. Interactive response of an
_in-memory_ program depends mostly on CPU availability, and those I/O
operations shouldn't take a whole lot of CPU.
The problem is that the system had previously written out pages of
that interactive program because the LRU algorithm thought they were
unused and that it had a whole bunch of higher priority file data to
keep in memory instead.
A solution would be to give those interactive program memory pages
a higher priority than the file pages, so the file pages get considered
flushable first. (how to do this? I have no idea. :) )
The existing solution, as mentioned earlier in this thread, is to
set execmin (the vm.execmin sysctl) to a higher value. That makes sense
if you think of the problem in terms of keeping around what you want,
instead of trying to keep out what you don't.
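For reference, something like the following would bump it from a program
(a minimal sketch: it assumes sysctlbyname(3) is available, the value 40
percent is only an example and not a recommendation from this thread,
and the shell equivalent is just `sysctl -w vm.execmin=40` as root):

/* Minimal sketch of reading and raising vm.execmin on NetBSD.
 * Writing the new value needs root. */
#include <sys/param.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        int cur, want = 40;     /* illustrative target, in percent */
        size_t len = sizeof(cur);

        if (sysctlbyname("vm.execmin", &cur, &len, NULL, 0) == -1) {
                perror("read vm.execmin");
                return EXIT_FAILURE;
        }
        printf("vm.execmin is currently %d%%\n", cur);

        if (cur < want &&
            sysctlbyname("vm.execmin", NULL, NULL, &want, sizeof(want)) == -1) {
                perror("raise vm.execmin");
                return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
}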