Subject: Re: Bad response...
To: None <current-users@netbsd.org>
From: Michael van Elst <mlelstv@serpens.de>
List: current-users
Date: 08/30/2004 22:49:53
smb@research.att.com ("Steven M. Bellovin") writes:

>follow a strategy similar to TCP's congestion control -- when there are 
>competing requests, it needs to *seriously* cut back the percentage 
>allocatable to such needs -- say, an additive increase/multiplicative 
>decrease scheme, just like TCP uses.
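
(Purely for illustration, and not NetBSD code: a toy userland sketch of
what such an additive-increase/multiplicative-decrease controller for a
"percent of RAM allocatable to the filecache" knob could look like. All
names and constants below are invented.)

/* aimd.c - toy AIMD controller for a filecache share limit.
 * Illustrative only; not kernel code. */
#include <stdio.h>

#define PCT_MIN   5     /* never shrink the filecache below 5% of RAM */
#define PCT_MAX  50     /* current default ceiling */
#define PCT_STEP  1     /* additive increase per quiet interval */

static int filecache_pct = PCT_MAX;

/* Called once per sampling interval; "contended" means other
 * memory users were recently refused pages. */
static void
aimd_update(int contended)
{
        if (contended) {
                filecache_pct /= 2;             /* multiplicative decrease */
                if (filecache_pct < PCT_MIN)
                        filecache_pct = PCT_MIN;
        } else {
                filecache_pct += PCT_STEP;      /* additive increase */
                if (filecache_pct > PCT_MAX)
                        filecache_pct = PCT_MAX;
        }
}

int
main(void)
{
        int i;

        for (i = 0; i < 10; i++) {
                aimd_update(i == 4);            /* pretend contention at step 4 */
                printf("interval %d: filecache limit %d%%\n", i, filecache_pct);
        }
        return 0;
}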

Wouldn't a simple constant limit on the number of dirty pages
in the filecache be sufficient? That would bound the time needed
to clean and reuse those pages.
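
(Again only a sketch of the idea, not real UVM code: a writer that
would push the dirty-page count over a fixed cap waits until the
flusher has caught up. Everything here is made up for illustration.)

/* dirty_limit.c - toy model of a constant cap on dirty filecache pages. */
#include <stdio.h>

#define DIRTY_LIMIT_PAGES 2560          /* ~10 MB with 4 KB pages */

static unsigned ndirty;                  /* dirty pages currently cached */

/* Stand-in for the syncer/pagedaemon: write out up to 'n' pages. */
static void
flush_pages(unsigned n)
{
        if (n > ndirty)
                n = ndirty;
        ndirty -= n;
}

/* A write dirties 'n' pages (n <= cap); throttle if the cap would be hit. */
static void
dirty_pages(unsigned n)
{
        while (ndirty + n > DIRTY_LIMIT_PAGES)
                flush_pages(256);        /* writer waits on the flusher */
        ndirty += n;
}

int
main(void)
{
        unsigned i;

        for (i = 0; i < 1000; i++)
                dirty_pages(16);         /* e.g. an untar writing 64 KB chunks */
        printf("dirty pages at end: %u (cap %u)\n", ndirty, DIRTY_LIMIT_PAGES);
        return 0;
}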

Currently a simple untar of the source tree fills the filecache
(by default ~50% of main memory), and the system is then kept busy
for a few tens of seconds flushing that data to disk.

Of course this would slow down the tar process, but you cannot do
much else anyway while you are waiting for free memory.

A limit corresponding to about 1 second of writeback (~10 MByte on a
modern disk) should be sufficient to get good interactive behaviour.
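
(Back-of-the-envelope, assuming ~10 MB/s sequential writes and 4 KB
pages; the numbers are assumptions, not measurements:)

/* budget.c - turn a flush-latency target into a dirty-page budget. */
#include <stdio.h>

int
main(void)
{
        const unsigned long write_bps   = 10UL * 1024 * 1024;  /* ~10 MB/s */
        const unsigned long page_size   = 4096;                /* 4 KB pages */
        const unsigned long target_secs = 1;                   /* flush budget */

        unsigned long limit_bytes = write_bps * target_secs;
        unsigned long limit_pages = limit_bytes / page_size;

        printf("dirty limit: %lu bytes (%lu pages)\n",
            limit_bytes, limit_pages);
        return 0;
}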

-- 
                                Michael van Elst
Internet: mlelstv@serpens.de
                                "A potential Snark may lurk in every tree."