Subject: Re: Throttling IO Requests in NetBSD via Congestion Control
To: Bill Studenmund <email@example.com>
From: Matt Thomas <firstname.lastname@example.org>
Date: 08/21/2006 16:21:27
Bill Studenmund wrote:
> On Mon, Aug 21, 2006 at 04:46:50PM -0400, Thor Lancelot Simon wrote:
>> On Mon, Aug 21, 2006 at 01:35:03PM -0700, Bill Studenmund wrote:
>>> The current scheme just stops a process, so we basically pump the brakes
>>> on a writer. If a process stops writing (say it goes back to processing
>>> data to write to another file), we stop hitting the brakes. With tweaking
>>> the scheduling, we would be applying more-gradual braking. I'm not 100%
>>> sure how to do this as I _think_ I want whatever we do to decay; if a
>>> program shifts away from writing, I'd like it to move back to being
>>> scheduled as if it had never written. I know the scheduler does this; I'm
>>> just not sure how to map the dynamics from disk usage to those for CPU.
>> Here is what bothers me -- and it's clear to me now that I did not adequately
>> understand one key design decision early in this process. I do not believe
>> that it is _ever_ appropriate to throttle a writer simply because the
>> number of writes it is issuing exceeds _X_, for any _X_, without some metric
>> of whether a congestion condition has actually occurred.
>> I cannot imagine how, in general, doing that could have anything other
>> than a negative performance impact.
> If our congestion prediction model is accurate, then we can predict
> congestion before we encounter it. Thus for a correct model, I believe
> that there is _a_ value for _X_ that will work well.
I disagree. X will vary with the "quality" of the writes. If you have lots
of sequential writes, the number you can issue will be higher than if you
have lots of random writes. So the threshold will vary.