Subject: Re: IO Congestion Control
To: Steven M. Bellovin <smb@cs.columbia.edu>
From: Thor Lancelot Simon <tls@rek.tjls.com>
List: tech-kern
Date: 09/12/2006 10:51:02
On Tue, Sep 12, 2006 at 09:46:24AM -0400, Steven M. Bellovin wrote:
> 
> It's not clear to me that taking the average is worthwhile -- I suspect
> that at times that it matters, the statistical variant I suggested will
> work.  But basically, we all agree -- don't worry about the underlying
> device or the file system layout properties; just look for writes that are
> starting to take longer than they "should" when things are busy.

Yes, but this can tend to penalize processes whose writes take longer
than they "should" because of some _other_ process's pathological behavior.

This is why I suggested penalizing those processes for which writes that
"take longer than they should" make up the preponderance of their writes --
or, considered differently, penalizing processes in proportion to the
probability that their writes occur while the average latency across all
writers (which is an easier statistic to measure anyway) is increasing.
This nails the pathological "scatter write" patterns with a penalty while
minimizing the penalty to the writers they disrupt, I think.
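For concreteness, here's a minimal userland sketch of that heuristic in C.
Everything in it is illustrative -- the names (wr_stats, proc_stats,
record_write, penalty), the EWMA smoothing factor, the 5ms baseline --
none of it reflects actual kernel structures.  The idea is just: smooth
the write latency across all writers, and charge each process for the
fraction of its writes that complete while that average is rising.

#include <stdio.h>

struct wr_stats {
	double avg_latency;	/* EWMA of latency across all writers */
	double prev_avg;	/* previous EWMA, to detect an increase */
};

struct proc_stats {
	unsigned long writes;		/* total writes completed */
	unsigned long writes_rising;	/* writes seen while avg was rising */
};

#define ALPHA 0.125	/* EWMA smoothing factor, as in TCP's SRTT */

/* Record one completed write of the given latency (seconds). */
static void
record_write(struct wr_stats *g, struct proc_stats *p, double latency)
{
	g->prev_avg = g->avg_latency;
	g->avg_latency = (1.0 - ALPHA) * g->avg_latency + ALPHA * latency;

	p->writes++;
	if (g->avg_latency > g->prev_avg)
		p->writes_rising++;
}

/*
 * Penalty weight in [0,1]: the fraction of this process's writes
 * that coincided with rising global latency.  A scatter writer
 * drives the average up itself and so accumulates a weight near 1;
 * a well-behaved writer it disrupts stays near 0.
 */
static double
penalty(const struct proc_stats *p)
{
	if (p->writes == 0)
		return 0.0;
	return (double)p->writes_rising / (double)p->writes;
}

int
main(void)
{
	struct wr_stats g = { .avg_latency = 0.005, .prev_avg = 0.005 };
	struct proc_stats seq = { 0 }, scatter = { 0 };

	/* Interleave a steady 5ms writer with a scatter writer
	   whose latencies climb as it fragments the queue. */
	for (int i = 0; i < 100; i++) {
		record_write(&g, &seq, 0.005);
		record_write(&g, &scatter, 0.005 + i * 0.001);
	}

	printf("sequential penalty: %.2f\n", penalty(&seq));
	printf("scatter penalty:    %.2f\n", penalty(&scatter));
	return 0;
}

Run as written, the scatter writer's penalty comes out near 1 and the
steady writer's near 0, even though both see the same degraded device --
which is the asymmetry the per-process "slow write" test alone misses.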

Thor