Subject: Re: IO Congestion Control
To: None <tech-kern@NetBSD.org>
From: Alan Barrett <firstname.lastname@example.org>
Date: 09/11/2006 21:06:20
On Mon, 11 Sep 2006, Sumantra Kundu wrote:
> Taking cue from the above observation, we now intend to implement a
> congestion control algorithm (uvm_cca) inside the uvm. However,
> instead of observing process behaviour, we now intend to "infer
> congestion" by observing the dynamics of dirty pages w.r.t. a
> specific IO device.
> Since no two IO devices are the same, this implies we need to have a
> mechanism that is able to capture and understand the "capabilities",
> "limitations", and "performance" of such a device at run time and make
> such performance figures available to the UVM, before any sort of
> device directed IO throttling could be initiated. On top of that,
> writes need not all have the same cost; the cost can generally be
> thought of as a function of the disk seek time.
This sounds awfully complicated. I think Thor is right: measure
things like the amount of data in flight and the time to service each
request; feed those into an algorithm a lot like TCP to get a limit
(per process/device pair) on the rate of new requests. If this works, you
don't need to model the device's seek time or data transfer rate, you
just need to measure the number of outstanding requests and the time to
service the requests.
Your ideas about tradeoffs between seek time and raw throughput seem
useful, but I suspect that they belong in a per-device queueing layer
(deciding which of many in-flight requests to service first) rather than
in the realm of deciding which processes to throttle.
--apb (Alan Barrett)