Subject: Disk scheduling policy (Re: NEW_BUFQ_STRATEGY)
To: None <email@example.com>
From: Thor Lancelot Simon <firstname.lastname@example.org>
Date: 12/01/2003 15:07:32
On Mon, Dec 01, 2003 at 01:35:23PM -0500, Thor Lancelot Simon wrote:
> SGI has an interesting general-purpose policy with two queues, pulling a
> configurable number of requests from each queue in turn, that is described
> in the release notes for a recent version of Irix. It appears to perform
> reasonably well for both interactive and fileserver workloads and might be
> a better default than either our current policy or NEW_BUFQ_STRATEGY. I've
> posted a pointer to it here before -- if anyone wants to implement this but
> can't find the details I'll dig them up again.
From the now-dead URL http://www.sgi.com/developers/feature/2001/roadmap.html:
| Disk sorting in 6.5.8
| Previously, all disk requests were sorted by block
| number. Unfortunately, if the filesystem write activity was more than
| the disk could satisfy, the disk could get swamped with delayed write
| requests. This would result in reads and synchronous writes being delayed
| for extensive periods. In extreme cases, the system would appear to stall
| or would experience NFS timeouts.
| In 6.5.8, the queues are split. Doing this permits queuing delayed writes
| into one queue, while synchronous writes and reads are entered into another
| queue. In 6.5.8 the disk driver will alternate between queues. This ensures
| that a large queue of delayed write requests will not adversely impact
| interactive response. If both delayed writes and other requests are pending,
| the driver will alternate between them, issuing several delayed writes,
| then several of the other requests. Selecting several from each queue each
| time, rather than just one from each queue each time, makes sequential
| I/O faster and disk performance is maximized.
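The alternating two-queue policy those release notes describe could be sketched roughly as below. This is only an illustration of the idea, not SGI's or NetBSD's actual code; all names (twoq, BURST, and so on) are made up, requests are appended rather than elevator-sorted within each queue to keep the sketch short, and the burst size would of course be the tunable the notes mention.

```c
/* Sketch of a two-queue disk scheduler: delayed writes go to one queue,
 * reads and synchronous writes to the other, and the driver alternates,
 * issuing a burst of requests from each queue in turn.  Illustrative
 * only -- names and structure are invented for this example. */

#include <stddef.h>

#define BURST 4   /* requests taken from a queue before switching (tunable) */

struct req {
	struct req *next;
	int blkno;		/* block number (a real driver sorts on this) */
	int delayed;		/* nonzero for delayed (async) writes */
};

struct queue {
	struct req *head, **tailp;
};

struct twoq {
	struct queue q[2];	/* q[0]: reads + sync writes, q[1]: delayed writes */
	int active;		/* queue currently being drained */
	int issued;		/* requests issued from active queue this burst */
};

static void
queue_init(struct queue *q)
{
	q->head = NULL;
	q->tailp = &q->head;
}

static void
twoq_init(struct twoq *tq)
{
	queue_init(&tq->q[0]);
	queue_init(&tq->q[1]);
	tq->active = 0;
	tq->issued = 0;
}

/* Enqueue: pick a queue by request type.  (A real implementation would
 * insert in block-number order within each queue.) */
static void
twoq_put(struct twoq *tq, struct req *r)
{
	struct queue *q = &tq->q[r->delayed ? 1 : 0];

	r->next = NULL;
	*q->tailp = r;
	q->tailp = &r->next;
}

/* Dequeue: take up to BURST requests from the active queue, then switch
 * to the other queue if it has anything pending.  This is what bounds
 * how long a backlog of delayed writes can starve reads. */
static struct req *
twoq_get(struct twoq *tq)
{
	int other = tq->active ^ 1;
	struct req *r;

	if ((tq->issued >= BURST || tq->q[tq->active].head == NULL) &&
	    tq->q[other].head != NULL) {
		tq->active = other;
		tq->issued = 0;
	}

	r = tq->q[tq->active].head;
	if (r != NULL) {
		tq->q[tq->active].head = r->next;
		if (tq->q[tq->active].head == NULL)
			tq->q[tq->active].tailp = &tq->q[tq->active].head;
		tq->issued++;
	}
	return r;
}
```

With this shape, even if thousands of delayed writes are queued, a newly arrived read waits behind at most BURST of them before the driver switches queues -- which is exactly the interactive-response property the SGI notes claim.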
A really interesting examination of a certain type of pathological load
for the traditional BSD elevator sort scheduler can be found at:
It seems to me that softdep, if flushing dependencies, may cause exactly
the kind of "deceptive idleness" the paper discusses.