tech-kern archive
Re: Raidframe and disk strategy
Hello. Having just spent some time digging around in all this stuff,
and running some benchmarks with real workloads, here's what I observed
and found to be the case.
Raidframe does use the built-in queueing infrastructure to receive its
requests from the upper layers. By default, it sets itself up to use the
fcfs queueing strategy, and has done so since at least as far back as I
checked the source (I checked NetBSD-2.0, but didn't go any earlier).
Raidframe does not modify the queueing strategy of the component disks
below itself. For all the disk types I checked (wd, sd and ld), the
drivers use the system default queueing strategy. For NetBSD-4 and
earlier, the default strategy seems to be disksort; for NetBSD-5 and
later, it's priocscan.
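To make the mismatch concrete, the queue setup in the two layers looks
roughly like this (paraphrased from memory, so don't hold me to the exact
variable names):

/* raidframe (rf_netbsdkintf.c): the request queue is asked for by name,
 * and the name is hard-coded to fcfs. */
bufq_alloc(&rs->buf_queue, "fcfs", BUFQ_SORT_RAWBLOCK);

/* a typical component driver (wd, sd, ld) instead takes whatever strategy
 * the kernel was configured with, which ends up being priocscan on
 * NetBSD-5 and later: */
bufq_alloc(&sc->buf_queue, BUFQ_DISK_DEFAULT_STRAT(), BUFQ_SORT_RAWBLOCK);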
What I noticed on one of my systems with a raid1 raid set running
NetBSD-5.1 was that while the number of disk requests per second was
constant, indicating to me that the subsystems were running flat out,
the actual amount of data transferred per second was wildly variable.
This made me think the disks were spending a lot of time seeking back
and forth -- not an unreasonable thing to be doing, given that the
workload consisted of a lot of processes reading unrelated files
throughout the filesystem. However, what made me begin looking into the
issue was that a single process running by itself could read pretty
fast, but it didn't take many processes to get the system thrashing all
over the place. That is when I discovered the discrepancy between the
disk strategy raidframe used and the underlying strategies for the
components of the raid set.

My reasoning was that if raidframe was taking requests in fcfs order and
the disks were handling them in priocscan order, seeking would be
maximized in normal operation, because raidframe doesn't send that many
requests to the underlying components before it stops to wait for them
to complete. Raidframe is taking requests in fcfs order, handing the
ones it knows about to the disks either in the order it got them or as a
slightly sorted list, and waiting for the requests to be completed. I
wondered what would happen if I set raidframe to use the priocscan
strategy for getting its requests, and whether it would increase the
actual data throughput for each read and write interrupt the disks took.
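To put a rough number on the seeking argument, here's a toy userland
calculation (nothing to do with the real kernel code, just block
arithmetic): a few requests to unrelated parts of the disk cost far more
head travel when served in arrival order than when served in sorted,
elevator-style order, which is approximately what disksort/priocscan
buy you.

#include <stdio.h>
#include <stdlib.h>

/* toy model: "seek cost" is just the distance between block numbers */
static long
travel(const long *blk, int n)
{
	long pos = 0, dist = 0;
	int i;

	for (i = 0; i < n; i++) {
		dist += labs(blk[i] - pos);
		pos = blk[i];
	}
	return dist;
}

static int
cmp(const void *a, const void *b)
{
	long x = *(const long *)a, y = *(const long *)b;

	return (x > y) - (x < y);
}

int
main(void)
{
	/* six requests from unrelated files, in arrival (fcfs) order */
	long fcfs[] = { 900000, 1000, 910000, 2000, 920000, 3000 };
	long sorted[6];
	int i, n = 6;

	for (i = 0; i < n; i++)
		sorted[i] = fcfs[i];
	qsort(sorted, n, sizeof(sorted[0]), cmp);

	printf("arrival order: %ld blocks of head travel\n", travel(fcfs, n));
	printf("sorted order:  %ld blocks of head travel\n", travel(sorted, n));
	return 0;
}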
Initial results are quite encouraging. With a raid5 set on top of a
stack of 8 ld disks, I'm seeing between 14% and 16% more throughput on
data from the network to the disk with the strategy set to priocscan at
both the raidframe and disk layers. I don't yet have data, but it looks to
be even better on raid1 sets.
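For anyone who wants to run the same comparison: the component disks can
be flipped with dkctl(8)'s "strategy" command, which underneath is just
the DIOCGSTRATEGY/DIOCSSTRATEGY ioctls, so a small program works too.
Something along these lines (an untested sketch; whether a raid device
accepts the same ioctls depends on the patch, so treat that part as an
assumption):

/*
 * Untested sketch: show and optionally change the buffer queue strategy
 * of a disk device via DIOCGSTRATEGY/DIOCSSTRATEGY.
 *
 * Hypothetical usage: ./setstrat /dev/rld0d priocscan
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/dkio.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	struct disk_strategy dks;
	int fd;

	if (argc < 2)
		errx(1, "usage: %s device [strategy]", argv[0]);

	if ((fd = open(argv[1], O_RDWR)) == -1)
		err(1, "open %s", argv[1]);

	/* report the current strategy */
	memset(&dks, 0, sizeof(dks));
	if (ioctl(fd, DIOCGSTRATEGY, &dks) == -1)
		err(1, "DIOCGSTRATEGY");
	printf("%s: current strategy is %s\n", argv[1], dks.dks_name);

	/* optionally switch to a new one */
	if (argc > 2) {
		memset(&dks, 0, sizeof(dks));
		strlcpy(dks.dks_name, argv[2], sizeof(dks.dks_name));
		if (ioctl(fd, DIOCSSTRATEGY, &dks) == -1)
			err(1, "DIOCSSTRATEGY");
		printf("%s: strategy set to %s\n", argv[1], argv[2]);
	}

	close(fd);
	return 0;
}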
The patch I provided last night gives the user the ability to
experiment with different queueing strategies on the fly on their raid
sets. I haven't run the numbers, and I'm not sure I'll have the time,
but I'm guessing that the optimal solution will be for the underlying
disk components to use the same queueing strategy as the raidframe
disk. An initial improvement on my patch from last night might be to
have raidframe
initialize itself to use the system default queueing strategy rather than
bull-headedly picking fcfs in all cases. This would cause default
installations to get the optimal setting without having to do any work at
all.
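In code terms that improvement is basically a one-liner in the raid
device setup (untested, names from memory): ask for the configured
default the same way wd/sd/ld do, rather than naming fcfs explicitly.

-	bufq_alloc(&rs->buf_queue, "fcfs", BUFQ_SORT_RAWBLOCK);
+	bufq_alloc(&rs->buf_queue, BUFQ_DISK_DEFAULT_STRAT(),
+	    BUFQ_SORT_RAWBLOCK);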
More thoughts? Has anyone tried the patch?
-Brian