Subject: Re: CCD performance tuning
To: Uwe Lienig <Uwe.Lienig@fif.mw.htw-dresden.de>
From: Aaron J. Grier <agrier@poofygoof.com>
List: netbsd-users
Date: 01/22/2002 11:41:43
On Tue, Jan 22, 2002 at 06:02:21PM +0100, Uwe Lienig wrote:
> I wanted to know how the interleave would affect real, sys, and user
> time. OTOH this was a simple burn-in for the disks without the
> necessity of running bonnie. Maybe it was not a very clever approach,
> but it runs without any additional tools.
It is a valid (albeit limited) test, I believe. I don't think there's
anything wrong with your methodology.
> Reading ccd(4), one would conclude that the interleave should be the
> size of a track. That is somewhat funny, since SCSI disks (and their
> (E)IDE counterparts) don't have a fixed track size. And even the RZ58
> involved here has 85 sectors per track, but setting the interleave to
> that value does not yield the lowest real execution time.
I have heard that fitting the interleave to the disk cache size is more
effective, and anecdotal evidence with RAIDframe seems to support this
hypothesis. My disks were two ancient 2G Micropolis 1924s, mirrored
with RAIDframe on a 5000/240. I also tested this with five Conner 1G
drives in a RAID5 configuration. As soon as the interleave size
exceeded the disk cache, performance decreased. I assume ccd would
behave similarly.
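
For illustration (component names here are hypothetical, not from your
setup): since the interleave is expressed in sectors, an interleave of
128 means 64K per stripe with 512-byte sectors, and the idea would be
to keep that at or below however much cache your drives have.

    # hypothetical example: build ccd1 with a 128-sector (64K) interleave.
    # /dev/sd0e and /dev/sd1e stand in for your actual components.
    ccdconfig -u ccd1                          # unconfigure if already set up
    ccdconfig ccd1 128 0 /dev/sd0e /dev/sd1e   # ileave=128, numeric flags=0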
Other variables that can significantly affect performance are the
disklabel and newfs parameters. At least under RAIDframe, the default
disklabel parameters are reportedly "sucky." :)
http://www.netbsd.org/cgi-bin/query-pr-single.pl?number=11989
It's quite possible that ccd suffers from the same problems.
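
To make that concrete, the knobs I mean are things like the block,
fragment, and cylinders-per-group parameters to newfs; the values below
are only illustrative placeholders, not tuned recommendations.

    # illustrative only: filesystem parameters worth varying when you
    # benchmark through a filesystem instead of the raw device.
    newfs -b 8192 -f 1024 -c 16 /dev/rccd1c
    # disklabel -e ccd1 lets you edit the label's geometry fields by hand.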
I was never able to get the throughput of my RAID arrays to exceed that
of a single component device. (RAID did increase seeks per second
slightly.)
> Another question arises when the user and sys times are compared.
> Sometimes the real time is low, yet the sys time is much higher than
> in other cases. Changing the interleave from 448 to 456 increases sys
> time by roughly 30%.
I am not sure how to interpret this. Perhaps someone here who is more
familiar with NetBSD I/O internals can explain it better?
> Maybe overall performance is better measured by bonnie and the like.
> But will this nifty shell script give some reasonable advice as well?
It is only a limited test, but far better than none at all! :)
> But I see that I should use those tools as well. This would make for a
> long test, though. It seems to me that dropping some interleave values
> would be a good way to shorten further testing.
Perhaps check every n-th interleave value? I would also perform dd
against the raw disk device (/dev/ccd1c) to eliminate newfs parameter
interactions; something like the sketch below is what I have in mind.
Once you find "hot spots", I would then test disklabel parameters while
keeping the interleave constant, again against the raw device.
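
Untested, and with placeholder component names, but along these lines:

    #!/bin/sh
    # sketch: time a raw sequential write for every n-th interleave value.
    # WARNING: this overwrites the ccd, so only run it on scratch disks.
    # /dev/sd0e and /dev/sd1e are placeholders for your components;
    # rccd1c is the character (raw) device for ccd1.
    for ileave in 64 128 192 256 320 384 448; do
        ccdconfig -u ccd1 2>/dev/null            # tear down previous config
        ccdconfig ccd1 $ileave 0 /dev/sd0e /dev/sd1e
        echo "ileave=$ileave"
        time dd if=/dev/zero of=/dev/rccd1c bs=64k count=4096
    done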
--
Aaron J. Grier | "Not your ordinary poofy goof." | agrier@poofygoof.com
"Making people dance so hard their pants almost fall
off is kind of fun." -- David Evans