Re: Beating a dead horse
Date: Wed, 25 Nov 2015 19:08:59 -0553.75
From: "William A. Mahaffey III" <wam%hiwaay.net@localhost>
| The other command is still running, will write out 320 GB by my count,
| is that as intended, or a typo :-) ? If as wanted, I will leave it going
| & report back when it is done.
Kill it, those tests are testing precisely nothing.
If you want to try a slightly better test, try with bs=32k, so you are at
least having dd write filesystem-sized blocks. It won't help with the raidframe
overheads, but at least you'll optimise the filesystem overheads as much
as possible.
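As a sketch of that kind of test (the output path and the 1 GB total size are placeholders, not anything from this thread):

```shell
# Hypothetical test: write a ~1 GB file in 32 KiB chunks, so each dd
# write matches the filesystem block size (32768 * 32 KiB = 1 GiB).
# /mnt/test is a placeholder path on the raid5 filesystem.
dd if=/dev/zero of=/mnt/test/ddfile bs=32k count=32768
```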
But I think it is now clear that you could improve performance if you
rebuild the filesystem with -b 65536 (and -f set to one of 8192, 16384, or
32768, take your pick... 8192 would save the most space).
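For illustration only: rebuilding means recreating the filesystem, which destroys its contents, so back up first. The raw raid device name below is an assumption, not taken from this thread.

```shell
# Hypothetical sketch: recreate the filesystem with 64 KiB blocks and
# 8 KiB fragments. /dev/rraid0e is an assumed device name; substitute
# the actual raidframe partition. THIS ERASES THE FILESYSTEM.
newfs -b 65536 -f 8192 /dev/rraid0e
```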
The question is whether the improvement is really needed for real work,
as opposed to meaningless benchmarks (which dd really isn't anyway; if
you want a real benchmark, pick something, perhaps bonnie, from the
benchmarks category in pkgsrc), and whether it will be enough to be
worth the pain. Unfortunately, there's no real way to know how much
improvement you'd get without doing the work first.
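If pkgsrc is set up in the usual place, building bonnie from source looks like this (a sketch; it assumes a pkgsrc tree under /usr/pkgsrc):

```shell
# Build and install bonnie from the pkgsrc benchmarks category,
# then clean the work directory (assumes /usr/pkgsrc exists).
cd /usr/pkgsrc/benchmarks/bonnie
make install clean
```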
Only you can answer those questions - you know what the work is, and
how effectively your system is coping as it is now configured.
As I said before, when it is me, I just ignore all of this, I don't care
what the i/o throughput is (or could be) because in practice there just
isn't enough i/o (on raid5 based filesystems) in my system to matter.
So I optimise for other things - of which the most valuable (to me) is