NetBSD-Users archive
Re: RAID5 (RAIDFrame) performance seems low.
On Fri, 9 Apr 2010 18:18:24 +0100
Ian Clark <mrrooster%gmail.com@localhost> wrote:
> Hi All,
>
> I've had a 4-disc RAID5 (RAIDFrame) array for a couple of years now
> which has good read performance but not great write performance. I
> always put that down to not doing my reading beforehand and building
> an array that wasn't (n+1) drives, where n is a power of 2, which
> would have allowed the FS block size to match the stripe size. That
> array was 4x500G Hitachi 7K1000 consumer-level SATA drives.
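As a rough illustration of that block-size/stripe-size arithmetic (a sketch only; the SectPerSU value below is an assumption, since the actual raid0.conf is snipped further down): a RAID 5 set with N disks holds (N-1) * SectPerSU * 512 bytes of data per full stripe, so it can only line up with a power-of-two FS block size when N-1 is itself a power of two.

  #!/bin/sh
  # Sketch: data per full RAID 5 stripe for a few array sizes.
  # sectpersu=64 is an assumed value (64 sectors = 32KB per stripe unit).
  sectpersu=64
  for ndisks in 3 4 5; do
      data_kb=$(( (ndisks - 1) * sectpersu * 512 / 1024 ))
      echo "$ndisks disks: ${data_kb}KB of data per full stripe"
  done
  # 3 disks -> 64KB (matches a 64KB FS block), 4 disks -> 96KB (no
  # power-of-two block matches), 5 disks -> 128KB (two 64KB blocks).

If SectPerSU really is 64, the new 3-disk set falls in the "matching" case against the -b64k newfs used below, which is one reason the alignment question raised later in the thread matters more than the stripe size itself.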
>
> Anywhoo, time has passed and it's time to replace the array with
> something a little larger, so this time I've gone for 3x2T HGST
> 7K2000 drives. However, despite hoping for better write performance,
> so far it's looking to be much, much worse, and I was wondering if
> anyone can suggest something I may have missed, or whether it really
> should be that slow.
>
> The drives are connected to an AMD 780G motherboard using AHCI (same
> board/setup as the previous RAID); all drives have write and read
> caching enabled.
>
> The raid is set up like so:-
>
> (ian:~)$ cat raid0.conf
> # raidctl config file for /dev/rraid0d
[snip]
> The disks are labeled:-
[snip]
> And configured thusly:-
>
> (ian:~)$ cat doraid.sh
> #!/usr/local/bin/bash
>
> label=`date +%Y%m%d%H%M`
> echo Erasing raid
> raidctl -uv raid0
> echo Creating raid
> raidctl -C raid0.conf raid0
> echo Labeling raid $label
> raidctl -I $label raid0
> echo Parity rewrite
> raidctl -iv raid0
> echo Destroying GPT
> gpt destroy raid0
> echo Creating new GPT
> gpt create raid0
> echo Adding ufs partition
> sizes=`gpt add -tufs raid0 2>&1| grep dkctl | sed "s/.*addwedge.......\([^<]*\).*/\1/"`
> s1=`echo $sizes | cut -f1 -d' '`
> s2=`echo $sizes | cut -f2 -d' '`
> echo Creating wedges
> dkctl raid0d delwedge dk0
> echo dkctl raid0d addwedge dk0 $s1 $s2 ffs
> dkctl raid0d addwedge dk0 $s1 $s2 ffs
> echo Creating filesystem
> newfs -O2 -b64k -s -64m dk0
What will $s1 be in the above? Is it going to be stripe-aligned with
the underlying RAID set? If not, then every stripe-sized write will be
touching two stripes, and doing two "small writes", which would result
in abysmal performance.
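One quick way to sanity-check that is a little shell arithmetic on the wedge start (a sketch, assuming SectPerSU=64 and the 3-disk set, i.e. 128 data sectors per full stripe; $s1 is the start offset captured in doraid.sh above):

  # Sketch: check whether the wedge start ($s1 from doraid.sh) is a
  # multiple of the data portion of a full stripe.  128 is an assumed
  # value: SectPerSU (64) * number of data disks (2).
  stripe_sectors=$(( 64 * 2 ))
  if [ $(( s1 % stripe_sectors )) -eq 0 ]; then
      echo "wedge starts at sector $s1: stripe-aligned"
  else
      echo "wedge starts at sector $s1: NOT aligned (off by $(( s1 % stripe_sectors )) sectors)"
  fi

A GPT usually places its first partition just past the header (around sector 34), which is not a multiple of 128; if that's what $s1 comes out to, recreating the wedge at an aligned starting block (for example via gpt add -b, if your gpt(8) supports specifying the start block) should make a large difference.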
> Running iostat during a bonnie++ run shows about 3MB/sec of writes to
> each drive, for example:
>
> device  read KB/t    r/s   time     MB/s   write KB/t    w/s   time     MB/s
> wd2          17.00    104   0.99     1.73        22.66    174   0.99     3.86
> wd3          17.00    107   0.76     1.78        22.47    174   0.76     3.82
> wd4          17.00    102   1.00     1.69        22.68    171   1.00     3.79
> raid0         0.00      0   1.00     0.00        64.00    104   1.00     6.50
> dk0           0.00      0   1.00     0.00        64.00    104   1.00     6.50
That's way too slow for drives of this calibre....
> Running bonnie++ (bonnie++ -s16000M -n10:65536:0:100) I get the
> following output:-
[snip of more sub-par results]
> This is running the netbsd-5 branch from CVS, built on the 7th, on an
> AMD64 quad-core Phenom box with 8G of memory. CPU usage during the
> bonnie++ run is negligible (2% or so on one core.)
>
> One thing I didn't test was the raw write speed to the drives; I can
> check this, but if possible I'd like to avoid another 6-7 hour parity
> rebuild.
You can ignore the parity building when you're doing this sort of
testing... the parity building only really matters if you want to have
valid data in the event that a disk dies. If you don't care about
that, then you don't need to run the parity updates before doing the
benchmarking.
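Concretely, the shortened test cycle could look something like this (a sketch based on doraid.sh above; only the parity-rewrite step is dropped):

  # Sketch: configure and label the set, but skip the 6-7 hour parity
  # rewrite.  Parity stays marked dirty, which only matters if a disk
  # dies mid-test.
  raidctl -C raid0.conf raid0            # force the configuration, as in doraid.sh
  raidctl -I `date +%Y%m%d%H%M` raid0    # label the set
  # -- no "raidctl -iv raid0" here --
  # ...then gpt/dkctl/newfs as in doraid.sh, and run bonnie++ as before.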
> Oh, also, I did try with a SectPerSU setting of 128; this seemed to
> offer similar performance.
>
> Have I just missed something stupid, or am I just expecting too much?
> A friend running linux seems to get much better performance (on a
> different motherboard) from a RAID5 setup using the exact same make
> and model of disc.
>
> The other 2 drives in the system seem okay (both HGST); running
> bonnie++ on the root drive, a 500G HGST on the same controller, with
> the same args:-
>
> Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> thejolly.dem 16000M 74276  48 77049  19 29356   6 71616  82 81768  11 145.0   0
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files:max            /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> 10:65536:0/100        1413  18 +++++ +++ +++++ +++  3872  50 +++++ +++ +++++ +++
> thejolly.demon.co.uk,16000M,74276,48,77049,19,29356,6,71616,82,81768,11,145.0,0,10:65536:0/100,1413,18,+++++,+++,+++++,+++,3872,50,+++++,+++,+++++,+++
Ya.... I suspect an alignment issue with the underlying RAID set.
Later...
Greg Oster