NetBSD-Users archive


RAID5 (RAIDFrame) performance seems low.



Hi All,

I've had a 4-disk RAID5 (RAIDFrame) array for a couple of years now with good
read performance but not great write performance. I always put that down to
not doing my reading beforehand and building an array that wasn't (n+1)
drives, where n is a power of 2, which would have allowed matching the FS
block size to the stripe size. That array was 4x500G Hitachi 7K1000
consumer-level SATA drives.
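
To put numbers on that mismatch (assuming the old array also used 64-sector
stripe units; I haven't dug the old config out to check):

    4 drives     -> 3 data columns
    stripe unit  =  64 sectors * 512 bytes = 32 KiB
    full stripe  =  3 * 32 KiB = 96 KiB

which isn't a power of two, so no FFS block size could cover a whole stripe
and writes presumably fell back to read-modify-write.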

Anywhoo, time has passed and it's time to replace the array with something a
little larger, so this time I've gone for 3x2T HGST 7K2000 drives. Despite
hoping for better write performance, so far it's looking much, much worse,
and I was wondering if anyone can suggest something I may have missed, or
whether it really should be that slow.

The drives are connected to an AMD 780G motherboard, using AHCI (same
board/setup as the previous RAID). All drives have write and read caching
enabled.
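
(If it's useful I can post the exact cache settings; I checked them per-drive
with something along these lines:

    # report the read/write cache state of each component drive
    for d in wd2 wd3 wd4; do
            echo "== $d =="
            dkctl $d getcache
    done
)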

The RAID is set up like so:-

(ian:~)$ cat raid0.conf
# raidctl config file for /dev/rraid0d

START array
# numRow numCol numSpare
1 3 0

START disks
/dev/wd2a
/dev/wd3a
/dev/wd4a

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
64 1 1 5

START queue
fifo 100
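
The stripe arithmetic, as I understand it, works out nicely for this layout
(which is why newfs below uses 64k blocks):

    stripe unit  = 64 sectors * 512 bytes = 32 KiB
    data columns = 3 columns - 1 parity = 2
    full stripe  = 2 * 32 KiB = 64 KiB

so a 64 KiB FFS block should map onto one full stripe and, in theory, avoid
read-modify-write on sequential writes.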

The disks are labeled:-

# /dev/rwd4a:
type: ESDI
disk: Hitachi HDS72202
label: fictitious
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 3876021
total sectors: 3907029168
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0

4 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 a: 3907029168         0       RAID                     # (Cyl.      0 - 3876020)
 d: 3907029168         0     unused      0     0        # (Cyl.      0 - 3876020)


And configured thusly:-

(ian:~)$ cat doraid.sh
#!/usr/local/bin/bash

label=`date +%Y%m%d%H%M`
echo Erasing raid
raidctl -uv raid0
echo Creating raid
raidctl -C raid0.conf raid0
echo Labeling raid $label
raidctl -I $label raid0
echo Parity rewrite
raidctl -iv raid0
echo Destroying GPT
gpt destroy raid0
echo Creating new GPT
gpt create raid0
echo Adding ufs partition
# gpt add prints an informational line suggesting a "dkctl ... addwedge"
# command; pull the suggested offset and size out of that message
sizes=`gpt add -tufs raid0 2>&1 | grep dkctl | sed "s/.*addwedge.......\([^<]*\).*/\1/"`
s1=`echo $sizes | cut -f1 -d' '`
s2=`echo $sizes | cut -f2 -d' '`
echo Creating wedges
dkctl raid0d delwedge dk0
echo dkctl raid0d addwedge dk0 $s1 $s2 ffs
dkctl raid0d addwedge dk0 $s1 $s2 ffs
echo Creating filesystem
# 64k FFS blocks to match the 64k full stripe
newfs -O2 -b64k -s -64m dk0
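
After the script finishes (and the parity rewrite completes) I can post the
array status too if it would help, i.e. something like:

    # component status and parity state; dirty or rebuilding parity during
    # the benchmark would obviously hurt write numbers
    raidctl -s raid0
    raidctl -p raid0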


Running iostat during a bonnie++ run shows about 3MB/sec of writes to each
drive, for example:

device  read KB/t    r/s   time     MB/s write KB/t    w/s   time     MB/s
wd2         17.00    104   0.99     1.73      22.66    174   0.99     3.86
wd3         17.00    107   0.76     1.78      22.47    174   0.76     3.82
wd4         17.00    102   1.00     1.69      22.68    171   1.00     3.79
raid0        0.00      0   1.00     0.00      64.00    104   1.00     6.50
dk0          0.00      0   1.00     0.00      64.00    104   1.00     6.50
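
If a simpler test would help take bonnie++ out of the picture, I can redo the
measurement with a plain sequential write while watching iostat, something
like this (the mount point is just a placeholder for wherever dk0 ends up
mounted):

    # sequential 64k writes through the filesystem on dk0 (~4 GB total);
    # /mnt/raid is a placeholder mount point
    dd if=/dev/zero of=/mnt/raid/ddtest bs=64k count=65536 &
    # per-device throughput every 5 seconds while the write runs
    iostat -d -w 5 wd2 wd3 wd4 raid0 dk0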


Running bonnie++ (bonnie++ -s16000M -n10:65536:0:100) I get the following
output:-

RAID (NO softdep, no log)
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thejolly.dem 16000M  6669   4  6664   1  5353   1 81150  92 161068  22 170.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max            /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
     10:65536:0/100    23   0 +++++ +++    29   0    23   0 19997  94    38   0
thejolly.demon.co.uk,16000M,6669,4,6664,1,5353,1,81150,92,161068,22,170.8,0,10:65536:0/100,23,0,+++++,+++,29,0,23,0,19997,94,38,0

RAID (softdep)
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thejolly.dem 16000M  6691   4  6668   1  5360   1 83257  95 161295  22 170.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max            /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
     10:65536:0/100  5955  82 +++++ +++ +++++ +++  6265  80 +++++ +++ +++++ +++
thejolly.demon.co.uk,16000M,6691,4,6668,1,5360,1,83257,95,161295,22,170.1,0,10:65536:0/100,5955,82,+++++,+++,+++++,+++,6265,80,+++++,+++,+++++,+++

RAID (WAPBL)
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thejolly.dem 16000M  6575   4  4002   1  3124   0 84728  96 156595  22 160.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max            /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
     10:65536:0/100  8381  96 +++++ +++ +++++ +++  8766  99 +++++ +++  1922   3
thejolly.demon.co.uk,16000M,6575,4,4002,1,3124,0,84728,96,156595,22,160.0,0,10:65536:0/100,8381,96,+++++,+++,+++++,+++,8766,99,+++++,+++,1922,3

This is running the netbsd-5 branch from CVS, built on the 7th, on an amd64
quad-core Phenom box with 8G of memory. CPU usage during the bonnie++ run is
negligible (2% or so on one core).

One thing I didn't test was raw write speed to the individual drives; I can
check this, but if possible I'd like to avoid another 6-7 hour parity rebuild.
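
For reference, the raw test I have in mind (and have been putting off, because
writing to a component trashes parity and means reconfiguring plus another
parity rewrite) is something like:

    # DESTRUCTIVE to the array: overwrites the start of one component
    dd if=/dev/zero of=/dev/rwd2d bs=1m count=4096
    # raw sequential read for comparison (harmless)
    dd if=/dev/rwd2d of=/dev/null bs=1m count=4096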

Oh, also, I did try with a sectPerSU setting of 128; that seemed to offer
similar performance.

Have I just missed something stupid, or am I expecting too much? A friend
running Linux seems to get much better performance (on a different
motherboard) from a RAID5 setup using the exact same make and model of disk.

The other two drives in the system seem okay (both HGST). Running bonnie++ on
the root drive, a 500G HGST on the same controller, with the same args:-

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
thejolly.dem 16000M 74276  48 77049  19 29356   6 71616  82 81768  11 145.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max            /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
     10:65536:0/100  1413  18 +++++ +++ +++++ +++  3872  50 +++++ +++ +++++ +++
thejolly.demon.co.uk,16000M,74276,48,77049,19,29356,6,71616,82,81768,11,145.0,0,10:65536:0/100,1413,18,+++++,+++,+++++,+++,3872,50,+++++,+++,+++++,+++


Cheers,

Ian

