Subject: Software RAID-0 performance
To: None <netbsd-users@netbsd.org>
From: None <sigsegv@rambler.ru>
List: netbsd-users
Date: 07/03/2004 02:28:20
I've been trying to measure NetBSD software RAID-0 performance on my
system, with two hard disks arranged for striping, but the bandwidth
numbers don't add up the way I would expect.
I'm using two Seagate IDE hard disks, /dev/wd0 and /dev/wd1, for RAID-0,
each connected to a separate IDE controller.
I run the following two commands concurrently to see how much data is
being read from each drive at the same time:
# dd if=/dev/rwd0d of=/dev/null bs=64k &
# dd if=/dev/rwd1d of=/dev/null bs=64k &
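(As a cross-check that doesn't rely on systat, the same test can be run over
a fixed amount of data so each dd prints its own bytes/sec summary when it
finishes; the count below is just an example that reads 1 GB from each drive.)

# dd if=/dev/rwd0d of=/dev/null bs=64k count=16384 &
# dd if=/dev/rwd1d of=/dev/null bs=64k count=16384 &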
Running "systat vmstat 1" gives me the following reading in regard to
data being read:
Disks:   seeks   xfers   bytes   %busy
fd0
md0
wd0               643     40M    97.0
wd1               618     39M    96.0
wd2
raid0
So, read independently, the two drives deliver close to 80M/sec combined.
However, running

# dd if=/dev/rraid0d of=/dev/null bs=64k

gives me the following systat reading:
Disks:   seeks   xfers   bytes   %busy
fd0
md0
wd0               606     19M    66.3
wd1               606     19M    52.5
wd2
raid0             606     38M    97.0
My question is: why is there such a huge performance drop on software
RAID-0 (I was expecting it to transfer at around 80M/sec, but it only
manages around 40M/sec) when both hard disks and the PCI bus are capable of
higher bandwidth? I have experimented with different sectors-per-stripe
settings etc., but it doesn't make much difference; in my case the optimal
value is about 64 sectors (32 KB per component, i.e. a 64 KB full stripe
across the two disks, which matches the 64 KB dd block size). Below is my
raid0.conf file.
START array
# numRow numCol numSpare
1 2 0
START disks
/dev/wd0e
/dev/wd1e
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
64 1 1 0
START queue
fifo 100
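(For reference, this is roughly how I re-test after changing sectPerSU; the
component-label serial number is just an arbitrary value I picked, and the
count reads 1 GB so dd reports its own throughput at the end.)

# raidctl -u raid0
# vi /etc/raid0.conf                  (change sectPerSU, e.g. 32, 64, 128)
# raidctl -C /etc/raid0.conf raid0
# raidctl -I 2004070301 raid0
# dd if=/dev/rraid0d of=/dev/null bs=64k count=16384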