Why is a wedge on RAID 10x slower?
I have a RAIDframe RAID-5 across three WD40EFRX drives. Each drive carries a
well-aligned GPT and wedge:
# gpt show wd0
      start        size  index  contents
          0           1         PMBR
          1           1         Pri GPT header
          2          32         Pri GPT table
         34          30
         64  7814037071      1  GPT part - NetBSD RAIDFrame component
 7814037135          32         Sec GPT table
 7814037167           1         Sec GPT header
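For reference, something like this produces that layout; a sketch only, with
gpt(8)'s "raidframe" type alias and the 32k alignment as illustrative choices:

# gpt create wd0
# gpt add -a 32k -t raidframe wd0

With -a 32k the partition starts at sector 64, a multiple of the drives'
4 KiB physical sectors.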
# raidctl -G raid0
# raidctl config file for /dev/rraid0d
START array
# numRow numCol numSpare
1 3 0
START disks
/dev/dk0
/dev/dk1
/dev/dk2
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
64 1 1 5
START queue
fifo 100
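For reference, that layout means 64-sector (32 KiB) stripe units and, with two
data columns plus one parity column, a 128-sector (64 KiB) full data stripe,
assuming 512-byte sectors:

# echo $((64 * 512))         # 32768 bytes (32 KiB) per stripe unit
# echo $(( (3 - 1) * 64 ))   # 128 data sectors (64 KiB) per full stripe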
Underlying write performance on the raw RAID device is great:
# dd if=/dev/zero of=/dev/rraid0d bs=1m
^C19476+0 records in
19475+0 records out
20421017600 bytes transferred in 67.089 secs (304386972 bytes/sec)
If I create a similarly well-aligned GPT and wedge on the RAID device,
write performance is horrible:
# gpt show raid0
      start        size  index  contents
          0           1         PMBR
          1           1         Pri GPT header
          2          32         Pri GPT table
         34          30
         64 15628073887      1  GPT part - NetBSD FFSv1/FFSv2
15628073951          32         Sec GPT table
15628073983           1         Sec GPT header
# dkctl raid0 listwedges
/dev/rraid0d: 1 wedge:
dk3: bigdata, 15628073887 blocks at 64, type: ffs
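One thing worth noting here (an observation, not a confirmed diagnosis): the
wedge starts at sector 64 of the RAID, while the full data stripe worked out
above is 128 sectors, so the wedge is not full-stripe aligned:

# echo $((64 % 128))   # prints 64; non-zero means dk3 is not stripe-aligned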
# dd if=/dev/zero of=/dev/rdk3 bs=1m
^C388+0 records in
387+0 records out
405798912 bytes transferred in 24.320 secs (16685810 bytes/sec)
I don't understand the disparity in performance.
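If that misalignment is the culprit, large writes through dk3 would straddle
stripe boundaries and push RAIDframe into its read-modify-write small-write
path for parity updates, which could plausibly cost an order of magnitude. A
sketch of one experiment that would test this, recreating the partition on a
full-stripe boundary (-b 128 is illustrative, taken from the 128-sector
stripe above, and the wedge may reappear under a different dk number):

# gpt remove -i 1 raid0
# gpt add -b 128 -t ffs raid0
# dd if=/dev/zero of=/dev/rdk3 bs=1m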
--
Stephen