NetBSD-Users archive
setting up raid5
I'm trying to set up a RAID 5 configuration across 3 disks, and it isn't
working.
I used this as raid1.conf:
START array
1 3 0
START disks
/dev/dk0
/dev/dk1
/dev/dk2
START layout
512 1 1 5
START queue
fifo 100
which is mostly copied from the raidctl man page. I haven't figured
out how to hard-wire particular GPT partitions to predictable wedge numbers,
but I think that's a separate problem. Here's what the wedges look
like:
# dkctl wd2 listwedges
/dev/rwd2d: 1 wedge:
dk0: 78c2265a-6c5b-11df-9801-000e0cc4b8a3, 3907029101 blocks at 34, type: raidframe
# dkctl wd3 listwedges
/dev/rwd3d: 1 wedge:
dk1: 78cb8dee-6c5b-11df-985a-000e0cc4b8a3, 3907029101 blocks at 34, type: raidframe
# dkctl wd4 listwedges
/dev/rwd4d: 1 wedge:
dk2: 78d62b64-6c5b-11df-98b3-000e0cc4b8a3, 3907029101 blocks at 34, type: raidframe
I then try to initialize the RAIDframe set:
# raidctl -v -C /tmp/raid1.conf raid1
# raidctl -v -I 2010053003 raid1
# raidctl -v -i raid1
Initiating re-write of parity
Parity Re-write status:
# raidctl -v -s raid1
Components:
/dev/dk0: optimal
/dev/dk1: optimal
/dev/dk2: failed
No spares.
Component label for /dev/dk0:
Row: 0, Column: 0, Num Rows: 1, Num Columns: 3
Version: 2, Serial Number: 2010053003, Mod Counter: 74
Clean: No, Status: 0
sectPerSU: 512, SUsPerPU: 1, SUsPerRU: 1
Queue size: 100, blocksize: 512, numBlocks: 3907028992
RAID Level: 5
Autoconfig: No
Root partition: No
Last configured as: raid1
Component label for /dev/dk1:
Row: 0, Column: 1, Num Rows: 1, Num Columns: 3
Version: 2, Serial Number: 2010053003, Mod Counter: 74
Clean: No, Status: 0
sectPerSU: 512, SUsPerPU: 1, SUsPerRU: 1
Queue size: 100, blocksize: 512, numBlocks: 3907028992
RAID Level: 5
Autoconfig: No
Root partition: No
Last configured as: raid1
/dev/dk2 status is: failed. Skipping label.
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
raid1: Component /dev/dk0 being configured at col: 0
Column: 0 Num Columns: 0
Version: 0 Serial Number: 0 Mod Counter: 0
Clean: No Status: 0
Number of columns do not match for: /dev/dk0
/dev/dk0 is not clean!
raid1: Component /dev/dk1 being configured at col: 1
Column: 0 Num Columns: 0
Version: 0 Serial Number: 0 Mod Counter: 0
Clean: No Status: 0
Column out of alignment for: /dev/dk1
Number of columns do not match for: /dev/dk1
/dev/dk1 is not clean!
raid1: Component /dev/dk2 being configured at col: 2
Column: 0 Num Columns: 0
Version: 0 Serial Number: 0 Mod Counter: 0
Clean: No Status: 0
Column out of alignment for: /dev/dk2
Number of columns do not match for: /dev/dk2
/dev/dk2 is not clean!
raid1: There were fatal errors
raid1: Fatal errors being ignored.
raid1: RAID Level 5
raid1: Components: /dev/dk0 /dev/dk1 /dev/dk2
raid1: Total Sectors: 7814057984 (3815458 MB)
raid1: WARNING: raid1: total sector size in disklabel (3519090688) != the size of raid (7814057984)
raid1: WARNING: raid1: total sector size in disklabel (3519090688) != the size of raid (7814057984)
raid1: IO Error. Marking /dev/dk2 as failed.
Unable to verify parity: can't read the stripe
Could not verify parity
raid1: Error re-writing parity!
raid1: WARNING: raid1: total sector size in disklabel (3519090688) != the size of raid (7814057984)
It just complains about an I/O error, with no details
(and no drive-level errors), and I can't figure out how to make
it resume using dk2. I see the "Number of columns do not match"
and "Column out of alignment" messages, but those appear for both
dk1 and dk2, and I'm not getting any I/O errors on dk1. (I'm
cranking down the sectors per stripe to 64, but that doesn't seem
likely to be the error.)
I assume that my raid1.conf file is wrong; clues welcome.
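For reference, here's the variant I'm about to try, with the stripe unit
cut from 512 sectors down to 64 as mentioned above (just a guess, not a
verified fix; the layout line is sectPerSU, SUsPerParityUnit, SUsPerReconUnit,
RAID level, per the raidctl man page):

START array
1 3 0

START disks
/dev/dk0
/dev/dk1
/dev/dk2

START layout
64 1 1 5

START queue
fifo 100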
--Steve Bellovin, http://www.cs.columbia.edu/~smb