NetBSD-Users archive
raidframe: Parity is dirty after every reboot
Good afternoon, everybody!
To avoid data loss if a drive breaks down (backups are made daily, but
the data on the drives changes too quickly for me to keep up
completely), I have made a raid1 over two partitions. Normally I
would probably use whole drives for that, but this is an old machine
where the number of usable drives is rather limited (Sun U60), so this
will have to do. In theory it should work fine. :-) To complicate things
a little, I also encrypted the raid with cgd. Again, in theory, this
should cause no problems - I think...
What I did is this:
I created two almost identical partitions (a few blocks off). But
RAIDframe knows that and truncates to the smaller size. The devices I used
are sd0e and sd1e. The output looks like this:
Components:
/dev/sd0e: optimal
/dev/sd1e: optimal
No spares.
Component label for /dev/sd0e:
Row: 0, Column: 0, Num Rows: 1, Num Columns: 2
Version: 2, Serial Number: 20080618, Mod Counter: 63
Clean: No, Status: 0
sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
Queue size: 100, blocksize: 512, numBlocks: 102399872
RAID Level: 1
Autoconfig: No
Root partition: No
Last configured as: raid0
Component label for /dev/sd1e:
Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
Version: 2, Serial Number: 20080618, Mod Counter: 63
Clean: No, Status: 0
sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
Queue size: 100, blocksize: 512, numBlocks: 102399872
RAID Level: 1
Autoconfig: No
Root partition: No
Last configured as: raid0
Parity status: DIRTY
Reconstruction is 100% complete.
Parity Re-write is 1% complete.
Copyback is 100% complete.
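The raid0.conf is not shown in the post, but a configuration file consistent
with the component labels above (2 columns, 128 sectors per stripe unit,
RAID level 1, queue size 100) would look roughly like this - reconstructed
from the labels, not copied from the actual machine:

```
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/sd0e
/dev/sd1e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100
```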
Note the DIRTY at the bottom.
I put all the stuff I have to do into a script that makes sure the raid
is working, attaches the cgd, checks the filesystem and mounts it. There
are checks in the script to stop it if there are errors. Minus these
(and the comments that are shown while the script is running), it boils
down to
raidctl -v -s raid0
cgdconfig -v -p cgd0 /dev/raid0c
fsck -fv -t ffs /dev/cgd0c
mount -o softdep /dev/cgd0c /secret
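For completeness, a minimal version of such a script with stop-on-error
checks might look like the following - the device names and options are
taken from the commands above, but the checks themselves are my guess,
since the post omits them:

```shell
#!/bin/sh
# Hypothetical sketch of the attach script; aborts on the first
# failing step so a broken RAID never gets mounted.
set -e

raidctl -v -s raid0                 # show status; fails if raid0 is not configured
cgdconfig -v -p cgd0 /dev/raid0c    # attach the cgd (prompts for the passphrase)
fsck -fv -t ffs /dev/cgd0c          # force a full filesystem check first
mount -o softdep /dev/cgd0c /secret
```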
Note that I always used partition c (i.e. raid0c and cgd0c). If I
understand the partitioning correctly, this should pose no problem
because I am using only one partition on these devices. That is why I
just used the "whole disk". I could move the cgd to raid0a or something,
if that makes a difference. But there is a lot of data there to move, so
I'd rather not do it just as an experiment.
After every reboot, the RAID is dirty and starts rebuilding the parity.
This takes forever because of the rather slow connection of the drives
(see other post about this subject).
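In case it matters: nothing in the script above is undone at shutdown.
The teardown I would expect to need (a guess, simply mirroring the attach
sequence) would be something like:

```shell
# Hypothetical clean-shutdown sequence, mirroring the attach steps.
# If raid0 is never unconfigured, the kernel may not get a chance to
# mark the parity clean before the reboot.
umount /secret
cgdconfig -u cgd0      # detach the cgd
raidctl -u raid0       # unconfigure the RAID
```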
I am running this:
NetBSD sunny 4.99.66 NetBSD 4.99.66 (SUNNY) #0: Wed Jun 25 12:40:00 CEST
2008
christian@sunny:/usr/build/obj.sparc64/usr/src/sys/arch/sparc64/compile/SUNNY
sparc64
Where did I mess up? Why does RAIDframe want to rebuild the parity
after *every* reboot - including the ones that went smoothly, without any
crashes and with only cleanly unmounted filesystems?
Regards,
Chris