Subject: Re: RAIDFRAME questions
To: None <kilbi@rad.rwth-aachen.de>
From: Greg Oster <oster@cs.usask.ca>
List: current-users
Date: 01/04/2000 15:41:55
Markus Kilbinger writes:
> Hi!
>
> With NetBSD-1.4.2_ALPHA I successfully created a RAID 1 with 2 18 GB
> SCSI disks (sd1 and sd2). After a system crash the RAID 1 device
> (/dev/raid0d) was fsck-ed successfully and the raid seems to be fully
> operational but as boot message I see now:
>
> RAIDFRAME: protectedSectors is 64
> raid0: Component /dev/sd1a being configured at row: 0 col: 0
> Row: 0 Column: 0 Num Rows: 1 Num Columns: 2
> Version: 1 Serial Number: 72861750 Mod Counter: 43
> Clean: 0 Status: 0
> /dev/sd1a is not clean!
> raid0: Component /dev/sd2a being configured at row: 0 col: 1
> Row: 0 Column: 1 Num Rows: 1 Num Columns: 2
> Version: 1 Serial Number: 72861750 Mod Counter: 43
> Clean: 0 Status: 0
> /dev/sd2a is not clean!
>
> Does the '/dev/sd{1,2}a is not clean!' mean real harm, or is it just
> cosmetic? Especially in the first case how to fix it?
What it's saying is that the mirror disk is not in sync with the primary data
disk, which means that if the primary disk dies, you *may* lose data.
What you want to do is:
1) unmount the raid partition
2) do a:
       raidctl -i raid0
   to synchronize the two disks
3) re-run the fsck, just to be sure
4) re-mount the raid partition
Steps 1, 3, and 4 are probably not necessary, but shouldn't hurt.
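Put together, the sequence looks something like this (the mount point
/mnt/raid and the 'a' partition are assumptions for illustration; substitute
whatever your setup actually uses):

```shell
# Recovery sequence after an unclean RAID 1 shutdown.
umount /mnt/raid              # 1) unmount the raid partition
raidctl -i raid0              # 2) rewrite parity, re-syncing the mirror
fsck /dev/rraid0a             # 3) re-check the filesystem, just to be sure
mount /dev/raid0a /mnt/raid   # 4) re-mount it
```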
Since you're running 1.4.2_ALPHA, you will already have the bug fix which
prevents the system from reading any data from the mirror when the
mirror is not in sync with the primary disk. (So as long
as the primary disk doesn't die, your data will be correct).
But you *do* want to get that 'raidctl -i raid0' done right away.
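If you want to confirm afterwards that the re-sync took, raidctl can report
the set's status (the -p parity-check flag is from raidctl(8) on NetBSD;
if your 1.4.2 raidctl doesn't have it yet, -s alone still shows component
status):

```shell
raidctl -s raid0   # per-component status; both components should be healthy
raidctl -p raid0   # reports whether the parity (mirror) is clean
```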
> Another question: With the mounted /dev/raid0d 'raidctl -s raid0' just
> says:
>
> raidctl: unable to open device file: /dev/raid0d
>
> -> Are all these control and reconstruct raidctl commands only
> available for unmounted raid devices??
Are you mounting /dev/raid0d as a filesystem (this is i386, I'm assuming)?
If so, that might be the problem (I've never tried mounting the "entire raid
partition" like that... I always use a non-"raw" partition like /dev/raid0e).
You can certainly view the status of a raid set with mounted filesystems.
You can even rebuild the parity on a mounted filesystem... even when I/O is
being performed to that filesystem...
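For example, assuming the disklabel on raid0 has an 'e' partition as
suggested above (the mount point /mnt is just an example):

```shell
mount /dev/raid0e /mnt   # mount a non-"raw" partition, not raid0d
raidctl -s raid0         # viewing status works with the filesystem mounted
raidctl -i raid0         # parity rebuild also works under live I/O
```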
[Thanks for waiting until I was back from holidays before asking your
question ;) ]
Later...
Greg Oster