Current-Users archive


Re: RAIDframe question



Thank you all for being so quick to come forth with answers on this!
I'm still not sure why my first set went strange on me, but this
certainly explains why everything else that seems slightly amiss
actually isn't!

The more you think you know... :)

On Tue, Jun 16, 2020 at 1:19 PM Brian Buhrow <buhrow%nfbcal.org@localhost> wrote:
>
>         hello.  If you reboot again, raid2 will probably look as you
> expect.  The general procedure for disk replacement is:
>
> 1.  raidctl -a /dev/newdisk raidset
>
> 2.  raidctl -F /dev/baddisk raidset (fails the bad disk, uses the spare, and reconstructs to it)
>
> 3.  The RAID is left with a used_spare, but all is well.
>
> 4.  Reboot.  All components become optimal.
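>
> A minimal sketch of that sequence as a shell session (the device and
> set names here are hypothetical -- substitute your own):
>
>     # Hot-add the replacement disk as a spare:
>     raidctl -a /dev/wd2c raid0
>     # Fail the bad component; reconstruction onto the spare begins:
>     raidctl -F /dev/wd1c raid0
>     # Watch reconstruction progress and component status; the new
>     # disk will show as used_spare until the post-rebuild reboot:
>     raidctl -S raid0
>     raidctl -s raid0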
>
>         It has long been my desire that once a spare is used, it get
> automatically promoted to optimal without the intervening reboot.  I
> probably could have made this change with Greg's blessing, but I never did
> the work.
>
> Hope that helps.
> -Brian
>
> On Jun 16, 12:18am, Greywolf wrote:
> } Subject: Re: RAIDframe question
> } I don't know what I did to get that volume to recover but ripping
> } it apart and placing the good component first on reconfiguration
> } produced a good volume on a rebuild.  As I recall it looked a lot like this:
> }
> } Components:
> }           component0: failed
> }             /dev/wd1c: optimal
> } Spares:
> }             /dev/wd0c: spare
> } component0 status is: failed. skipping label
> } Component label for /dev/wd1c:
> }    Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
> }    Version: 2, Serial Number: 1984, Mod Counter: 7232
> }    Clean: No, Status: 0
> }    sectPerSU: 128, SUsPerPU: 4, SUsPerRU: 1
> }    Queue size: 120, blocksize: 512, numBlocks: 976772992
> }    RAID Level: 1
> }    Autoconfig: Yes
> }    Root partition: No
> }    Last configured as: raid1
> } /dev/wd0c status is: spare.  Skipping label.
> } Reconstruction is 100% complete.
> } Parity Re-write is 100% complete.
> } Copyback is 100% complete.
> }
> } On the other hand, I have the following showing up after
> } a rebuild (different volume, "raid2", mirrored 2TB disks):
> }
> } Components:
> }             /dev/dk0: optimal
> }           component1: spared
> } Spares:
> }             /dev/dk1: used_spare
> } Component label for /dev/dk0:
> }    Row: 0, Column: 0, Num Rows: 1, Num Columns: 2
> }    Version: 2, Serial Number: 3337, Mod Counter: 468
> }    Clean: No, Status: 0
> }    sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
> }    Queue size: 100, blocksize: 512, numBlocks: 3907028992
> }    RAID Level: 1
> }    Autoconfig: Yes
> }    Root partition: No
> }    Last configured as: raid2
> } component1 status is: spared.  Skipping label.
> } Component label for /dev/dk1:
> }    Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
> }    Version: 2, Serial Number: 3337, Mod Counter: 468
> }    Clean: No, Status: 0
> }    sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
> }    Queue size: 100, blocksize: 512, numBlocks: 3907028992
> }    RAID Level: 1
> }    Autoconfig: Yes
> }    Root partition: No
> }    Last configured as: raid2
> } Parity status: clean
> } Reconstruction is 100% complete.
> } Parity Re-write is 100% complete.
> } Copyback is 100% complete.
> }
> } I've been through enough different results that it's hard to tell whether
> } that is sane; I would have expected /dev/dk1 to have shifted up to
> } 'optimal' and component1 to have vanished.
> }
> } On Sat, Jun 13, 2020 at 11:48 PM Martin Husemann <martin%duskware.de@localhost> wrote:
> } >
> } > On Sat, Jun 13, 2020 at 09:44:35PM -0700, Greywolf wrote:
> } > > raidctl -a /dev/wd0c raid1
> } > >
> } > > raidctl -F component0 raid1
> } >
> } > I would have expected that to work. What is the raidctl status output
> } > after the -a ?
> } >
> } > Martin
> }
> }
> }
> } --
> } --*greywolf;
> >-- End of excerpt from Greywolf
>
>


-- 
--*greywolf;

