Re: Best practice for setting up disks for ZFS on NetBSD
On Thu, 3 Dec 2020 at 08:31, Tobias Nygren <tnn%netbsd.org@localhost> wrote:
>
> On Thu, 3 Dec 2020 00:30:17 +0000 David Brownlee <abs%absd.org@localhost> wrote:
> >
> > In the event of disk renumbering both are thrown out, needing a "zpool
> > export foo; zpool import foo" to recover. Is there some way to avoid that?
>
> You can use named gpt wedges and symlink them to a stable path in /dev.
> I did this manually for my storage box setup but was later informed we
> have devpubd(8) which is capable of doing this.
Aha - *this* is nice. I recall seeing devpubd mentioned a while back
but never took a look. More fool me!
One small issue - rcorder shows rc.d/devpubd running quite late in the
boot process - much later than rc.d/zfs. I wonder if it should be
adjusted to run directly after rc.d/root?
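For the archives, the recipe I'm now looking at is roughly the below
(wd0 and the "tank0" label are just example names, and this is my
reading of the man pages rather than a tested setup):
  # one big labelled zfs partition per disk
  gpt create wd0
  gpt add -l tank0 -t zfs wd0
  # let devpubd publish stable names as wedges come and go
  echo "devpubd=YES" >> /etc/rc.conf
  /etc/rc.d/devpubd start
  # and the ordering question above:
  rcorder /etc/rc.d/* | grep -n -e devpubd -e zfs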
On Thu, 3 Dec 2020 at 08:45, Hauke Fath <hauke%espresso.rhein-neckar.de@localhost> wrote:
>
> On Thu, 3 Dec 2020 00:30:17 +0000, David Brownlee wrote:
> > - Wedges, setup as a single large gpt partition of type zfs (eg /dev/dk7)
> > - Entire disk (eg: /dev/wd0 or /dev/sd4)
>
> "Traditional" (solarish) zfs lore recommends giving zfs the entire
> disk, unpartitioned, since it can make more efficient use of it then.
> Zfs will generally be able to re-assemble a volume after renumbering
> components - I have seen it do that on OmniOS after swapping an Areca
> RAID controller in JBOD mode out for a plain SAS controller.
That was one of my motives for preferring the whole-disk approach
(in addition to not cluttering up iostats with nearly a dozen
otherwise unwanted dk entries). However, if on NetBSD disk
renumbering can be handled by a wedge setup but not by whole disks,
then I'll be switching to wedges...
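For concreteness, the difference is only in which device names end up
in the pool (device names are the ones from my original mail, and
mirror is just for the sake of an example):
  # whole-disk flavour
  zpool create tank mirror /dev/wd0 /dev/wd1
  # wedge flavour
  zpool create tank mirror /dev/dk7 /dev/dk8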
On Thu, 3 Dec 2020 at 02:47, HIROSE yuuji <yuuji-netbsd%es.gentei.org@localhost> wrote:
>
> >> On Thu, 3 Dec 2020 00:30:17 +0000 abs%absd.org@localhost (David Brownlee) said:
> >
> > What would be the best practice for setting up disks to use under ZFS
> > on NetBSD, with particular reference to handling renumbered devices?
>
> Creating a raidframe for those wedges or disks and running
> "raidctl -A yes" is helpful for creating a stable device name for the zpool.
>
> I prefer to use a dummy raidframe even if the host has only a single device,
> to keep HDDs/SSDs bootable when they are attached to USB-SATA adapters.
Ahh, that's cute - I don't know why that didn't occur to me :) (Used a
similar setup for PCengines ALIX/APU USB images for older NetBSD
releases - for small enough disks on more recent NetBSD releases we
switched to plain ROOT{a,b,e} in fstab)
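For anyone else tempted by the dummy-raidframe trick, my understanding
of it (untested here, and the component path, serial number and pool
name are just placeholders) is a config file along these lines:
  # raid0.conf - one real component plus an "absent" one
  START array
  # numRow numCol numSpare
  1 2 0
  START disks
  absent
  /dev/wd0a
  START layout
  # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
  128 1 1 1
  START queue
  fifo 100
followed by something like:
  raidctl -C raid0.conf raid0
  raidctl -I 2020120301 raid0    # arbitrary serial number
  raidctl -A yes raid0           # autoconfigure at whatever unit it lands on
  zpool create tank /dev/raid0d  # or put a wedge on raid0 first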
On Thu, 3 Dec 2020 at 07:18, Malcolm Herbert <mjch%mjch.net@localhost> wrote:
>
> As far as I understand it, ZFS vdevs have their own ID, so they can be laid out correctly no matter the OS device each is discovered on ... wouldn't that make a raidframe wrapper redundant? It would also mean the zpool vdevs couldn't be used on other systems that understand ZFS, because they're unlikely to understand raidframe ...
That's what _should_ happen, but I believe ZFS relies on a persistent
device identifier such as /dev/rdsk/c0t5d0s2 or
/dev/disk/by-uuid/48e3b830-ff84-4434-ac74-b57b2ca59842, which NetBSD
doesn't directly provide. However... Tobias pointed out devpubd,
which, apart from the rc.d ordering, is close to perfect for this when
combined with wedges.
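(Pending the rc.d tweak, doing by hand what Tobias described tides
things over - something like the below, where /dev/wedges is just a
directory name I picked and dk7 is wherever the labelled wedge happens
to sit today:)
  # see which dkN the labelled wedge currently is
  dkctl wd0 listwedges
  # give it a renumbering-proof path and build the pool against that
  mkdir -p /dev/wedges
  ln -s /dev/dk7 /dev/wedges/tank0
  zpool create tank /dev/wedges/tank0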
On Thu, 3 Dec 2020 at 00:38, Brian Buhrow <buhrow%nfbcal.org@localhost> wrote:
>
> hello David. In the absence of other variables, I'd suggest using
> wedges. That gives you the ability to replace disks that go bad with
> differently sized disks in the future, while still retaining your zfs vdev
> sizes, something zfs likes a lot.
> Also, I'm pretty sure zfs recovers fine from wedge renumbering, at
> least it does under FreeBSD, much like raidframe does when it's
> autoconfiguring.
> I should say that while I have a lot of experience with zfs under
> FreeBSD, I've not used it much under NetBSD, mostly due to its instability,
> which is apparently now becoming much less of a problem -- something I'm
> very happy about.
I've had one box which managed to trip over a bunch of issues creating
and mounting zfs filesystems (now all resolved by others and pulled up
into -9), but I've been using it for a while now on a few other
machines, with both mirrored and simple multi-disk pools, and have
been very happy overall.
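(One practical upshot of Brian's point about differently sized
replacements, for anyone reading later: make the wedge slightly
smaller than the raw disk, so a marginally smaller replacement still
fits - the figure below is only an illustration for a nominal 4 TB
drive:)
  # a shade under the full disk, size given in 512-byte sectors
  gpt create wd1
  gpt add -l tank1 -t zfs -s 7800000000 wd1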
Thanks for all the replies :)
David