NetBSD-Bugs archive
Re: kern/57583: zpool import will not find pools on disklabel partitions
Replying to attach some details of a discussion on netbsd-users@.
NetBSD ZFS has an optimisation which uses hw.disknames to avoid
scanning and probing every entry in /dev/, which can take some time
and may have unwanted side effects when non-disk devices are probed.
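
For illustration, a minimal sketch in C (assuming only the documented
sysctl interface, not the actual libzfs import code) of how that disk
list can be obtained:

#include <sys/param.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    size_t len = 0;
    char *names, *name, *last;

    /* First call reports the buffer size needed for the list. */
    if (sysctlbyname("hw.disknames", NULL, &len, NULL, 0) == -1)
        return 1;
    if ((names = malloc(len)) == NULL)
        return 1;
    if (sysctlbyname("hw.disknames", names, &len, NULL, 0) == -1)
        return 1;

    /* The value is a space separated list, e.g. "wd0 wd3 cd0". */
    for (name = strtok_r(names, " ", &last); name != NULL;
        name = strtok_r(NULL, " ", &last))
        printf("%s\n", name);

    free(names);
    return 0;
}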
An alternative could be to keep the hw.disknames optimisation, but in
that case also scan the partitions of each device listed.
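
As a rough sketch of what "also scan the partitions" could look like -
the helper below is hypothetical, not an existing libzfs function -
each name from hw.disknames would additionally yield its disklabel
partition nodes as probe candidates:

#include <stdio.h>
#include <util.h>   /* getmaxpartitions(3); link with -lutil */

/*
 * Given one disk name from hw.disknames (e.g. "wd3"), emit the
 * disklabel partition device paths (/dev/wd3a .. /dev/wd3p) that
 * the import scan could also probe for pool labels.
 */
static void
list_partition_candidates(const char *disk)
{
    int i, maxpart = getmaxpartitions();

    for (i = 0; i < maxpart; i++)
        printf("/dev/%s%c\n", disk, 'a' + i);
}

int
main(void)
{
    /* "wd3" stands in for each entry reported by hw.disknames. */
    list_partition_candidates("wd3");
    return 0;
}

The real change would feed these paths into the existing label probing
code rather than printing them, and would still skip anything not
listed in hw.disknames.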
On Sun, 13 Aug 2023 at 18:00, <abs%absd.org@localhost> wrote:
>
> >Number: 57583
> >Category: kern
> >Synopsis: zpool import will not find pools on disklabel partitions
> >Confidential: no
> >Severity: serious
> >Priority: medium
> >Responsible: kern-bug-people
> >State: open
> >Class: sw-bug
> >Submitter-Id: net
> >Arrival-Date: Sun Aug 13 17:00:00 +0000 2023
> >Originator: David Brownlee
> >Release: netbsd-10
> >Organization:
> >Environment:
> NetBSD forsaken.absd.org 10.0_BETA NetBSD 10.0_BETA (GENERIC) #0: Fri Aug 4 19:55:08 UTC 2023 mkrepro%mkrepro.NetBSD.org@localhost:/usr/src/sys/arch/amd64/compile/GENERIC amd64
>
> >Description:
> zpool import only appears to check top level devices (dk0, wd0, etc.), not partitions.
>
> I had a single-partition zpool using the latter part of a disk (wd3e) - I know this is generally discouraged, but I needed the space to shuffle things around and I wanted to keep all the data on zfs.
>
> Once created, the zpool will happily persist, but if "zpool export"ed it cannot be imported unless a new directory is created which contains the necessary partition devices but not the zfs devices:
>
> # zpool export onyx3
>
> # zpool import onyx3
> cannot import 'onyx3': no such pool available
>
> # zpool import -d /dev onyx3
> cannot import 'onyx3': no such pool available
>
> # mkdir /tmp/dev ; ( cd /tmp/dev ; /dev/MAKEDEV wd3 ) ; zpool import -d /tmp/dev onyx3
>
> # disklabel wd3 | tail -6
> 5 partitions:
> #        size    offset     fstype [fsize bsize cpg/sgs]
>  a:  419430400      2048       RAID                    # (Cyl.      2*-  416103*)
>  c: 3907027120      2048     unused      0     0       # (Cyl.      2*- 3876020)
>  d: 3907029168         0     unused      0     0       # (Cyl.      0 - 3876020)
>  e: 3487596720 419432448        ccd                    # (Cyl. 416103*- 3876020)
> >How-To-Repeat:
> Try to import a zfs pool with one or more components on a disklabel partition
> >Fix:
>