tech-kern archive


ZFS weird behavior during export => import with wedges



Greetings everyone,

I have been using ZFS regularly since -9 without many hiccups, apart from the usual renumbering of devices after hardware changes, which requires a bit of fiddling via export/import. On -10 I decided to try it with wedges to get familiar with them; however, I am seeing strange behavior with the export => import of pools.

Long story short: with wedges, I need to create a symlink to the wedge device in a separate directory (like /etc/zfs/) and then run "zpool import -d /etc/zfs" to successfully import an exported pool; using /dev directly did not work, and even an explicit "zpool import -d /dev" does not change much. An illustration of the culprit follows.
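
In shell terms the workaround boils down to this (a sketch; /etc/zfs is merely the directory I happened to pick, and any directory outside /dev seems to do):

	mkdir -p /etc/zfs                 # any directory outside /dev
	ln -s /dev/dk5 /etc/zfs/dk5       # symlink pointing at the wedge device
	zpool import -d /etc/zfs saves    # import by scanning the symlink directory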

FWIW, dk5 is a wedge residing on a cgd(4), but wedges backed by GPT partitions on a plain disk show similar behavior.

[ 1211.115667] cgd0: GPT GUID: 87f71c64-d592-4e5c-b5e9-3a84cf0f8f84
[ 1211.115667] dk5 at cgd0: "saves", 1000215076 blocks at 34, type: zfs
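
For context, the stack underneath was assembled roughly as follows (a sketch from memory rather than a paste; the backing device, cgd parameters and GPT details here are assumptions, not taken from the logs above):

	cgdconfig cgd0 /dev/wd1a       # configure the cgd(4) over the backing disk
	gpt create cgd0                # create a GPT on the cgd
	gpt add -t zfs -l saves cgd0   # ZFS-type partition; the kernel then
	                               # auto-configures the matching wedge (dk5 above)
	dkctl cgd0 listwedges          # verify the wedge shows up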

minas-tirith# uname -a
NetBSD minas-tirith 10.0_BETA NetBSD 10.0_BETA (GENERIC) #0: Wed Feb 1 19:00:10 UTC 2023

minas-tirith# zpool status
  pool: saves
 state: ONLINE
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	saves           ONLINE       0     0     0
	  /dev/dk5      ONLINE       0     0     0

minas-tirith# zpool export saves

minas-tirith# zpool import saves
cannot import 'saves': no such pool available

minas-tirith# zdb -l /dev/rdk5
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'saves'
    state: 1
    txg: 20516
    pool_guid: 3346903533930117211
    hostname: 'minas-tirith'
    top_guid: 8487525857792867729
    guid: 8487525857792867729
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8487525857792867729
        path: '/etc/zfs/dk5'
        whole_disk: 0
        metaslab_array: 37
        metaslab_shift: 32
        ashift: 9
        asize: 512105381888
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
[and so forth, all ZFS labels are there and valid]
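
Side note: the label records the vdev under path: '/etc/zfs/dk5' (presumably left over from a previous import through the symlink directory), which may be related to why the pool is not found via /dev. A quick way to check the recorded path, assuming the label layout shown above:

	zdb -l /dev/rdk5 | grep path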

minas-tirith# ln -s /dev/dk5 /etc/zfs/dk5
minas-tirith# zpool import -d /etc/zfs saves
minas-tirith# zpool status
  pool: saves
 state: ONLINE
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	saves           ONLINE       0     0     0
	  /etc/zfs/dk5  ONLINE       0     0     0

errors: No known data errors
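
For completeness, one way to have the symlink recreated at boot would be a snippet in /etc/rc.local (an untested sketch; whether rc.local runs early enough relative to the zfs rc.d script is something I have not verified):

	# recreate the symlink to the wedge before any (re)import via /etc/zfs
	[ -e /etc/zfs/dk5 ] || ln -s /dev/dk5 /etc/zfs/dk5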


FWIW, this is a vanilla NetBSD-10 install on this host, not an upgrade from NetBSD-9.

Comments appreciated. I can live with this for the moment, but it feels weird to me (especially the difference in treatment between "-d /dev" and "-d /etc/zfs"). FWIW, I don't see this behavior on my other hosts/NAS, but those use entire disks (wd(4)) for ZFS, not wedges.

Thanks!

--
Jean-Yves Migeon
jym@
https://www.NetBSD.org

