(I'm testing on 9, but am guessing this is similar on current, and that if it gets fixed anywhere it will be there, not necessarily pulled up to 9.)

I'm starting to try out zfs. So far I don't have any data that matters. On a 1T SSD I have wd0[abe] as root/swap/usr on an unremarkable netbsd-9 system, on an unremarkable amd64 desktop with 8G of RAM. I created pool1 with wd0f, which is the rest of the 1T disk, about 850G, not RAID of any kind. I created a few filesystems, changed their mount points, changed their options, and mounted one over NFS from another machine, and all seemed ok. (Yes, I realize the doctrine that "use the whole disk as a zfs component" is the preferred approach.)

I wanted to rename my pool from pool1 to tank0, for no good reason: mostly to try all the scary things while the only data I had was a pkgsrc checkout, but partly because I had seen Stephen Borrill's report of import trouble. So I did

  zpool export pool1

and sure enough all my zfs stuff was gone. Then I did, per the man page:

  zpool import

and nothing was found.

After a bunch of reading and ktracing, I realized that there is no record of the pool in /etc/zfs or anywhere else I could find; the notion is that zpool import will somehow find all the disks that have zfs data on them, apparently by opening all disks and looking for some kind of ZFSMAGIC. But it looked at wd0 and not the slices, and there was no apparent way to ask it to look at wd0f specifically. So I did

  cd /dev; rm wd0; ln -s wd0f wd0

which is icky, but then zpool import found wd0f and I could

  zpool import pool1 tank0

So this feels like a significant bug, and matches Stephen Borrill's report. I think we're heading toward documenting this in the wiki, or at least I am. Does anyone think I have this wrong? Is anyone inclined to do anything more serious?
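For the record, here is the whole workaround sequence in one place, as I ran it. Device names (wd0, wd0f) are of course from my setup, the symlink hack is a kludge and not something to leave in place, and the MAKEDEV step at the end is my assumption about the cleanest way to restore the node:

```shell
# Rename a pool whose only vdev is a slice (wd0f) that
# "zpool import" will not scan directly.

zpool export pool1            # pool and its filesystems disappear

# "zpool import" opens whole-disk nodes (wd0) looking for labels and
# misses wd0f, so temporarily point the whole-disk node at the slice:
cd /dev
rm wd0
ln -s wd0f wd0

zpool import                  # now reports the pool it found on "wd0"
zpool import pool1 tank0      # import it under the new name

# Afterwards, undo the hack and recreate the real device node,
# e.g. (on NetBSD) with:
rm wd0
sh MAKEDEV wd0
```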