Current-Users archive
ZFS on current vs wedges - best practice?
I had forgotten about this little detail, and am not sure about the best
way to deal with it.
I have four disks partitioned with GPT (so I can create a raidframe raid1
on part of each disk, and use the rest for ZFS), and I made the
mistake (?) of using the wedge device names to create the zpool. So, after a
reboot (but not the first time! it only happened after n reboots), the
wedges reordered themselves, and now my zpool looks like this:
        NAME                      STATE     READ WRITE CKSUM
        tank                      UNAVAIL      0     0     0
          raidz2-0                UNAVAIL      0     0     0
            3140223856450238961   UNAVAIL      0     0     0  was /dev/dk4
            1770477436286968258   FAULTED      0     0     0  was /dev/dk5
            11594062134542531370  UNAVAIL      0     0     0  was /dev/dk6
            dk7                   ONLINE       0     0     0
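
(For reference, the pool was created using the dkN device nodes, roughly
like the sketch below; wd0 and the wedge numbers are just what my system
happened to assign, so treat them as placeholders.)

    # roughly how the pool was created, with the wedge device nodes as they were at the time
    zpool create tank raidz2 dk4 dk5 dk6 dk7

    # to see which wedge currently maps to which GPT partition on a given disk
    dkctl wd0 listwedges
    gpt show -l wd0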
I _think_ I can figure out how to recover my data without recreating
the entire pool. (I hope - suggestions there are welcome as well! Once I
recover this time, I'm going to have to replace the vdevs one at a time
anyway, because I just realized the wedges are misaligned to the
underlying disk block size. Sigh.)
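
(The rough plan for that, in case someone spots a problem with it; dk8
stands in for whatever the replacement wedge ends up being called, and
the gpt add line is only a guess at the right alignment/type syntax:)

    # check where the existing wedges start, relative to the disk's native block size
    gpt show wd0

    # recreate the ZFS partition aligned; alignment value and type alias are guesses
    #gpt add -a 1m -i 2 -l tank0 -t fbsd-zfs wd0

    # then swap the new wedge in for the old one, using the GUID zpool status prints
    zpool replace tank 3140223856450238961 dk8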
However, I'm not sure of the best way (is there a way?) to keep this from
happening again. Can I use wedge names, and will those persist across
boots? (See the sketch below for what I mean.) Other than this minor
detail, I've been quite happy with ZFS in 9 and -current.
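
(To be concrete about wedge names: something along these lines, where
tank0 is just an example label and index 2 is just my layout; whether
zpool can be pointed at those names, and whether they survive the
renumbering, is exactly what I'm unsure about.)

    # give the ZFS partition on each disk a GPT label
    gpt label -i 2 -l tank0 wd0

    # the wedge should then show up with that name
    dkctl wd0 listwedges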