Source-Changes-D archive


Re: CVS commit: src/etc



I apologize for failing to understand "zfs legacy mount" and incorrectly
associating it with how I usually encounter the word legacy.

I now understand that you meant to separate:

  zfs's preferred approach of storing mountpoints as volume properties,
  which is a different way of specifying what is mounted where than
  everything else uses, but makes sense in a world with many zfs
  volumes

  having a zfs volume that, instead of using the normal zfs way, is
  mounted via an fstab entry

So having re-thought I would say:

  It makes sense to have a boolean "critical" property (the name you
  suggested is fine) for zfs volumes that specify a mount point, so that
  such volumes would be mounted in mountcritlocal.  I am 100% happy for
  someone to add that and see no big problems.
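
  If that were done as a zfs user property, I'd picture something
  roughly like this (the property name is purely illustrative, nothing
  implements it today):

    zfs set org.netbsd:critical=on tank/var

  with mountcritlocal mounting any attached dataset that has the
  property set, in addition to what it already does.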

  It makes sense to just put zfs volume mountpoints in
  critical_filesystems_local, if those volumes use the fstab method
  instead of mountpoint properties (i.e., are "zfs legacy mounts").
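
  Spelled out, that is just the existing knob, e.g. for the tank/var
  example above:

    # /etc/rc.conf
    critical_filesystems_local="/var"

  with tank/var set to mountpoint=legacy and listed in /etc/fstab.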

  I think this is tricky if there are multiple pools and some don't come
  up.  But I think it's ok if the main path is that one should have all
  critical zfs filesystems on the same pool as root, with root on zfs.
  With root not on zfs but say /usr and /var on zfs, there needs to be
  some way for the boot to fail if they aren't mountable, just as if
  they were in fstab, in case the pool doesn't attach and thus the
  critical properties aren't readable.  That might mean "if root is not
  zfs, and something in critical_filesystems_{local,remote} is in zfs,
  then those things have to use zfs legacy mounts".  It might mean
  having "critical_zfs_pools_{local,remote}", which would fail the boot
  if those pools are not attached at the mountcritlocal/mountcritremote
  stage, so that the critical properties are reliably there.
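
  If we went the critical_zfs_pools route, I'd picture rc.conf lines
  roughly like these (entirely hypothetical, nothing reads them today):

    critical_zfs_pools_local="tank"
    critical_zfs_pools_remote=""

  where mountcritlocal/mountcritremote would refuse to proceed if a
  listed pool isn't attached.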

  I am opposed to deciding that all zfs filesystems should be treated as
  critical (in a world where we have not abolished the notion).

  We could have a discussion about why we even have the concept of
  critical filesystems, but if so that should not be about zfs and
  should be in a new thread on tech-userlevel.  And, I think it really
  isn't strongly related to this discussion.


For background: I used to understand the critical filesystem scheme
better, but I'll briefly say (projecting to modern rc.d) that I think
it's about sequencing: getting enough filesystems mounted to be able to
hit the NETWORKING rc.d barrier.  Consider a diskless workstation with
separate / and /usr (because /usr is shared across all 10 Sun 3/50s :-)
that also needs to mount some other filesystem from a server beyond the
LAN, which requires running routed.  Sorry if that gives you bad
flashbacks to the 80s :-)

In modern NetBSD I put /var and /usr in critical_filesystems_local,
because I want route6d to start, and that's in /usr/sbin.  Arguably
route6d should be in /sbin, or NETWORKING should be split into things
needed to talk to the local link vs the broader network, but I have
avoided digging in because adding a line to rc.conf is easy, and moving
route6d would be actual work.
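
For reference, in rc.conf that amounts to something like:

  critical_filesystems_local="/var /usr"
  route6d=YES

with the route6d=YES line being the reason /usr has to be available
before the network comes up.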

Greg



