Subject: Re: devfs, was Re: ptyfs fully working now...
To: Chapman Flack <>
From: Bill Studenmund <>
List: tech-kern
Date: 11/19/2004 15:12:52

On Fri, Nov 19, 2004 at 03:40:18PM -0500, Chapman Flack wrote:
> Bill Studenmund wrote:
> > I think it would be messier in two ways. 1) you have two file systems
> > involved. So you have up to two vnodes involved in everything. And you
> > have three classes of vnodes: devfs-only vnodes with no underlying node,
> devfs device nodes with an underlying node, and devfs other-nodes with an
> underlying vnode. While an overlay file system can do that, it's messy.
> Point taken, but there are choices in implementation.  One choice is to

Well, we are discussing implementation options. So comments about
implementation issues seem appropriate. :-)
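
For concreteness, the three classes from my earlier mail could be tagged
along these lines (a sketch only; the names are made up, not real code):

	/* The three node classes an overlay devfs has to juggle. */
	enum devfs_node_class {
		DEVFS_PURE,		/* devfs-only, no underlying node */
		DEVFS_DEV_SHADOWED,	/* device node with an underlying node */
		DEVFS_OTHER_SHADOWED	/* other node with an underlying vnode */
	};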

> scan the persistent/disk entries once at mount time, use the information
> in synthesizing the devfs nodes, and thereafter have only one kind of
> devfs node that you care about; they would look just the same as if you
> had scanned a binary file once at mount time and used that information in
> synthesizing the devfs nodes.  The only other place you care about the
> persistent/disk entries is all encapsulated within a persistUpdate()
> function that you call in devfs after applying a chmod/chown/etc to
> your own devfs node.  In the file-based schemes, persistUpdate writes
> something into the file instead, or tickles a daemon to do it.  So the
> file or the underlay fs are really just two representations for the
> persistent data; looking at the code you shouldn't be able to tell which
> is being used unless you look inside persistScan and persistUpdate to see
> what they do.

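For the record, I read that interface as something like this (a rough
sketch; only the two function names come from your mail, the structs and
argument types are made up):

	#include <sys/types.h>

	struct devfs_mount;		/* per-mount state (made up) */

	struct devfs_node {		/* in-core devfs node (made up) */
		char	dn_name[64];
		mode_t	dn_mode;
		uid_t	dn_uid;
		gid_t	dn_gid;
	};

	/* At mount time: read the persistent store (file or underlay
	 * fs) once and synthesize the in-core devfs nodes from it. */
	int	persistScan(struct devfs_mount *);

	/* After applying chmod/chown/etc. to a devfs node: push the
	 * change back to whatever representation is in use. */
	int	persistUpdate(struct devfs_mount *, struct devfs_node *);
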
Actually, your comment made me realize one issue with the different
representations. As we support more & more hot-swap devices, over time, we
may end up with multiple "devices" (say wedges/partitions on mobile disks)
with the same name. While obviously we can't have them active at the same
time, the fact that one wedge named "foo" was readable by user Bob doesn't
necessarily mean that every wedge named "foo" should be.

Obviously we'd need to be careful about how we handle this case so that
the system stays manageable (so the admin stays sane). But if we use name
as a primary key, which we'd do with the nodes-in-real-fs case and in the
common mtree case, we can't handle this at all.
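
To make that concrete: whatever the backing store is, each persistent
record would want a stable device identity next to the name, something
like this (sketch only; the field names and the 16-byte id are made-up
placeholders, e.g. a wedge's unique id):

	#include <sys/types.h>
	#include <stdint.h>

	struct devfs_persist_entry {
		char	dpe_name[64];	/* node name; NOT the key */
		uint8_t	dpe_devid[16];	/* stable identity (e.g. a wedge's
					 * unique id); the real key */
		mode_t	dpe_mode;
		uid_t	dpe_uid;
		gid_t	dpe_gid;
	};

Keyed by name, the two "foo" wedges collapse into one record; keyed by
identity they stay distinct.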

Take care,

Bill