Subject: Re: 32 bit dev_t
To: Chris G. Demetriou <email@example.com>
From: Darren Reed <firstname.lastname@example.org>
Date: 01/16/1998 11:43:56
In some mail I received from Chris G. Demetriou, sie wrote
> > > There's only so much in the way of semantic freedom a given device
> > > should have in interpreting its device nodes. 'real' devices should
> > > use 'dv_unit()' to figure out which device unit is being accessed.
> > > dv_subunit() usage should be device dependent, but should be
> > > as consistent as possible within classes of devices.
> > >
> > > I'm not even so convinced that things like BPF, which BSDI allowed to
> > > slide by using the old minor() should be allowed to do so.
> > And what would you have had them do in this case instead of that ?
> I think i'd say "use dv_unit() for that."
Why not dv_subunit()? And is there really any difference?
To me, making it use dv_*unit() is a change that doesn't achieve anything
(unless we radically alter the structure used for BPF devices in the kernel)
and makes it just that much harder to import updates from LBL.
IP Filter also uses minor() in a way where it makes no real difference
(to me) whether it's unit/subunit, except that it's another change to be
dealt with when making code compatible across multiple platforms.
I'd like to see minor() stay and work with the new 32-bit dev_t's.