Subject: Re: where should stand-alone daemon binaries really live?
To: NetBSD Userlevel Technical Discussion List <firstname.lastname@example.org>
From: Greg A. Woods <email@example.com>
Date: 08/07/2000 14:54:29
[ On Monday, August 7, 2000 at 11:17:02 (-0700), Greywolf wrote: ]
> Subject: Re: where should stand-alone daemon binaries really live?
> Please don't look to cripple the sysadmins who do know what they are
> doing.
Now *THAT* is a straw-man argument! If a sys-admin knows what he or she
is really doing then they'll know how to type "/usr/libexec/inetd -l"
when they've accidentally killed it off and have to restart it! If they
don't know that then they shouldn't have the keys to the system!
> # The technical reason for this is that at least some of these daemons
> # often, or even normally, have command-line parameters without which they
> # will not function as required for a normally running production system
> # (such as '-l' for inetd itself).
> inetd does not require -l to run.
Yes, it sure as hell does in "a normally running production system" that
has defined it to run that way! The logging is a *requirement* and
while this is a rather academic argument, it rapidly becomes a very
real-world one when you mix a dozen more complex daemons and a shared
sys-admin position into the pie.
> Sigh. Looks like I get to add /usr/libexec to my $PATH. I was hoping
> to avoid that.
That would be possible, of course, but IMHO rather WRONG! ;-)
The correct solution for non-touch-typists and lazy folk would be to add
/etc/rc.d to their PATH.
> I think you're looking at the picture from a different angle than
> most of us see it. Well, I, at least (sorry if I misspeak for most
> of you :-) see it quite differently and less arbitrarily than you do.
> It's not just "running from inetd" but "normally reserved for being
> started by another program in general", whether it is inetd or anything
> else that might want to do that. The programs in /usr/libexec are
> also more or less one-off daemons that aren't going to stick around
> once their function is finished -- getty exits (or execs its way out)
> once its function is finished and then is restarted to wait for another
> connection, for example. The rest are started on demand, rather than
> being always active.
Technically there's absolutely no difference between starting getty from
init and starting it from a command-line shell. It still does exactly
the same things.
Technically a program like sshd is really two programs mushed together
(inetd and sshd) as a (very valuable -- that's why it was done and
hasn't been undone since) performance hack. Perhaps named doesn't quite
fit this model, though in a demonstration system the database used by a
named cache would possibly be isolated in a separate module, perhaps
even as an on-disk persistent database, and a separate program would be
started to handle every incoming lookup request.
Technically the only difference between starting telnetd by hand and
starting it from inetd is that in the latter case it assumes its std*
file descriptors are connected to a socket that's ultimately connected
to a client telnet program on the "remote" end. The additional code to
create a socket, bind it to a local address & port, and listen for and
accept a single connection is already in telnetd, and could just as
easily be added to any of the single-instance daemons. A program like
netcat could also easily be modified to provide these services. Even
without that you can sometimes re-use your existing telnet connection to
restart some other daemon and thus fool it into thinking it was started
from inetd.
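For the curious, the "additional code" being talked about is tiny. This
is not actual inetd or telnetd source -- just a minimal sketch of the
create/bind/listen/accept-one-connection step, where the port number and
daemon path are placeholders supplied by the caller:

```c
/* Sketch: accept ONE connection on `port` and exec `daemon_path`
 * with the connected socket as its std* descriptors -- exactly the
 * setup a daemon started from inetd expects to inherit. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int
serve_one(unsigned short port, const char *daemon_path)
{
	int lfd, cfd, on = 1;
	struct sockaddr_in sin;

	if ((lfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
		return -1;
	setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

	memset(&sin, 0, sizeof sin);
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	sin.sin_port = htons(port);

	if (bind(lfd, (struct sockaddr *)&sin, sizeof sin) < 0 ||
	    listen(lfd, 1) < 0 ||
	    (cfd = accept(lfd, NULL, NULL)) < 0)
		return -1;

	/* make the connected socket stdin/stdout/stderr, as inetd
	 * does before exec'ing a service */
	dup2(cfd, STDIN_FILENO);
	dup2(cfd, STDOUT_FILENO);
	dup2(cfd, STDERR_FILENO);
	close(lfd);
	close(cfd);

	execl(daemon_path, daemon_path, (char *)NULL);
	return -1;		/* only reached if exec failed */
}
```

Wrap any single-instance daemon in that and it neither knows nor cares
whether inetd was ever involved.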
All I'm saying is that even inetd itself is really just a performance
hack of a different nature.
> Looking at it from an admin point of view, it actually declutters /usr/sbin
> by quite a bit, something which I find rather attractive.
As would moving all the other *d binaries that should not normally ever
be started by hand! ;-)
For those of us who use command and filename completion in our
interactive shells this would have many benefits!
> All you pundits who are advocating this change are missing the point:
> "It isn't broken. Don't fix it."
but it *is* broken! ;-)
There should only ever really be one truly stand-alone daemon outside of
the kernel. Maybe it starts several unrelated daemons that each do much
the same as it does for their own subsystems, but they are still
controlled directly by the ultimate master daemon.
What's easier: a) waiting for a child process to die and restarting it
if it should be restarted; or b) writing a program that searches the
process table looking for the expected parent of one or more daemons and
restarting them if they're not found?
What's "smarter", (a) or (b) above, with respect to predictability and
control? I.e. which is easier to control and harder to get out of
control on a per-service basis?
Which is more reliable?
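To make the comparison concrete, option (a) is a few lines of entirely
ordinary code. A hedged sketch (the bounded run count is purely so the
loop terminates for illustration -- a real master daemon would loop
forever, inspect the exit status, and back off between respawns):

```c
/* Sketch of option (a): fork the service, block in waitpid() until
 * it dies, and restart it.  `max_runs` bounds the loop for the sake
 * of the example only. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
supervise(const char *path, char *const argv[], int max_runs)
{
	int runs;

	for (runs = 0; runs < max_runs; runs++) {
		pid_t pid = fork();
		if (pid < 0)
			return -1;
		if (pid == 0) {
			execv(path, argv);
			_exit(127);	/* exec failed */
		}
		if (waitpid(pid, NULL, 0) < 0)
			return -1;
		/* a real master daemon would examine the exit status
		 * here, decide whether the service "should be
		 * restarted", and sleep to avoid a tight respawn loop */
	}
	return runs;
}
```

Option (b), by contrast, means polling the process table on some timer,
guessing which processes "belong" to which service, and hoping nothing
changes its name or re-parents itself in between scans.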
Greg A. Woods
+1 416 218-0098 VE3TCP <firstname.lastname@example.org> <robohack!woods>
Planix, Inc. <email@example.com>; Secrets of the Weird <firstname.lastname@example.org>