Subject: Re: /usr/pkg/etc vs. /etc
To: NetBSD Packages Technical Discussion List <tech-pkg@netbsd.org>
From: Manuel Bouyer <bouyer@antioche.lip6.fr>
List: tech-pkg
Date: 12/15/1998 15:11:08
On Dec 11, Greg A. Woods wrote
> What do you mean????  Surely you didn't expect I'd manually run the
> installs on each machine, did you?  That's what scripts and remote
> execution are for!
> 
> Either you use rdist/NFS/whatever to distribute the result of a single
> install, or you use rsh/ssh/whatever to invoke a script that does the
> installs on each machine. 

And what if a machine is down at that time? You then need to re-run the install
on that machine only; rdist avoids this.
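
Roughly, the remote-execution approach looks like the sketch below (host names,
package file and log path are invented, and it assumes the packages directory
is reachable from the clients, e.g. over NFS). A host that is down just drops
out of the loop and has to be remembered and redone by hand, which is exactly
what rdist spares me:

    #!/bin/sh
    # Hypothetical host list and binary package -- adjust to taste.
    PKG=/usr/pkgsrc/packages/All/screen-3.7.6.tgz
    for h in client1 client2 client3; do
        # Install the package on each client; if a host is down,
        # rsh fails and the miss only shows up in the log.
        rsh "$h" pkg_add "$PKG" || echo "$h: install failed" >> /tmp/install.log
    done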

> The result is mostly the same, but the latter
> mechanism allows the machine's owner to diverge if necessary or desired
> (and allowed), and I consider this last attribute to be critical.  If
> you want to do that your way then you have to copy the /var/db/pkg stuff
> to each machine too

Why? I don't copy /var/db/pkg, and all my machines run fine. Sure, I can't
use the pkg_* commands on the rdisted machines, but what's the problem?
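
For what it's worth, the distribution side amounts to an rdist Distfile
fragment like this (host names invented): /usr/pkg is pushed from the master,
and since /var/db/pkg is simply never listed, the package database only exists
on the machine where the pkg_* tools are actually run:

    # Hypothetical Distfile fragment: push the package tree to the clients.
    # /var/db/pkg is deliberately not listed, so it stays on the master only.
    CLIENTS = ( client1 client2 client3 )

    /usr/pkg -> ${CLIENTS}
            install ;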

> (and arrange for it to be on private filesystems in
> the case where sharing is done by NFS/AFS/Samba/etc.).  I think it's
> more elegant to do the installs, and if you ever get caught by a package
> that does something host-specific upon install, such as SSH in
> generating its host key pair, then my suggestion automatically gives you
> that feature for free and any file copy/share scheme forces you to do
> local hacks for each such package.

I would have to do this anyway, because I use rdist for machine-specific
config files too. This way I have all the configuration of my student room
centralised on one machine (easier to back up), and I'm sure that a machine
which is down at the time of a change will get updated at the next rdist run.
This also saves me a lot of time when I reinstall a machine after a hardware
failure.
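
As a sketch of the machine-specific part (paths and host name are invented):
the master keeps a per-host tree of config files, and a Distfile rule puts each
file back in its real place on that host, so a missed update or a reinstall is
just another rdist run:

    # Hypothetical per-host rules: the master keeps client1's files under
    # /export/config/client1; rdist installs them at their real paths.
    /export/config/client1/etc/rc.conf -> client1
            install /etc/rc.conf ;
    /export/config/client1/etc/fstab -> client1
            install /etc/fstab ;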


> I.e. why use the package system in
> the first place if you want to avoid doing the "install" step on each
> machine?????

To have precompiled binaries, or get sources that have already been tested
and are known to compile.
But I want all my machines to be identical. For this kind of setup, rdist is
much better suited than any remote command execution tool.

> 
> As for whether or not you consider the disk space taken by having local
> binary copies to be "expensive", well, if you think it is then you'll
> likely use NFS or similar anyway, so your decision is made for you.  The
> space taken by /var/db/pkg on private filesystems for diskless hosts is
> rather trivial (at least compared to having a private /usr/pkg).
> 
> My point is that if you're going to have local copies anyway, then make
> them truely private copies (i.e. unique installs per host) so that the
> package profile of hosts can be diverged if necessary and permitted to
> do so, and so that install operations unique to a host are supported.
> 

That's a different setup from mine. I'll never need a tool on a client
that I don't also need on my server. I do have tools installed on my server
that are not on my clients; these tools are installed with a different $PREFIX,
and rdist handles that just fine.
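
As a rough illustration (the prefix and package are invented, and it assumes
$PREFIX can be overridden at build time for the package in question): the
server-only tools go under their own prefix, and since that tree is never
listed in the Distfile, the clients never receive it:

    # Hypothetical: build a server-only tool under a separate prefix...
    cd /usr/pkgsrc/net/bind8 && make PREFIX=/usr/pkg-server install
    # ...while the Distfile keeps distributing only /usr/pkg, so
    # /usr/pkg-server never reaches the clients.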

Also, did you consider tools that are not in the package system?

--
Manuel Bouyer, LIP6, Universite Paris VI.           Manuel.Bouyer@lip6.fr
--