Subject: Re: sandbox builds + Re: Removing All Packages
To: None <>
From: D'Arcy J.M. Cain <>
List: tech-pkg
Date: 11/06/2004 10:47:05
On Sat, 6 Nov 2004 09:44:18 -0500
Douglas Wade Needham <> wrote:
> Quoting D'Arcy J.M. Cain (
> > On Fri, 29 Oct 2004 14:00:49 +0700
> > By the way, once in a while I like to make sure that I have a clean
> > installation of pkgsrc but I can't necessarily wipe them all out
> > while I rebuild.  KDE and OpenOffice take days just by themselves on
> > some machines.  What I do is like above except that I just wipe out
> > the database.  The steps I do are:
> > 
> > 1. rm -rf /var/db/pkg
> > 2. Build the packages.  I have a script that builds the ones I want.
> > 3. Remove any old files from pkg using find(1)
> > 4. rm -rf /var/db/pkg again
> > 5. Build packages again in case there are some older files installed
> > by tar(1).
> > 
> > I may lose a few files at step 3 temporarily, but for the most part
> > I am never without a working system while I do this.  The only
> > problem I have is that some packages fail when their files already
> > exist, because they assume a fresh installation and don't expect to
> > find anything in place.  I just handle those manually.  One of
> > these days I will add a little post-build step to handle those
> > cases.
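The cycle quoted above might be sketched as a shell script.  This is a dry run only: it prints the commands instead of executing them, and PKGDB, PKGSRC and the package list are assumptions to adapt to your own setup.

```shell
#!/bin/sh
# Dry-run sketch of the rebuild cycle described above.  PKGDB, PKGSRC
# and PKGLIST are assumptions; adjust them for your own setup.
PKGDB=/var/db/pkg
PKGSRC=/usr/pkgsrc
PKGLIST="editors/vim www/apache"   # hypothetical package list

run() { echo "+ $*"; }             # swap the echo for "$@" to execute

run rm -rf "$PKGDB"                          # 1. wipe the package database
for pkg in $PKGLIST; do                      # 2. rebuild the packages you want
    run make -C "$PKGSRC/$pkg" install clean
done
run find /usr/pkg -type f ! -newer "$PKGDB" -print  # 3. list stale files
run rm -rf "$PKGDB"                          # 4. wipe the database again
for pkg in $PKGLIST; do                      # 5. rebuild once more to replace
    run make -C "$PKGSRC/$pkg" install clean #    files tar(1) left with old
done                                         #    timestamps
```

Because the old files stay on disk between steps, the machine keeps working throughout; only the package database is ever discarded.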
> D'Arcy, I have to wonder why you are going through all this
> risk/hassle.  I have been using a technique pretty much unchanged for
> around a decade now, and IMO it works great.  And it has an added
> advantage when doing more than one machine which has the same SW
> installation.  The solution is to build things in a sandbox, then use
> rdist to push the new stuff into place.  The only real hassle is
> developing the list of files to exclude, and double checking that list
> when doing a major upgrade.  But I have used it to do upgrades on
> machines actively handling traffic, such as my firewall, my NFS server
> and my bastion host, and never had a problem, other than the time I
> forgot to set net.inet.ip.forwarding to 1 on my firewall. ;)
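The sandbox-then-push scheme might look something like this in an rdist Distfile; the host names, the pushed directories, and the except list are placeholders, and as noted above, curating that except list is where the real work lies.

```
# Hypothetical Distfile: push the sandbox's installed tree to the
# production hosts, protecting locally edited configuration.
HOSTS = ( web1 web2 root@firewall )
FILES = ( /usr/pkg /var/db/pkg )

${FILES} -> ${HOSTS}
        install ;
        except /usr/pkg/etc ;      # keep per-host config untouched
```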

I'm not so sure that we are that far apart in our ideas.  While I have
dozens rather than hundreds of servers, I have many of the same issues,
including multiple data centres.  I just treat one machine, my
"staging" machine, as my sandbox.  If disaster strikes that box I am
not dead; I just have to rebuild it.

> - Doing an rdist to a large number of machines can eat your network
>   resources and does take time.  This is a larger problem when you
>   are dealing with 1200+ machines, several hundred of them in
>   locations like Munich, London and Paris, while you are in Columbus.
>   The rdist utility helps some, but it could be improved.

I fix this by having secondary staging machines in the remote centres.
My promotion script syncs up the local servers plus just one server in
each remote centre; that server is then used to sync the rest of its
own centre.
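That two-tier fan-out could be sketched as below.  The host names and Distfile names are made up, and the commands are echoed rather than run; drop the echoes to actually push.

```shell
#!/bin/sh
# Dry-run sketch of a two-tier rdist promotion: host names and
# Distfile names are hypothetical.
RDIST="echo rdist"   # drop the echo to actually push

# Tier 1: sync the local servers plus one staging host per remote centre.
$RDIST -f Distfile.local
$RDIST -f Distfile.remote-staging

# Tier 2: each remote staging host fans out within its own centre,
# keeping the bulk of the traffic off the wide-area links.
for stage in stage-muc stage-lon stage-par; do
    echo ssh "$stage" rdist -f Distfile.centre
done
```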

D'Arcy J.M. Cain <>