Subject: Re: Smarter make update / pkg_chk algo
To: Martin S. Weber <Ephaeton@gmx.net>
From: Geert Hendrickx <geert.hendrickx@ua.ac.be>
List: tech-pkg
Date: 05/04/2005 11:50:50
On Wed, May 04, 2005 at 11:16:19AM +0200, Martin S. Weber wrote:
> > And when a build fails and the old package is restored in place, you
> > could suffer from binary incompatibilities if some dependencies
> > have already been upgraded.  
> 
> If a build fails, the whole 'old' dependency tree should be
> reinstalled if you really want to go back. I've assumed you carry on
> until the fix is in and the package builds again.

IMHO, it's a *terrible* mess to upgrade part of the packages on your
running system, then see that something breaks, and then restore all
those already upgraded packages back to the older versions.  You can't
do that on a production system.  

My primary concern was to separate the _build stage_ (the long and
boring part) and the actual _upgrade stage_ (the critical part which has
to be monitored).  With or without pkg_comp.  This also makes source and
binary upgrades equivalent.  For binary upgrades you simply replace
"build" by "download".  
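As a hypothetical sketch of that separation (the pkg_comp/pkg_chk flags
are my best recollection, not a tested recipe, and the `run` helper only
prints each step so the sketch is safe to run anywhere):

```shell
#!/bin/sh
# Two-stage upgrade: do all building (or downloading) first, then
# upgrade the running system from the resulting binary packages.

# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# Stage 1: the long, boring part -- build binary packages inside a
# pkg_comp sandbox (or simply download prebuilt binaries instead).
run pkg_comp chroot pkg_chk -a -s   # build from source in the chroot

# Stage 2: the short, critical part -- upgrade the live system from
# the binary packages produced above, while you monitor it.
run pkg_chk -u -b                   # update mismatched pkgs from binaries
```

The live system is only touched in stage 2, and only with packages
that are already known to build.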

> > Using pkg_comp, you can just nuke the to-be-updated packages in the
> > chroot, and rebuild them from scratch, according to pkgchk.conf.
> > Nothing gets rebuilt twice, no advanced dependency ordering required.
> 
> Except you have to build everything, not only those packages which are
> out of date. If you have 20 outdated packages, 50 further which depend
> on them (recursively) and 250 installed, you're building 180 packages
> for nothing. 

Of course not, you only remove the outdated packages from your sandboxed
system.  pkg_chk -r does that.  Then you do pkg_chk -a to rebuild.  
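In sketch form (again hypothetical: flags as I recall them from the
pkg_chk manual, wrapped in a print-only helper rather than executed):

```shell
#!/bin/sh
# Refresh only the outdated packages in the sandbox, per pkgchk.conf.
run() { echo "+ $*"; }   # dry-run: print instead of executing

run pkg_chk -r   # remove packages whose versions no longer match pkgsrc
run pkg_chk -a   # add (i.e. rebuild) whatever is now missing
```

Up-to-date packages are never touched, so nothing is built twice.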

GH

-- 
:wq