Subject: Re: Changing order of update process
To: D'Arcy J.M. Cain <>
From: Frederick Bruckman <>
List: tech-pkg
Date: 12/25/2002 10:26:15
On Wed, 25 Dec 2002, D'Arcy J.M. Cain wrote:

> On Tuesday 24 December 2002 09:25, Thomas Klausner wrote:
> > On Tue, Dec 24, 2002 at 08:29:24AM -0500, D'Arcy J.M. Cain wrote:
> > >     deinstall --> build --> install
> > >
> > > to
> > >     build --> deinstall --> install
> >
> > The problem with this was (before buildlink2) that some packages
> > found their own headers of the installed old version and then had
> > trouble building -- I guess this is much better now _with_ buildlink2,
> > but since USE_BUILDLINK2 is not the default for all packages (yet?)
> > we are not ready to switch the order yet.

That's not really the obstacle. Even before buildlink, only a handful
of packages were affected. What's more, if a package is broken in that
way, nothing you do with "make update" fixes it -- plain "make"
with the old package installed is still going to break, and people are
still going to complain about that.

> I guess I'm a little dense this Xmas morning but I don't understand why that
> has any bearing on the order within the package.  Sure, if there is a
> dependency then you pop over to the dependent packages recursively and do
> those in the same order.  You still don't have to delete anything until you
> are ready to install.

Hmm... No, you can't. Say we're talking about "pngcrush", which depends
on "png", which has just been updated. "pngcrush" will be built against
the *installed* version of "png", so to build it properly, you need to
install the new version of "png". But you can't (presently) install a
package without first deleting it, and deleting all its dependents,
too (which is what my effort is all about), so you're forced to delete
"png", and therefore "pngcrush", before you've even tried to build it.
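To make the forced ordering concrete, here's a sketch of the sequence
"make update" is stuck with today. The pkgsrc operations are stubbed
with echo so nothing real is touched; on a real system these would be
the actual pkg_delete and build/install steps:

```shell
#!/bin/sh
# Stubs for illustration only -- real pkgsrc would delete and build here.
pkg_delete() { echo "pkg_delete $1"; }
build_and_install() { echo "build+install $1"; }

# The new png can't be installed over the old one, so the old png -- and
# everything that depends on it -- must go *before* the new png is built:
pkg_delete pngcrush         # dependent deleted first
pkg_delete png              # then the old png itself
build_and_install png       # only now is the new png built and installed
build_and_install pngcrush  # pngcrush rebuilt against the *new* png
```

If the pngcrush build then fails, you've already lost both packages --
which is exactly the window the reordering proposal is trying to shrink.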

As has been pointed out, you could "make package" to a ${DESTDIR}.
With that, you'd install nothing to your running system until you're
all done; the window of unavailability is then only as long as it
takes to do the "pkg_add"s of the resulting packages. (I've never
done this myself.) The chief disadvantage, for something like "gnome"
or "kde", seems to be that you'd have to fully populate the ${DESTDIR}
with all dependencies. Then, too, you can't actually run any of the
build tools in the ${DESTDIR}, but you might be able to get around
that by adjusting ${PATH} and ${LD_LIBRARY_PATH}.
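The environment workaround might look something like this -- the
staging path is made up for the sketch, and whether the staged tools
actually run this way is exactly the untested part:

```shell
#!/bin/sh
# Point the environment at the staged tree so binaries and shared
# libraries under ${DESTDIR} are found ahead of the live system.
# /var/tmp/destdir is an illustrative path, not a pkgsrc default.
DESTDIR=/var/tmp/destdir
PATH="${DESTDIR}/usr/pkg/bin:${PATH}"
LD_LIBRARY_PATH="${DESTDIR}/usr/pkg/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
export PATH LD_LIBRARY_PATH
```

Anything hard-coding paths at build time (rpaths, config scripts)
would still see the live system, which is why this is only a "might".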

> The worst case is that the dependent package gets built and
> installed and the one you are building fails and leaves the old one
> in the system.  That can't be any worse than losing the package
> altogether and will often still work.

If you're managing to do that, you're using some hack to install a
package on top of the old one, like ${FORCE_PACKAGE_REGISTER}. The
clear advantages of *my* hack over *that* hack are

1) All installed files remain accounted for, that is, they continue to
be registered to some package. (View with "pkg_info -F -e _file_".)

2) Assuming that you'll want to update all dependents eventually, you
can see what still needs to be done with "pkg_info -R '*-SO-*'".