Subject: Re: Installation of additional software
To: None <current-users@NetBSD.ORG>
From: Mark Gooderum <>
List: current-users
Date: 09/14/1994 10:34:55
Some quick comments....

First, people seem to be mixing up two issues.  One is how to bundle
and install software, particularly in such a way that it can be easily
installed, noted, and deinstalled.

Second is how to manage filesystem trees and program locations
in a multiple architecture/OS etc. environment.

They really are two separate issues...

I think the first issue is the one that should be addressed, and it should
be done in such a way that is flexible enough that Joe Systemdude can 
do #2.  This is because every site that has the #2 problem has solved it
differently, and they won't change the way they do it just to be able
to use XYZ's operating system install package neatly.

I came from an environment where we ran a fairly uniform setup across
seven different Unixes.  Suffice to say our filesystem tree setup was
pretty complex, although not nearly as much as larger sites I've visited.

The point of all this is that we couldn't really change the way we did
things just to make "pkgadd" happy, for instance.

In general sites tend to install either one package per dir/tree (a la
/usr/local/wp51, /usr/local/frame, etc.) or everything in one tree (a la
/usr/local), especially free software.  The most common reason I hear for
separate dirs is either A.) the product install won't support anything
else, common for commercial software, or B.) they want an easy way to
clean up or identify a package.

The pkg*** stuff solves B nicely.  I haven't used FreeBSD's, but Solaris'
is pretty nice.  You can install, uninstall, back out, patch, etc. and know
what's been done.  Shared files get a reference count, and there is
even dependency tracking.  The major downsides are that the key information
is kept in one *huge* text file, which is extremely subject to corruption
or becoming out of date, and is slow to update.  For instance, the package
tools will barf and refuse to update, uninstall, or otherwise
manipulate a package if the contents don't jibe with the history perfectly.
The other problem is it doesn't deal well with server/client packages, where
you may have a large package with several common files that go on the
server but a few config files that change on the client.

I think the key requirements of any binary package system are:

	install is easy, flexible, and verifiable
		-i.e. support for installing into any tree, separate or
		 shared, with maybe an (optional) arch/arch-independent split
	tracking is robust and prevents corruption, or at least can punt
		if things get messed up
	uninstall is easy
	installed packages can be identified
	a file can be easily identified with a given package
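The last three requirements above basically fall out of keeping a
per-package file manifest.  A rough sketch of the idea in shell (all
paths and the package name are invented for illustration):

```shell
#!/bin/sh
# Hypothetical sketch of per-package manifest tracking; PREFIX, PKGDB,
# and the package name are all invented stand-ins for real locations.
PREFIX=./local             # stand-in for /usr/local
PKGDB=./pkgdb              # stand-in for the tracking database dir
PKG=frobnitz-1.0           # example package name

# fake a staging area just for this demo
mkdir -p staging/bin staging/man
echo demo > staging/bin/frob
echo page > staging/man/frob.1

# install: copy each file and record it in the package's manifest
mkdir -p "$PKGDB/$PKG" "$PREFIX/bin" "$PREFIX/man"
for f in bin/frob man/frob.1; do
    cp "staging/$f" "$PREFIX/$f"
    echo "$PREFIX/$f" >> "$PKGDB/$PKG/MANIFEST"
done

# which package owns a file?  just grep the manifests
grep -l "^$PREFIX/bin/frob\$" "$PKGDB"/*/MANIFEST

# uninstall: remove exactly the files the manifest lists, then the record
while read -r f; do
    rm -f "$f"
done < "$PKGDB/$PKG/MANIFEST"
rm -r "$PKGDB/$PKG"
```

The point is that install, uninstall, and file-to-package lookup all
read the same small per-package file, instead of one huge shared
database that can rot.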

Other pluses that are nice icing but I don't think are essential:

	post-install/post-deinstall scripts for needed configuration/
		customization (or unXXX)
	verification - compare a package bundle against an installed package,
		useful for security and recovery reasons (which files did
		I lose in that disk crash or site break-in), also for which
		files did I need to change for customization
	reconstruction of package tracking info...given the install media/
		package and the same input to the install package program, 
		it should be possible to reconstruct the tracking info of
		an already installed package
	server versus client side packages (maybe with an update to a server
		side config that tracks which clients depend on the server)
	file tracking supports the concept of immutable files versus those
		that may change (so for instance it will complain bitterly
		if a binary changes, but maybe warns and offers to make
		a backup if a config file changes and is being overwritten)
	patching support - ability to update/add files and maybe save originals
	shared file reference count - ability to share files between packages
	package dependency tracking (needs to include versioning to be
		useful)
	versioning of packages
	arch versus arch independant parts support
	moving - move an installed package to a new location with local
		customization intact and tracking updated
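The verification and immutable-versus-changeable-file items both come
down to recording checksums at install time and comparing later.  A
minimal sketch, with invented file and directory names:

```shell
#!/bin/sh
# Hypothetical sketch of install-time checksum recording; the package
# name "demo-1.0" and all paths are invented for illustration.
mkdir -p pkgdb/demo-1.0 files
printf 'v1\n' > files/prog.conf

# at install time: record a checksum for each tracked file
cksum files/prog.conf > pkgdb/demo-1.0/CHECKSUMS

# later: a local change happens (e.g. the admin edits a config file)
printf 'v2\n' > files/prog.conf

# verify: report files whose checksum no longer matches the record;
# a real tool would complain hard for binaries, softly for config files
while read -r sum size name; do
    now=$(cksum "$name" | awk '{print $1}')
    if [ "$now" != "$sum" ]; then
        echo "MODIFIED: $name"
    fi
done < pkgdb/demo-1.0/CHECKSUMS
```

The same recorded checksums answer the recovery question ("which files
did I lose or have trashed?") and the customization question ("which
files did I deliberately change?").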

The Solaris pkgadd package does most of these except moving, tracking-info
robustness, server/client tracking, and immutable/changeable files.

I don't know much about the FreeBSD pkg stuff...

Note that all of this is from the point of view of the user of a binary
package.  There are separate issues for the developer's tools that create
a package.

For sources, I'm very much of the school of getting changes for native
NetBSD support incorporated into the package itself.  I'm also opposed to
torturing the build process into the BSD Makefile format; this is an
especially big loss for autoconf-based packages.

What would be useful is a set of BSD -> Native source "bridge" makefiles.
These would exist in the parent dir of the package (to avoid makefile
name collision).  The parent makefile would include a subdir-specific
makefile that would then invoke the right make (many packages, like many
Imake makefiles and some autoconf-built makefiles, won't work with BSD make;
they need GNU make) or other command on the appropriate target in the
subdirectory.

The three "templates" I can think of for this would be: a minimal
common makefile (that just passes a make, make install, make clean type set
of make actions through to the subdir makefile), plus one for autoconf-based
packages and one for Imake-based packages.  The latter two would be more
complex and include a pre-make target that would run xmkmf or configure
as appropriate (getting the needed arguments from a common config, so that
you could globally specify a --prefix and --exec-prefix, for instance), and
other bits (like a make distclean and re-configure if you're building a
different arch).  Note that the autoconf-based version could support multiple
arches w/o reconfiguring, using arch-specific obj dirs (sound familiar?) and
the --srcdir option to configure.
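A bridge makefile for the autoconf case might look something like this
(a sketch only; the package name, PREFIX, and GMAKE variables are all
invented, and the shared-config include is hand-waved):

```make
# Hypothetical bridge Makefile living in the parent dir of an
# autoconf-based package, so it doesn't collide with the package's own
# Makefile.  A real version would pull PREFIX etc. from a common config.
PKGDIR=	hello-1.0
PREFIX=	/usr/local
GMAKE=	gmake		# many packages need GNU make, not BSD make

all: ${PKGDIR}/Makefile
	cd ${PKGDIR} && ${GMAKE}

# pre-make step: run configure with the globally chosen arguments
${PKGDIR}/Makefile:
	cd ${PKGDIR} && ./configure --prefix=${PREFIX}

install: all
	cd ${PKGDIR} && ${GMAKE} install

clean:
	cd ${PKGDIR} && ${GMAKE} distclean
```

The package's own build machinery stays untouched; the bridge just
translates the standard BSD-style targets into whatever the package
actually wants.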

Well, nuff said for now,