Subject: Re: order of compiling/variables for -current
To: Chris G Demetriou <Chris_G_Demetriou@LAGAVULIN.PDL.CS.CMU.EDU>
From: Greg A. Woods <woods@kuma.web.net>
List: current-users
Date: 03/02/1995 13:50:30
[ On Thu, March  2, 1995 at 11:01:26 (-0500), Chris G Demetriou wrote: ]
> Subject: Re: order of compiling/variables for -current 
>
> Because the set of steps necessary changes every so often, and can be
> different on every architecture, mostly.
> 
> You'd need a lot of "right pseudo-targets" to get it right for all
> ports and all "last updated" dates.  Rather than do this, we say "stay
> reasonably current, and read the mailing list archives."

Ah, but that's *exactly* why such information should be encoded directly
in the release.  This is, after all, one of the things that good
configuration management is all about!

The various twiddles for different ports can (and should) be handled
completely transparently by make, since it obviously knows all of that
information before it starts to build something (else it couldn't
configure and build the system in the first place).  Of course the
targets and dependencies might get a wee bit complex once a full cross
development scheme exists.
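
For example, a fragment along these lines in one of the shared .mk
files could fold the per-port fixups right into the normal targets.
This is only a sketch -- the variable and target names below are made
up for illustration, not what's actually in the tree:

BEFORE_BUILD=			# nothing extra by default
.if ${MACHINE} == "sparc"
BEFORE_BUILD+=	rebuild-csu	# e.g. this port needs its startup code first
.endif

beforebuild: ${BEFORE_BUILD}

rebuild-csu:
	cd ${.CURDIR}/lib/csu && ${MAKE} depend && ${MAKE} all install

Since make already knows ${MACHINE}, the right steps get picked up
automatically, without anyone having to dig out a "how to get from
date X to date Y" recipe from the mailing list archives.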

As for "last updated" dates, well I may have been partially responding
the wrong question, and making some assumptions about what should really
be in the release.

If you're building -current, then you should have everything "current",
and assuming the developers who hack the tree haven't forgotten anything
in the CM process, then all should build right; and if they have
forgotten something, then hopefully it'll be remembered for tomorrow's
"current", and you're stuck with either hacking a fix for tonight, or
waiting until the official fix is in.

[[ This latter bit of "philosophy" is sort of what -current was based
on, I thought. ]]

In general though, it *should* be the case that there's a fixed
procedure that'll build a complete working system and permit it to be
installed in DESTDIR (or as AT&T used to call it, ROOT), so long as the
appropriate steps are followed (i.e. build a cross-compiler from the new
release, install it where it can be used (re-build it with itself and
re-install), then using the new cross-compiler, build the new system in
the logical order).
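
In rough outline, something like the following (purely a sketch from
memory -- the actual directory names and top-level targets may well be
spelled differently in the tree):

	# 1) build and install a cross-compiler from the new sources
	#    (paths and targets here are only illustrative)
	cd /usr/src/gnu/usr.bin/gcc && make && make install
	# 2) re-build the compiler with itself, and re-install it
	make clean && make && make install
	# 3) build the whole tree with it, installing into DESTDIR
	cd /usr/src && make build DESTDIR=/usr/new-root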

Now ideally such a procedure should have no dependencies on the current
system, assuming the compilation environment can be configured, built,
and used on the current system.  Making assumptions about the
current system's compilation environment, or even kernel capabilities,
should not be done except under the tightest of disk space constraints, in
which case I'll certainly agree that manual procedures, such as those
which started this thread, will always be necessary.  Fortunately GCC
already has full multi-platform cross-compilation capabilities, so the
hardest part of this job has already been done -- it just has to be
integrated and used.
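
Just to illustrate how little is missing on that front (the exact
target name, prefix, and the assembler/linker details are assumptions
on my part, not a recipe):

	# configure and build GCC as a cross-compiler on the current system
	./configure --target=i386-netbsd --prefix=/usr/local/cross
	make && make install
	# the tree would then be built with something like
	#   CC=/usr/local/cross/bin/i386-netbsd-gcc

The remaining work is mostly teaching the system makefiles to use such
a compiler (and the matching assembler and linker) consistently.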

Naturally once all of the above is true, then it's a much smaller step
to being able to do full cross-platform configuration and builds of the
entire O/S tree.  (I.e. load the source on that blindingly fast DEC
Alpha running OSF/1 at the office, and build i386 (or any other)
binaries with it, even though the boss won't let you install NetBSD on
the Alpha itself [and there are gigabytes of disk space, so he'll never
notice the wee bits of NetBSD source!].)

-- 
						Greg A. Woods

+1 416 443-1734			VE3TCP		robohack!woods
Planix, Inc. <woods@planix.com>; UniForum Canada <woods@uniforum.ca>