Subject: Re: sendmail licensing again
To: NetBSD-current Discussion List <current-users@netbsd.org>
From: Greg A. Woods <woods@most.weird.com>
List: current-users
Date: 12/11/1998 15:30:20
[ On Fri, December 11, 1998 at 13:12:09 (-0500), Todd Vierling wrote: ]
> Subject: Re: sendmail licensing again 
>
> `Your opinion.'  I've found that many (most) developers find it far more
> elegant than frobbing sources to make them look different from a `vanilla'
> distribution.

I wouldn't feel right calling such people "developers".  They're more
likely just "junior programmers".

I don't mean this as a flame or as name-calling (there's nothing wrong
with being a "junior programmer", after all), but *I* would expect
"developers" to be able to handle the implications of integrating a
piece of software into a larger system, including the differences
between the original package and the integrated one.  (Especially when
said "developers" are actually working on the larger system in general,
and not just on the contributed package.)

99.9% of the changes caused by the *2netbsd scripts SHOULD be just
copying things from one place to another.  Any real code changes should
still be done via a CVS merge.  (Though there is an argument that might
favour using a patch file a la pkgsrc, but let's not cloud the issue.)
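
(To make that concrete: a *2netbsd script is typically little more than
a pile of copies, roughly like the sketch below.  The names and paths
here are invented for illustration, not lifted from any real script.)

    #!/bin/sh
    # sketch of a hypothetical sendmail2netbsd; illustrative only
    DIST=${1}     # unpacked virgin distribution
    PROTO=${2}    # prototype tree laid out the way NetBSD wants it

    mkdir -p ${PROTO}/usr.sbin/sendmail
    # 99.9% of the "changes": copying files into the expected places
    cp ${DIST}/src/*.[ch] ${PROTO}/usr.sbin/sendmail/
    cp ${DIST}/src/sendmail.8 ${PROTO}/usr.sbin/sendmail/
    # no sources are edited here; real code changes still come in
    # later via the normal CVS merge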

The only "tricky" part might be taking the changes and trying to feed
them back to the original author, but that's going to be tricky anyway
if there are a mix of changes not appropriate for the original
distribution (eg. conversion to use of non-portable error functions,
etc. as is common in some NetBSD integrations).
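
(The mechanical half of that is easy enough.  With hypothetical paths,
something like the following produces a raw patch against the pristine
distribution; the hard part is weeding the NetBSD-only bits out of it
by hand.)

    # hypothetical paths: unpacked pristine distribution on the left,
    # integrated NetBSD sources on the right
    diff -ru /tmp/sendmail-8.9.1/src /usr/src/usr.sbin/sendmail \
        > /tmp/sendmail-feedback.diff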

Meanwhile there's nothing elegant about having to go poking through a
makefile to find out where the real sources are, pointing your debugger
there, editing the real files over there, and doing your commits over
there; and if you're sending your object files to yet another
directory, things can get really hairy really quickly.  Even if every
such directory came with a gdb startup file, and some real internals
documentation to say how and why things work, it still wouldn't be very
elegant.
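
(About the best you can do today is tell gdb by hand where the real
sources live, either on the command line or in a per-directory startup
file.  Something along these lines, with the directory names made up
for the example:)

    # hypothetical reachover layout: objects and makefile here,
    # real sources off in the dist directory
    cd /usr/src/usr.sbin/sendmail
    gdb -d /usr/src/dist/sendmail/src ./sendmail

    # or drop the equivalent into a gdb startup file for the directory:
    echo "directory /usr/src/dist/sendmail/src" > .gdbinit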

> : I did a lot of work upgrading all the stuff in /usr/src/contrib
> 
> We don't have a contrib, because we don't want to pull in such gargantuan
> things as perl; that's what pkgsrc is for.

It's called an "example".  I used it to provide empirical evidence that
the scheme does NOT scale well (there's a many-to-1 correspondence
between maintenance effort and imported packages, and the added
maintencance effort can extend to many developers and even third-party
developers when they want to do upgrades), where as the pre-import
"conversion" (or shuffle) scripts do scale much better (they have a
1-to-1 correspondence between packages and maintenance effort, and in
the majority of cases only one developer need be involved).

Having already used "*2netbsd" scripts, I am well aware of their
capabilities, and I can clearly see how they work better than VPATHing
from the contrib/dist/whatever directory, at least from a maintainer's
point of view.

> : The *2netbsd scripts are indeed, by evidence of hard-won experience, the
> : most elegant way of actually integrating a piece of contributed software
> : into a source tree,
> 
> Again, `your opinion.'

Well, if you don't want to accept my experience as fact, then that's
your prerogative....

I originally didn't like the *2netbsd scripts either, but I've learned
that they're better than any other pragmatic alternative I could find.

(Non-pragmatic alternatives that are better include using something
other than CVS that can track file renames and such.)

> : Sometimes such scripts aren't even necessary -- just import, merge, and
> : go.
> 
> The complete annoyance with these scripts stems from the fact that they do
> patching/changing of files *before* they get imported.  The point of
> reachover builds is that you can Just Import the sources, and if a new file
> appears in the distrib, just add it to SRCS= in the makefile.  (Both systems
> require a check of the config/#define options necessary to compile the
> package, so that is no different.)

So "just import" the sources into the dist directory, *then* run the
"*2netbsd" script that copies all the files into the right places in a
prototype directory, then re-run the import in the prototype directory,
and finally do the merge in the integrated tree.
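
In CVS terms that would look roughly like this (the module, script, and
tag names are all invented for the example):

    # 1. import the virgin sources into the dist directory
    cd /tmp/sendmail-8.9.1
    cvs import src/dist/sendmail SENDMAIL sendmail-8_9_1

    # 2. run the *2netbsd script to populate a prototype tree
    sh sendmail2netbsd /tmp/sendmail-8.9.1 /tmp/sendmail-proto

    # 3. re-run the import from the prototype directory, onto the
    #    vendor branch of the integrated location
    cd /tmp/sendmail-proto
    cvs import src/usr.sbin/sendmail SENDMAIL sendmail-8_9_1

    # 4. merge the new vendor release into the working tree, between
    #    the previous and new release tags
    cd /usr/src/usr.sbin/sendmail
    cvs update -j sendmail-8_8_8 -j sendmail-8_9_1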

But I think that's being too anally retentive about keeping things
exactly the way they were contributed.

The *2netbsd script is a part of the maintenance process.  It is no
different logically than the merge done after the import, or even the
act of un-tarring the contributed package.  (That's why the *2netbsd
scripts must be included in the source tree, just like the makefiles.)

You *could* just do the import and then rename files during the local
merge phase, but that's not very well supported by CVS, *and* it hides a
critical part of the process and makes it more difficult to track what's
going on, especially for those without direct access to the CVS
repository.
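
(A "rename" in CVS really amounts to the following, with the file names
made up for the example; the revision history stays behind under the
old name:)

    mv err.c error.c
    cvs remove err.c
    cvs add error.c
    cvs commit -m "rename err.c to error.c" err.c error.c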

> : Segregation of source by copyright doesn't necessarily break the
> : implications of a recursive make system, but using VPATH mechanisms to
> : pull sources into a build directory certainly does.
> 
> Not bloody likely.  It has been working since the early days of NetBSD; what
> broke now?

Please re-read what I wrote.  I am claiming that VPATH mechanisms break
the *implications* of a recursive make system.

Remember, one of the biggest reasons the Unix build system has always
used a recursive make scheme is that VPATH mechanisms didn't exist, and
you pretty well had to run make in the directory where the sources
lived if you didn't want your makefile to get really hairy.  (There's
of course the other implication I mentioned, which is that you can run
make at any level, and possibly the implication that you can copy just
that directory to some other similar machine and run make in it.)  Many
other tools work based on this assumption, and so far as I know it's
not currently possible to get gdb, for example, to parse a PMake file
and learn where it should find the sources, objects, and so on (or even
to ask make to tell it where these things are).  Not to say it can't be
done, just that it doesn't work that way now, and there's really no
good reason in the normal Unix build system to do things that way
anyway (except in the *BSDs, in this relatively tiny portion of the
contributed sources).
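
(The "run make at any level" implication, just to spell it out, looks
like this in practice; the directory is picked purely as an example:)

    cd /usr/src/usr.bin/grep    # any subdirectory of the tree
    make                        # builds just that program, in place
    make install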

-- 
							Greg A. Woods

+1 416 218-0098      VE3TCP      <gwoods@acm.org>      <robohack!woods>
Planix, Inc. <woods@planix.com>; Secrets of the Weird <woods@weird.com>