Subject: Re: Kernel compile 100 times faster (this sounds provocative, doesn't it? :)
To: None <current-users@NetBSD.ORG>
From: der Mouse <mouse@Collatz.McRCIM.McGill.EDU>
List: current-users
Date: 02/02/1996 13:56:37
> When compiling a -current kernel on a 030/25 MHz machine, this can take many hours.
> Most of the time is wasted in including the same files.

What basis do you have for this statement?  In my experience, the
preprocessor is a relatively small fraction of the total time for a
compile.  (I usually run the compiler in a way that, among other
things, lets me see when each phase of the compilation begins.)

Now mind you, the kernel is perhaps not a typical compile.  But I still
find a statement that "most" of the time is spent preprocessing to need
supporting evidence.  I am quite certain the preprocessor is nowhere
near 99% of the time (which it would have to be for your Subject: to be
accurate).

> I imagine speeding this up by joining all C sources with a script used
> instead of gcc [...] and catting them into one file.  After this,
> multiple includes should be removed and the whole file fed to gcc.

> Are there any very important restrictions that don't allow doing this?
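
Concretely, the proposal amounts to compiling one big translation
unit; a minimal sketch (the file list here is made up, not the real
kernel source list, which would come from the config):

	/* all.c -- generated by the script in place of running gcc
	 * once per source file */
	#include "kern_exec.c"
	#include "kern_fork.c"
	#include "vfs_subr.c"
	/* ...and so on for every .c file in the configuration... */

A single "gcc -c all.c" then reads each shared header once (assuming
the headers are guarded against multiple inclusion) instead of once
per source file.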

I don't know, never having tried it, but here are two of the things
that come to mind:

- Suddenly, "static" no longer restricts a declaration's visibility to
   just the file it appears in.  I don't know whether there are
   actually any conflicts between statics in one file and statics in
   another, but there certainly could be.  (See the first sketch
   after this list.)

- Many files #define things, like manifest constants, and then leave
   them defined forever, letting the definitions die at the end of the
   compilation unit.  Your scheme would have to arrange for things to
   be #undeffed on occasion to prevent conflicts between one file's
   private #defines and another's.  (See the second sketch below.)
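
A minimal sketch of the first problem, with made-up file and symbol
names:

	/* a.c */
	static int state;	/* private to a.c when compiled alone */
	int a_get(void) { return state; }

	/* b.c */
	static double state;	/* private to b.c when compiled alone */
	double b_get(void) { return state; }

Each file compiles fine by itself, but catted into one compilation
unit the two "state" declarations collide with conflicting types, and
gcc rejects the result; the script would have to rename one of them.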
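
And the second problem, again with made-up names:

	/* tty.c */
	#define BUFSIZE 256
	char ttybuf[BUFSIZE];

	/* disk.c */
	#define BUFSIZE 8192	/* gcc: warning, "BUFSIZE" redefined */
	char diskbuf[BUFSIZE];

Concatenated, gcc at least warns about the redefinition; and if the
script "deduplicated" the second #define the way it removes multiple
includes, diskbuf would silently shrink to 256 bytes.  An #undef
BUFSIZE between the two files keeps each definition private.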

It's a cute idea, though.  Go ahead and try it; perhaps there are fewer
of the above conflicts than I fear.

					der Mouse

			    mouse@collatz.mcrcim.mcgill.edu