Subject: Re: Is gcc slow? Or is our gcc slow?
To: UNIX hacker and security officer <greywolf@defender.vas.viewlogic.com>
From: Erik M. Theisen <etheisen@teclink.net>
List: current-users
Date: 04/09/1996 21:23:18
>I believe the reason is that /var on most sane systems has more space
>into which to shove those temporary files than does /tmp, even if /tmp
>happens to be its own filesystem.

No, I don't think so.  As of version 2.6.x (???), gcc began using the
deprecated macro "P_tmpdir" as one of the first places it tries to
dump its temp files.  On NetBSD this is "/var/tmp"; on SunOS it's
"/usr/tmp".  Check out <stdio.h> if you have the time.  This is
really bogus on gcc's part.
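
If you're curious, the effective search order boils down to something
like this.  Just a sketch of the idea, not gcc's actual code, and the
function name is mine:

	#include <stdio.h>	/* P_tmpdir, printf */
	#include <stdlib.h>	/* getenv */

	/* Roughly how a gcc-style driver ends up picking its temp dir. */
	static const char *
	pick_tmpdir(void)
	{
		const char *d;

		if ((d = getenv("TMPDIR")) != NULL)
			return d;	/* an explicit override always wins */
	#ifdef P_tmpdir
		return P_tmpdir;	/* "/var/tmp" on NetBSD -- the gripe */
	#else
		return "/tmp";		/* what it arguably should fall back to */
	#endif
	}

	int
	main(void)
	{
		printf("would use: %s\n", pick_tmpdir());
		return 0;
	}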

If /tmp isn't big enough, then gcc should bail out and say why.  You
can always set TMPDIR to override it if that happens.  "/var/tmp" is
for temporary files that are meant to survive a reboot, e.g. vi's
recovery files.
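
If you want to see where things will land on your particular box,
tempnam(3) checks TMPDIR and then P_tmpdir in much the same order, so
a throwaway test program like this will tell you (quick sketch only):

	#include <stdio.h>	/* P_tmpdir, tempnam, printf */
	#include <stdlib.h>	/* getenv, free */

	int
	main(void)
	{
		char *p = tempnam(NULL, "cc");	/* TMPDIR, then P_tmpdir */

		printf("P_tmpdir          = %s\n", P_tmpdir);
		printf("TMPDIR            = %s\n",
		    getenv("TMPDIR") ? getenv("TMPDIR") : "(unset)");
		printf("tempnam() chooses = %s\n", p ? p : "(failed)");
		free(p);
		return 0;
	}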

This is really a simple thing to fix, though.  I already have mods in
the OpenBSD tree that use "/tmp".

Of course, if you feel like doing it yourself, it's easy enough to
grep through gcc and wrap
	"#if 0 /* XXX -- gcc's using deprecated macro, bummer */" ... "#endif"
around the one or two lines here and there that tell it to use
"P_tmpdir".  Then everything should be fine.

By the way, a 12MB mfs-based /tmp seems to be good enough for a
single-user system compiling the world, maybe even X.
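
In case anyone wants the recipe, the fstab line is along these lines
(the device is just an example -- use your own swap partition -- and
-s is in 512-byte sectors, so 24576 comes out to 12MB):

	/dev/sd0b	/tmp	mfs	rw,-s=24576	0	0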