Subject: Re: make eats (too much?) memory
To: None <kpneal@pobox.com>
From: Sean Davis <dive@endersgame.net>
List: current-users
Date: 02/24/2004 20:27:12
On Tue, Feb 24, 2004 at 07:15:51PM -0500, kpneal@pobox.com wrote:
> On Tue, Feb 24, 2004 at 12:48:45AM -0500, Sean Davis wrote:
> > On Mon, Feb 23, 2004 at 10:44:50PM -0500, kpneal@pobox.com wrote:
> > > On Sat, Feb 21, 2004 at 04:06:39PM +0100, Klaus Heinz wrote:
> > > > Is this expected behaviour of "make"? Why can I continue a build of libc
> > > > after "make" reached 32MB of data and aborted? Does it keep state data
> > > > about the already built files or might there be a memory leak?
> > > 
> > > Wasn't make changed some years ago to never free memory? This made
> > > it run faster I think. Unless I'm wrong. 
> > 
> > That sounds like an absurd thing to do (never free memory), whether it makes
> > it faster or not...
> 
> It makes some sense in that most makes either don't run for very long
> or don't have very many targets to build. Recursive makes get their
> own address spaces, and when any make ends the memory it had allocated
> gets freed. 

Well, it makes sense that it may well speed up recursive makes and the
like, but I've always considered allocating memory and never freeing it Bad
Practice, even if the OS does reclaim the memory eventually anyway. This may
just be my own opinion, but I prefer my programs to take care of memory in
every way (i.e., allocate it, make sure it isn't overwritten, and
free it when the program is done with it).

I have several programs that free() buffers right before they exit(). It
just feels wrong to me to allocate and never free memory.

> It might be useful to have this behavior in make controlled by a
> define that is set on smaller/older platforms. 

Agreed.
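Something along these lines, maybe (purely a sketch of mine, not the real
make(1) source; MAKE_REALLY_FREE and NODE_FREE are names I made up): one
define selects between genuinely freeing the graph on small-memory
platforms and a no-op that leaves reclamation to exit().

```c
#include <stdlib.h>

/* Hypothetical build-time switch: define MAKE_REALLY_FREE on
 * smaller/older platforms to actually release node memory; leave it
 * undefined to skip the bookkeeping and let exit() reclaim the pages. */
#ifdef MAKE_REALLY_FREE
#define NODE_FREE(p)	free(p)
#else
#define NODE_FREE(p)	((void)(p))	/* no-op; OS reclaims at exit */
#endif

struct node {
	char *name;
	struct node *next;
};

/* Walk the list, "freeing" each node either way; returns the node
 * count so the traversal is observable under both builds. */
static int
free_graph(struct node *head)
{
	int n = 0;

	while (head != NULL) {
		struct node *next = head->next;

		NODE_FREE(head->name);
		NODE_FREE(head);
		n++;
		head = next;
	}
	return n;
}
```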

> That is, if I'm not wrong in my original guess. 

True, I haven't checked the source.

-Sean

--
/~\ The ASCII
\ / Ribbon Campaign                   Sean Davis
 X  Against HTML                       aka dive
/ \ Email!