Subject: Re: tcsh static/dynamic mem usage
To: netbsd-current-users <current-users@sun-lamp.cs.berkeley.edu>
From: Thomas Eberhardt <thomas@mathematik.uni-Bremen.de>
List: current-users
Date: 11/21/1993 22:27:04
> 
> > 10198 root       3    0  936K  644K sleep   0:00  3.30%  1.66% tcsh.dynamic
> > 10196 root      18    0  552K  428K sleep   0:00  0.90%  0.59% tcsh.static
> 
> > The dynamically linked version uses much more memory than the statically
> > linked version (huh?). Or is 'top' just confused by the shared memory
> > pages?
> 
> I can't explain the 400K difference in the 'process size' field.
> The increase in `resident' size is perfectly explainable from the
> run-time linker's actions. ld.so itself is about 70K in size, it uses
> stack space, and it touches the data areas of shared libraries, all before
> it gets around to calling `_main()'.
> 
> A better start to evaluate shlib overhead would be a "null" program, something
> like "main(){return 0;}" or "main(){sleep(100);}". I did some quick tests
> with the former some weeks ago, and found that it runs three times slower
> when linked dynamically (16ms vs 0ms user time, 40ms vs. 20ms system time).
> 
> -pk
> 

It's even more dramatic when you do an "unlimit stack".  Every process will
then show a >8000K virtual process size.

It seems that the virtual size includes the entire stack segment, since
the runtime loader mmap's some memory at the bottom end of the stack.

So the 400K difference is just due to the default 512K stack limit.

-- 
thomas@mathematik.uni-Bremen.de | Centrum für Complexe Systeme & Visualisierung
Thomas Eberhardt                | Universität Bremen, FB 3, Bibliothekstr. 1
Kölner Str. 4, D-28327 Bremen   | D-28359 Bremen, Germany
Home Phone: +49 421 472527      | FAX: +49 421 218-4236, Office: 218-4823
