tech-embed archive


Re: Shrinking NetBSD - deep final distribution re-linker

On Thursday 21 October 2004 09:44, Jared Momose wrote:
> netbsd loads pages of an executable into ram when they are needed. so,
> although your multi-call binary is quite large, only those portions
> associated with the command you run will be copied to ram, and only after
> they are accessed.
> now, to make things even more complicated (but for the better!), the
> multi-call binary is a *single file*, even though you hook into its
> different commands through different hard links. this means that
> read-only pages like text and read-only data will be shared among every
> command from the multi-call binary.
> i came up with an equation a while back for the max ram usage of a
> crunchgened binary and it went something like this:
> max ram usage = sum(0, n, STACK_n + DATA_n + BSS_n + HEAP_SIZE_n) + RODATA + TEXT
> this is worst case. also note that with a multi-call binary we are saving a
> few pages per process that would go to dynamic linking glue - not a lot,
> but when n gets big it might make a difference.
> for a system with swap space such considerations are not essential.
> however, if there is no swap space (i.e. embedded system w/o a hard drive)
> it might be a good idea to evaluate the above equation and adjust ram sizes
> accordingly (assuming your board vendor gives you options or you are
> spinning your own boards).

Thanks to all who replied to my post; the area of "how NetBSD manages to load 
executables" is now clear to me :)

Looks like crunchgen and the resulting crunched binary are a nice thing, 
especially since text sections are shared among different processes.

// wbr
