tech-userlevel archive


Re: Reducing libc size

On 29-Apr-08, at 4:09 AM, Mikko Rapeli wrote:

On Tue, Apr 29, 2008 at 08:02:47AM +0100, David Laight wrote:
No: if you run multiple copies of a static binary, the code and read-only data are shared between the processes, and any writable data is shared copy-on-write.
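As a rough illustration of why the text can be shared (a Linux-specific sketch using /proc; on NetBSD, pmap(1) shows the same information), the executable's code shows up as a read-only, executable mapping, which the kernel can back with the same physical pages for every process running that binary:

```shell
# Linux-specific sketch: list this process's executable mappings.
# "r-xp" = readable, executable, not writable -- pages mapped this way
# can be shared read-only among all processes running the same binary.
grep 'r-xp' /proc/self/maps
```

The same reasoning applies to the read-only data segment; only writable pages need per-process copies, and those are created lazily, on first write.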

The crunched programs are slightly smaller (since they don't need a PIC libc), but the main benefit is that the linker selects only the required parts of libc for you.

Hmm, I think I've got this now. Is there a way I could easily show this memory-saving behaviour in numbers?

Before I started hacking with libc I thought about measuring the memory
consumption, but ended up looking at the /lib/ file size
since it was so easy.

Well, that's a gross approximation; what you really want to pay close attention to is the output of the "size" command. You can examine the code (text) and data (data and bss) requirements of each .o, .so, or executable program to see exactly what's using what.
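For instance (output format and numbers vary between systems and toolchains; the paths below are only illustrative):

```shell
# Show the text, data, and bss sizes of a binary.
size /bin/sh
# The same works on object files and shared libraries, e.g.:
#   size /usr/lib/libc.so     (path varies by system)
```

The text and read-only data columns are the parts that can be shared between processes; data and bss are what each process must eventually pay for itself.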

Here the 140k sized sh's are dynamic /bin/sh's and the 6868k sized sh's are static /rescue/sh's. I suppose SIZE is not important, but does RES contain the copy-on-write pages shared by a number of processes? If it does, the per-process memory usage is not relevant, and I should be looking at the total memory used. Correct?

Indeed, the final memory utilization (and paging activity) of the _running_ application(s) is the ultimate indication of how successful your efforts at managing program size have been.

Note that the total process size is indeed important, though it can be misleading: many programs make very poor use of the total memory they allocate, often leaving many pages never touched. However, if a program does eventually, or even occasionally, touch every page of memory allocated to it, then eventually each of those pages will have to be made resident in memory for things to work. This assumes not only that every data structure is accessed in some way, but also that pretty much every line of code is executed.

I.e. only in the best of circumstances will all data pages of a process remain in memory for the entire duration of its execution, and of course only one instruction from each page of code need be executed to require the whole page be loaded into RAM.
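One way to see this demand-paged behaviour (a Linux-specific sketch using /proc; on NetBSD, comparing SIZE and RES in top(1) shows the same gap) is to compare a process's total virtual size with the subset of pages it has actually touched:

```shell
# Linux-specific: VmSize counts every page mapped into the address
# space; VmRSS counts only the pages actually touched and currently
# resident in RAM.  The difference is memory allocated but never used.
grep -E 'VmSize|VmRSS' /proc/$$/status
```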

The number of resident pages is more indicative of how much real memory is required, but of course this number is only valid _while_ the program(s) are doing their normal _active_ workload. I.e. when you've stopped them, either with ^Z or by putting them in the background when they are idle and waiting for input, then all your results can quickly become bogus.
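To sample those numbers, something like the following works on both NetBSD and Linux (the -o column names are the common ones, though exact availability varies by ps implementation); the key point is to take the sample while the process is under its normal workload, not while it sits idle:

```shell
# VSZ is the total virtual size, RSS the resident set, for this shell.
ps -o pid,vsz,rss,comm -p $$
```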

                                        Greg A. Woods; Planix, Inc.
