NetBSD-Users archive


Re: Postfix and local mail delivery - still relevant in 2020?



At Sun, 7 Jun 2020 20:19:59 +0100, Sad Clouds <cryintothebluesky%gmail.com@localhost> wrote:
Subject: Re: Postfix and local mail delivery - still relevant in 2020?
>
> On Sun, 07 Jun 2020 10:35:09 -0700
> "Greg A. Woods" <woods%planix.com@localhost> wrote:
>
> > Now ideally what I want to do for embedded systems is static-link
> > every binary into one crunchgen binary.  I've done this for 5.2 on
> > i386, and the whole base system (or most of it, no compiler, tests,
> > or x11; and no ntpd or named or postfix) is just 7.4MB compressed.
> > Yes, just 7.4 megabytes:
>
> So when you run this binary as multiple concurrent processes, aren't
> they all going to use more RAM as they load duplicate copies of shared
> libraries that have been statically linked into it?

No, because it is all one static-linked binary.  There are no shared
libraries involved at all.
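
(For anyone who hasn't looked at what crunchgen produces: the one
binary dispatches on its own name, i.e. on argv[0].  Here's a minimal
sketch of the idea in C -- the program names and stub bodies are made
up for illustration, this is not the actual generated code:

	/*
	 * Crunchgen-style dispatch: one binary, many programs,
	 * selected by the name it was invoked under.
	 */
	#include <stdio.h>
	#include <string.h>

	static int cat_main(int argc, char **argv)  { /* ... */ return 0; }
	static int echo_main(int argc, char **argv) { /* ... */ return 0; }

	static const struct stub {
		const char *name;
		int (*fn)(int, char **);
	} stubs[] = {
		{ "cat",  cat_main },
		{ "echo", echo_main },
	};

	int
	main(int argc, char **argv)
	{
		const char *name = strrchr(argv[0], '/');
		size_t i;

		name = (name != NULL) ? name + 1 : argv[0];
		for (i = 0; i < sizeof(stubs) / sizeof(stubs[0]); i++)
			if (strcmp(name, stubs[i].name) == 0)
				return stubs[i].fn(argc, argv);
		fprintf(stderr, "%s: unknown program name\n", name);
		return 1;
	}

Each program name in the filesystem is then just a hard link (or a
symlink) to the one binary, so exec'ing /bin/cat runs the crunched
binary with argv[0] set to "cat".)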

Also "shared" libraries are a bit of a misnomer.

Yes, their pages are shared, _but_ some of the most expensive work in
using them -- the dynamic linker's symbol resolution and relocation
processing -- must be done on _every_ exec (at least in NetBSD), even
for repeated execs of the same program.

Meanwhile _all_ text pages are also shared for each binary.  I.e. NetBSD
demand-pages text pages from program files.  So all processes running,
say, /bin/sh will only ever have one shared set of text pages in use
between them all (plus one set of pages for each shared library), no
matter how many /bin/sh processes are running at any one time.

Now if you static-link /bin/sh you can start a new instance without
ever having to go to disk to run it (assuming its text pages are
already resident), and without first having to run the dynamic linker.
Individual static-linked programs start very fast, especially if
another instance is already running -- once the process is set up it is
literally just a jump to code already in memory, and it gets right to
work.
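
(If you want to measure this yourself, a rough harness like the
following will do -- nothing here is NetBSD-specific, and the paths you
feed it are up to you; e.g. compare a dynamically linked /bin/sh
against the static-linked crunched sh in /rescue:

	/*
	 * Time N repeated fork+exec cycles of a given program.
	 * Error handling kept minimal for brevity.
	 */
	#include <sys/wait.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	int
	main(int argc, char **argv)
	{
		struct timespec t0, t1;
		int i, n = 1000;
		double secs;

		if (argc < 2) {
			fprintf(stderr, "usage: %s program [args...]\n",
			    argv[0]);
			return 1;
		}
		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < n; i++) {
			pid_t pid = fork();
			if (pid == 0) {
				(void)execv(argv[1], argv + 1);
				_exit(127);	/* exec failed */
			}
			(void)waitpid(pid, NULL, 0);
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);
		secs = (double)(t1.tv_sec - t0.tv_sec) +
		    (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
		printf("%d execs: %.3f s total, %.3f ms each\n",
		    n, secs, secs / n * 1000.0);
		return 0;
	}

Run it as, say, "./execbench /rescue/sh -c true" and then
"./execbench /bin/sh -c true" and compare the per-exec times.)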

However when you put _all_ the code for _all_ the system's programs
into one single binary, with no shared libraries, then _all_ text pages
are shared across _all_ processes all of the time, no matter which
program each process is running as.

The time it takes to exec and start a new unique program (one not yet
running) is now just the time it takes to fault in, at most, the one
page of code containing its main() function.  As it branches out and
runs more of itself it may need to fault in a few more unique pages,
but as it makes library calls it will likely be jumping to code that
some other process has already loaded.

No matter how many of the 274 programs I put into that one 10MB binary
you run, and no matter how many instances of each you run, there will
NEVER EVER be more than 10MB of text pages in memory for _all_
processes simultaneously.  10MB total.  For the whole base system.  The
whole base system is really just 10MB of code, total.  Yes, each
process will allocate more pages for data and stack, but those are just
memory allocations, never paging from disk.
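
(For scale: at the 4KB page size used on i386, 10MB of text is at most
some 2560 text pages resident, total, system-wide, no matter how many
processes are running.)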

--
					Greg A. Woods <gwoods%acm.org@localhost>

Kelowna, BC     +1 250 762-7675           RoboHack <woods%robohack.ca@localhost>
Planix, Inc. <woods%planix.com@localhost>     Avoncote Farms <woods%avoncote.ca@localhost>



