Subject: Re: disk partition size
To: Robert.V.Baron <email@example.com>
From: Chief Anarchic Officer <firstname.lastname@example.org>
Date: 12/07/1997 00:01:24
* 1. I think that most people do not want crash dumps. (we need a
I want crash dumps but cannot get them because, on my system,
sizeof(physmem) > sizeof(first_swap_partition). But your thought is wrong
in my case, at least. Please don't assume (you know what THAT does :-).
* 2. I think that a 32Meg root and a 120 Meg usr work for most systems
32MB root is probably overkill (it is for me, anyway). 120MB /usr is
likely a bit small (mine's running out at 140MB). Someone
commented about not having each subsystem on its own disk. I'll leave
it to everyone else to comment on why such an arrangement is/is not wise
[I _like_ different mount points, where possible, for /var, /usr/src,
/usr/X11, /var/spool/news if you're a news server, and /users (or /u,
/home, /usr1 (ew!), /whatever)], except that historically for performance
reasons it was desirable to have / and /usr not only separate filesystems,
but separate disk spindles and, if possible, separate disk controllers
to maximize bandwidth.
[Keeping a spare set of bootblocks and a spare kernel on /usr is helpful
as well, even if you don't intend to use it as your root device.]
* 3. Given 2. you have a large usr1.
Even not given 2, you have plenty of spare space available for use.
* 4. If you need to dump, just make usr1 a critical fs (it gets mounted
* early) and have crash link to it. This would seem to be a good
If you need to dump, we should only be dumping actually used pages,
not the entirety of core. My system crashes infrequently, and I'm usually
the only user on it and not consuming much memory at all. I don't think
I've had to swap since I started using NetBSD/sparc 1.1.
* 5. swap is funny. I buy memory not to swap period; not to swap larger
* programs. I think that there should be some limit on swap to 64Meg
* so large systems don't get large swap.
Default setup, IIRC, for older systems, was:
sizeof(swap) >= sizeof(physmem) * 1.1.
The reason was that the old (4.2BSD) paging system preallocated backing
store for every page of physical memory, to keep swapping fast and
reliable, so the amount of physmem it would actually use was capped by
the swap available; if you had 16M present in the system but only 8M of
swap, it would only use 8M of memory.
The extra 10% was for, I think, performance reasons, following the same
logic as the minfree on a filesystem.
A reasonable minimum these days seems to be 1.5 * sizeof(physmem);
for each "heavy" server process you run (X, NFS, etc.), add about 25%.
[DNS seems to be a reasonably light load; I'd add 10% for that.]
You hit the point of diminishing returns once RAM gets up around 512MB,
at which point you revert to the 10% rule, just to be safe (so someone
showed me -- it doesn't make sense to dedicate a gig to swap space for
a half gig of core).
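The heuristic above can be sketched roughly as follows. This is just my
own illustration, not any BSD tool; the function name is made up, and I'm
reading the per-server percentages as additions to the 1.5x multiplier,
which is one plausible interpretation of "add about 25%":

```python
def recommended_swap_mb(physmem_mb, heavy_servers=0, light_servers=0):
    """Suggest a swap size in MB per the heuristic above.

    Base rule: swap = 1.5 * physmem.
    Add 25% per "heavy" server process (X, NFS, etc.) and
    10% per light one (e.g. DNS).
    Once RAM reaches ~512MB, revert to the 10% rule.
    """
    if physmem_mb >= 512:
        # diminishing returns: just physmem plus 10%, a la minfree
        return physmem_mb * 1.10
    factor = 1.5 + 0.25 * heavy_servers + 0.10 * light_servers
    return physmem_mb * factor

# e.g. a 64MB workstation running X and NFS:
#   64 * (1.5 + 0.25 + 0.25) = 128MB of swap
print(recommended_swap_mb(64, heavy_servers=2))
```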
Just my two longwords worth,
BSD -> Solaris: It could be worse.
UNIX -> NT: It's worse.