Subject: Re: Issue with large memory systems, and PPC overhead
To: Jolan Luff <firstname.lastname@example.org>
From: David Laight <email@example.com>
Date: 11/03/2002 16:15:38
On Sun, Nov 03, 2002 at 05:32:36AM -0600, Jolan Luff wrote:
> On Sun, Nov 03, 2002 at 11:21:54AM +0100, Jaromir Dolecek wrote:
> > > maxusers set to 256; my maxproc is 4116 (256 * 16 + 20). However, I cannot
> > Did you update your process limits? On my computer, the defaults
> > (as listed by ulimit in ksh) are:
> > processes 160
> maxproc = processes
Which is IMHO particularly stupid!
Even the 'hard' maximum values should be set to some sensible limit,
not to the system limit of the value.
Whether 'unlimited' should be reported as the actual system limit
is (possibly) open to question - but I think not.
I also think that fork() should report EAGAIN when there aren't enough
resources (typically memory or swap) to honour the request,
rather than because some preconceived system limit has been hit.
Disposing of MAXUSERS and NPROC (and a few other kernel build
constants) isn't that difficult.
Note that the callout code has an archaic dependency on NPROC
that should be removed. If anything the table size should
depend on HZ.
David Laight: firstname.lastname@example.org