Subject: Re: increase default for number of files descriptors per process (was Re: Speeding up "pstat -T")
To: <>
From: David Laight <david@l8s.co.uk>
List: tech-kern
Date: 10/06/2003 18:33:43
> A machine with 4G of physical memory probably can afford to
> let user processes have a bit of a larger default stack size than a
> machine with 4M of memory, right?
Why should the amount of physical memory have any effect on the
default stack size limit of a process? If anything, the amount of
virtual memory should be used, but even that isn't really relevant
(unless it is very small).
A 'large' system might be used to run one big program, or hundreds
of small ones. The kernel can't really tell.
As I've said before, IMHO the rlimit.rlim_max values should be properties
of the installation, not of the machine it is installed on.
My system (256MB memory, 1.6W GENERIC kernel) gives:
proc.curproc.rlimit.memoryuse.soft = 254296064
proc.curproc.rlimit.memoryuse.hard = 254296064
proc.curproc.rlimit.memorylocked.soft = 84765354
proc.curproc.rlimit.memorylocked.hard = 254296064
proc.curproc.rlimit.maxproc.soft = 160
proc.curproc.rlimit.maxproc.hard = 532
proc.curproc.rlimit.descriptors.soft = 64
proc.curproc.rlimit.descriptors.hard = 1772
which seems to have mixed up the 'amount of resource available'
with the 'amount of resource we are willing to give a single entity'.
With these values any user can use up:
a) (almost) all the processes
b) all the 'file' structures
c) all the physical memory
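
For what it's worth, (b) is easy to demonstrate from an ordinary
account.  Something along these lines (just an illustrative sketch
using the standard getrlimit/setrlimit and open interfaces, nothing
NetBSD-specific) bumps the soft descriptor limit up to the hard limit
and then opens /dev/null until the kernel says no:

#include <sys/time.h>
#include <sys/resource.h>
#include <fcntl.h>
#include <stdio.h>

int
main(void)
{
	struct rlimit rl;
	int count = 0;

	if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
		perror("getrlimit");
		return 1;
	}
	rl.rlim_cur = rl.rlim_max;	/* soft := hard, no privilege needed */
	if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
		perror("setrlimit");
		return 1;
	}
	/* each successful open ties up another kernel 'file' structure */
	while (open("/dev/null", O_RDONLY) != -1)
		count++;
	printf("opened %d descriptors\n", count);
	return 0;
}

A handful of those running in parallel will chew through the
system-wide file table.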
I'm not sure, but I suspect that some of the 'hard' limits should
actually be set to match the current 'soft' limits.
After all, very few programs actually try to increase the soft limits
(except for descriptors).
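
The other half of the mechanism is that a process (or the shell that
starts a login session) can already clamp its own hard limits down to
the soft values; for an unprivileged process the change is one-way.
Roughly (again just a sketch with the standard interfaces):

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int
main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
		perror("getrlimit");
		return 1;
	}
	rl.rlim_max = rl.rlim_cur;	/* hard := soft */
	if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
		perror("setrlimit");
		return 1;
	}
	/* without privilege the hard limit can never be raised again,
	 * so anything exec'd from here on inherits the tighter ceiling */
	printf("descriptor limit now %lld/%lld (soft/hard)\n",
	    (long long)rl.rlim_cur, (long long)rl.rlim_max);
	return 0;
}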
David
--
David Laight: david@l8s.co.uk