Subject: Re: increase default for number of files descriptors per process (was Re: Speeding up "pstat -T")
To: Perry E. Metzger <email@example.com>
From: Andrew Brown <firstname.lastname@example.org>
Date: 10/05/2003 18:50:08
On Sun, Oct 05, 2003 at 02:09:06PM -0400, Perry E. Metzger wrote:
>Klaus Heinz <email@example.com> writes:
>> > (BTW, i wouldn't object to the default rlim_cur for open fd's being
>> > increased from 64 to 128.)
>> With many applications being developed mainly under Linux this seems to
>> be a good idea. Most Linux distributions (at least those available
>> on testdrive.hp.com) seem to have a limit of 1024.
>Perhaps we should set the limit to 1024, then.
>BTW, I've been wondering about a lot of our limits for some time. I
>think that perhaps our kernels should compute them at boot time based
>on available resources rather than forcing a "one size fits all" on
>everyone. A machine with 4G of physical memory probably can afford to
>let user processes have a bit of a larger default stack size than a
>machine with 4M of memory, right?
(1) that sounds reasonable, but in practice, how many applications do
you use that actually run up against the default soft limit of two
megabytes?
(2) i have plans to eliminate the hard limit entirely (at least after
an exec), so raising MAXSSIZ depending on memory detected at boot may
also become a moot point.
|-----< "CODE WARRIOR" >-----|
firstname.lastname@example.org * "ah! i see you have the internet
email@example.com (Andrew Brown) that goes *ping*!"
firstname.lastname@example.org * "information is power -- share the wealth."