Subject: Re: Time to bump the default open files limit?
To: <>
From: David Laight <david@l8s.co.uk>
List: tech-kern
Date: 06/21/2002 22:04:05
>
> You realise that 64 is a soft limit; most processes are given rein to
> adjust that upward to whatever the hard limit is. At startup time,
> wherever it makes most sense, all one needs to do is up the soft limit
> on a per-case basis.
>
> I don't know of too many machines (yeahyeahyeah, "just because you don't
> know of them doesn't mean they don't exist yada yada yada.") that only
> allow 64 (or fewer) descriptors as a hard limit; almost certainly, if
> they do, they're probably not suited for the purpose of being, e.g., a
> KDC.
Isn't the 'hard' limit also 'soft', in the sense that 'root'
can increase it almost without limit?
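(For reference - not from the original mail - a minimal sketch of
raising the soft open-file limit up to the hard limit with
getrlimit()/setrlimit(); raising rlim_max beyond its current value
needs root.)

#include <sys/resource.h>

/* Raise the soft open-file limit to the current hard limit.
 * Raising rlim_max itself beyond its current value requires root. */
static int
raise_nofile_soft_limit(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
		return -1;
	rl.rlim_cur = rl.rlim_max;	/* soft limit up to hard limit */
	return setrlimit(RLIMIT_NOFILE, &rl);
}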
Although if ulimit(nofiles) is very large, programs (e.g. shells)
that want to close all fds above a certain number when started
(typically to avoid possible security problems) have difficulty
getting a sensible upper bound for the number of fds to close
- they typically use the ulimit value.
Of course it is possible to have an open file descriptor that is
above the current ulimit value...
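(Again for illustration, not from the original mail: the usual
close-everything idiom, using the RLIMIT_NOFILE soft limit as the
upper bound - which is exactly the bound that a descriptor opened
before the limit was lowered can sit above.)

#include <sys/resource.h>
#include <unistd.h>

/* Close every descriptor at or above 'lowfd', using the current
 * RLIMIT_NOFILE soft limit as the upper bound.  A descriptor opened
 * before the limit was lowered can sit above rlim_cur and be missed. */
static void
close_from(int lowfd)
{
	struct rlimit rl;
	long maxfd = sysconf(_SC_OPEN_MAX);	/* fallback guess */
	long fd;

	if (getrlimit(RLIMIT_NOFILE, &rl) == 0 &&
	    rl.rlim_cur != RLIM_INFINITY)
		maxfd = (long)rl.rlim_cur;

	for (fd = lowfd; fd < maxfd; fd++)
		(void)close((int)fd);
}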
The other problem that hit a certain OS was that the kernel
used a linked list (of blocks of about 20 or 32 entries each)
to find the 'file' structure from the fd number.
For large numbers of files that lookup is O(n), so functions
like poll() become O(n^2), and building a poll list
that long becomes O(n^3) - been there!
(I've not looked at how NetBSD does it.)
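(As a rough illustration only - not NetBSD's code, nor any particular
kernel's - the kind of lookup described above, a linked list of
fixed-size blocks, makes mapping an fd to its 'file' structure O(n)
in the fd number.)

#include <stddef.h>

/* Hypothetical sketch of the O(n) lookup described above: the fd
 * table is a linked list of small blocks, so mapping fd -> file
 * walks fd/NFDBLOCK links before indexing into the block. */
#define NFDBLOCK 32

struct file;				/* opaque per-open-file structure */

struct fdblock {
	struct fdblock *next;
	struct file *files[NFDBLOCK];
};

static struct file *
fd_to_file(struct fdblock *head, int fd)
{
	struct fdblock *blk = head;
	int i;

	for (i = fd / NFDBLOCK; i > 0 && blk != NULL; i--)
		blk = blk->next;	/* O(fd) pointer chasing */
	if (blk == NULL)
		return NULL;
	return blk->files[fd % NFDBLOCK];
}

Doing that lookup once per descriptor inside poll() is what turns
poll() itself into O(n^2).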
David
--
David Laight: david@l8s.co.uk