Subject: Re: Time to bump the default open files limit?
To: Jason R Thorpe <thorpej@wasabisystems.com>
From: David Laight <david@l8s.co.uk>
List: tech-kern
Date: 06/22/2002 13:58:46
On Thu, Jun 20, 2002 at 02:05:08PM -0700, Jason R Thorpe wrote:
> I just spent a couple of hours tracking down a weird Kerberos problem
> that turned out to be caused by adding a subnet to my network, thus
> bumping the number of file descriptors that KDC has to open to > 64.
> 
> The default open files limit is 64.
> 
> Needless to say, this made using Kerberos kind of difficult.
> 
> Is there any reason why this number is still so small?  It's not like
> this server is huge ... it just happens to be listening on IPv4 and IPv6
> on a few vlans.
> 
> Would bumping the default limit to 128 make sense?

I've just grovelled through a load of kernel code and have a few
suggestions.

Firstly, note that both the per-process limit and the kernel limit
are 'soft' - i.e. they can be increased at run time.
It also costs nothing to have a high limit.
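
For example, a process can raise its own soft limit as far as the
hard limit with setrlimit(2) - a minimal sketch, error handling
trimmed:

	#include <sys/resource.h>

	/* Raise the soft descriptor limit up to the hard limit. */
	static int
	raise_nofile(void)
	{
		struct rlimit rl;

		if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
			return -1;
		rl.rlim_cur = rl.rlim_max;	/* soft may go up to hard */
		return setrlimit(RLIMIT_NOFILE, &rl);
	}

and root can raise the kernel limit with 'sysctl -w kern.maxfiles=N'.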

I'd suggest the following:

1) Increase the default per-process limit to (say) 256
   (with a limit of 256, descriptor numbers still fit in a
   byte, in case anything stores them that way)

(on my system mozilla has 34 of its 64 descriptors open, X 21
and xsm 18 - all could easily run out)

2) Fix rlimit.descriptors.hard at something sensible - 1024.
   It is currently kern.maxfiles - and remember that the
   'ulimit -n' value sets both the soft and hard limits
   (for no good reason).

3) Allow the kernel limit to be exceeded by:
   - root
   - processes with fewer than (say) 20 open files
   At least while the total number of open files is below
   double kern.maxfiles; beyond that, perhaps only root
   processes with a small number of open files should be
   allowed to continue working.
   (A sketch of this check follows the list.)
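
As a sketch of (3), the check at file-open time might look
something like this - names invented, not the real kernel
identifiers: 'nfiles'/'maxfiles' stand for the global open-file
count and kern.maxfiles, 'nopen' for the caller's open
descriptor count:

	extern int nfiles, maxfiles;

	/* Hypothetical policy: may this process open another file? */
	static int
	may_open_file(int is_root, int nopen)
	{
		if (nfiles < maxfiles)
			return 1;			/* under the limit */
		if (nfiles < 2 * maxfiles)
			return is_root || nopen < 20;	/* overflow zone */
		return is_root && nopen < 20;		/* hard ceiling */
	}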

OTOH why not kill kern.maxfiles as a global limit altogether,
and keep it just as an upper bound on the number of file
descriptors per process?

Additionally, maybe more than one process slot ought to be
reserved for 'root' - otherwise, once the process table fills,
root can't log in and look at anything.
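
Something like the following would do - a sketch only,
PROC_RESERVE is an invented tunable and 'nprocs'/'maxproc'
merely mirror the kernel globals:

	#define PROC_RESERVE	5	/* slots held back for uid 0 */

	extern int nprocs, maxproc;

	/* Hypothetical fork-time check reserving slots for root. */
	static int
	may_fork(int is_root)
	{
		if (is_root)
			return nprocs < maxproc;
		return nprocs < maxproc - PROC_RESERVE;
	}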

	David

-- 
David Laight: david@l8s.co.uk