Subject: Re: Time to bump the default open files limit?
To: None <tech-kern@netbsd.org>
From: Christos Zoulas <christos@zoulas.com>
List: tech-kern
Date: 06/20/2002 21:37:06
In article <20020620140508.E1187@dr-evil.shagadelic.org>,
Jason R Thorpe <thorpej@wasabisystems.com> wrote:
>I just spent a couple of hours tracking down a weird Kerberos problem
>that turned out to be caused by adding a subnet to my network, thus
>bumping the number of file descriptors that the KDC has to open to > 64.
>
>The default open files limit is 64.
>
>Needless to say, this made using Kerberos kind of difficult.
>
>Is there any reason why this number is still so small?  It's not like
>this server is huge ... it just happens to be listening on IPv4 and IPv6
>on a few vlans.
>
>Would bumping the default limit to 128 make sense?

Since most programs use very few file descriptors and only a few use a
lot, I think we should leave the default as it is and have the few
programs that need more descriptors bump their own resource limits. We
should also fix the error handling in those programs, so that it does
not take hours to figure out what went wrong; it is not rocket science.
There is the opposing view that memory is cheap and going from 64 to
128 is just noise, but remember that there are still sun2's and VAXes
out there. Bumping the soft limit to the hard limit is easy:

#include <sys/resource.h>

#ifdef RLIMIT_NOFILE
	/*
	 * Raise the soft limit on open file descriptors
	 * to the hard limit.
	 */
	{
		struct rlimit rl;

		if (getrlimit(RLIMIT_NOFILE, &rl) != -1 &&
		    rl.rlim_cur != rl.rlim_max) {
			rl.rlim_cur = rl.rlim_max;
			/* Best effort; ignore failure. */
			(void) setrlimit(RLIMIT_NOFILE, &rl);
		}
	}
#endif
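
As for the error reporting, here is a minimal sketch of what I mean;
the helper name and the message wording are my own invention, not
anything in the tree. When a descriptor-creating call fails with
EMFILE or ENFILE, report the current limit so the cause is obvious:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/resource.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical wrapper: open a socket, and if we have run out of
 * file descriptors, say so explicitly instead of failing mysteriously.
 */
int
xsocket(int domain, int type, int protocol)
{
	int fd = socket(domain, type, protocol);

	if (fd == -1 && (errno == EMFILE || errno == ENFILE)) {
		struct rlimit rl;

		if (getrlimit(RLIMIT_NOFILE, &rl) != -1)
			(void) fprintf(stderr,
			    "socket: %s (soft limit %llu of %llu "
			    "descriptors; see setrlimit(2))\n",
			    strerror(errno),
			    (unsigned long long)rl.rlim_cur,
			    (unsigned long long)rl.rlim_max);
	}
	return fd;
}

A diagnostic like that would have turned Jason's couple of hours into
a couple of seconds.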

christos