Subject: Re: Time to bump the default open files limit?
To: Greywolf <greywolf@starwolf.com>
From: Greg A. Woods <woods@weird.com>
List: tech-kern
Date: 06/22/2002 00:02:17
[ On Friday, June 21, 2002 at 16:16:57 (-0700), Greywolf wrote: ]
> Subject: Re: Time to bump the default open files limit? 
>
> On Fri, 21 Jun 2002, Greg A. Woods wrote:
> # 
> # It's not that kind of a soft limit though -- I think you have to change
> # OPEN_MAX if you're going to bump the default value for ulimit(nofiles).
> 
> Default, yes; highest allowed, no.
> 
> $ ulimit -n
> 64
> $ ulimit -n 128
> $ ulimit -n
> 128
> $
> 
> So the process can actually self-adjust its ulimit(nofiles).

Indeed it can -- which was, I believe, the whole point behind one of the
very first answers to Jason in this thread.

But there is no "highest allowed" hard limit in this case -- the real
ceiling is how big the file table currently is in the kernel, and that's
a system-wide parameter that only root can change.  It can't be changed
with the shell's 'ulimit' command, not for any one process, nor for all
processes.
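
To be concrete about the self-adjusting part: a process can do the
equivalent of 'ulimit -n' itself with getrlimit(2)/setrlimit(2) on
RLIMIT_NOFILE, raising its own soft limit as far as its hard limit.
Just a sketch, not code from anyone in this thread:

#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
		err(1, "getrlimit");

	printf("soft: %lld  hard: %lld\n",
	    (long long)rl.rlim_cur, (long long)rl.rlim_max);

	/*
	 * Ask for the per-process maximum.  The kernel's file table
	 * (kern.maxfiles) is a separate, system-wide ceiling that this
	 * call cannot touch.
	 */
	rl.rlim_cur = rl.rlim_max;
	if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
		err(1, "setrlimit");

	return (0);
}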

Jason's problem could only have been solved (without code changes,
AFAICT) by increasing OPEN_MAX and rebuilding the world.
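
The reason a world rebuild is needed is that OPEN_MAX is a compile-time
constant (via <limits.h>) that gets compiled into userland.  A quick way
to see the difference between the compiled-in constant and the value a
process actually gets at run time (again, just an illustrative sketch):

#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
#ifdef OPEN_MAX
	printf("OPEN_MAX (compiled in):           %d\n", OPEN_MAX);
#endif
	printf("sysconf(_SC_OPEN_MAX) (run time): %ld\n",
	    sysconf(_SC_OPEN_MAX));
	return (0);
}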

Jonathan's problem required increasing MAXFILES and rebuilding his
kernel, or at least increasing kern.maxfiles before the kernel spewed
the following message on his console:

	file: table is full - increase kern.maxfiles or MAXFILES
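
For the record, kern.maxfiles can be raised on a running system with
"sysctl -w kern.maxfiles=NNN" (as root), or programmatically with
sysctl(3).  A rough sketch -- the doubling below is just an arbitrary
example value:

#include <sys/param.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int mib[2] = { CTL_KERN, KERN_MAXFILES };
	int maxfiles, newmax;
	size_t len = sizeof(maxfiles);

	if (sysctl(mib, 2, &maxfiles, &len, NULL, 0) == -1)
		err(1, "sysctl kern.maxfiles");
	printf("kern.maxfiles = %d\n", maxfiles);

	newmax = maxfiles * 2;		/* arbitrary example value */
	if (sysctl(mib, 2, NULL, NULL, &newmax, sizeof(newmax)) == -1)
		err(1, "raising kern.maxfiles (needs root)");

	return (0);
}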

IMNSHO neither problem is worth bumping OPEN_MAX, or making any other
changes in the kernel for that matter.  Many user-level programs,
though, may require better error handling for both the EMFILE and
ENFILE error conditions.
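
Roughly the kind of error handling I mean -- just a sketch, with a
made-up wrapper name and messages -- is to tell the two cases apart
instead of treating every failed open(2) the same:

#include <errno.h>
#include <fcntl.h>
#include <syslog.h>
#include <unistd.h>

int
open_with_report(const char *path)
{
	int fd = open(path, O_RDONLY);

	if (fd == -1) {
		switch (errno) {
		case EMFILE:
			/* this process hit its own descriptor limit */
			syslog(LOG_NOTICE,
			    "open %s: per-process fd limit (EMFILE)", path);
			break;
		case ENFILE:
			/* the system-wide file table is full */
			syslog(LOG_NOTICE,
			    "open %s: kernel file table full (ENFILE)", path);
			break;
		default:
			syslog(LOG_ERR, "open %s: %m", path);
			break;
		}
	}
	return (fd);
}

A caller that gets EMFILE might close cached descriptors and retry; one
that gets ENFILE can really only back off and try again later.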

(though I suppose a message to /dev/klog at LOG_NOTICE level might make
sense, so as to make debugging EMFILE conditions a little easier.)

-- 
								Greg A. Woods

+1 416 218-0098;  <gwoods@acm.org>;  <g.a.woods@ieee.org>;  <woods@robohack.ca>
Planix, Inc. <woods@planix.com>; VE3TCP; Secrets of the Weird <woods@weird.com>