Subject: Re: kern/14721: It's possible to crash the system by opening a large number of files
To: <tech-kern@netbsd.org>
From: David Laight <David.Laight@btinternet.com>
List: tech-kern
Date: 11/25/2001 20:02:15
> Subject: kern/14721: It's possible to crash the system by opening
> a large number of files


> It seems that the code that allocates memory for a new file descriptor
> is not able to report ENOMEM to the calling process, it panics instead.
> I think this is a bug.

Agreed: on the grounds that, if nothing else, you shouldn't be able to
crash the system from user space...
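
To illustrate the shape of the fix (a sketch only, not NetBSD's actual
code - the function and member names here are invented), the path that
grows the descriptor table could simply fail softly:

    /*
     * Hypothetical sketch: grow the per-process descriptor table
     * with a non-sleeping allocation and hand ENOMEM back to the
     * caller instead of panicking when memory is short.
     */
    static int
    fd_grow(struct filedesc *fdp, int nfiles)
    {
            struct file **newtab;

            newtab = malloc(nfiles * sizeof(*newtab),
                M_FILEDESC, M_NOWAIT);
            if (newtab == NULL)
                    return ENOMEM;          /* open() sees ENOMEM */
            memcpy(newtab, fdp->fd_ofiles,
                fdp->fd_nfiles * sizeof(*newtab));
            free(fdp->fd_ofiles, M_FILEDESC);
            fdp->fd_ofiles = newtab;
            fdp->fd_nfiles = nfiles;
            return 0;
    }
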
> 
> Beyond this, we should at least document somewhere that setting a
> kern.* or ulimit maximum does not actually guarantee that the
> resource will be available when we request it.

Nothing (that I recall) in the X/Open spec stops open() (or t_open() or
socket(), which I've discussed at XNET meetings) from failing due to a
'transient lack of resources'.
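
So a careful caller should already treat such failures as retryable.
Something like this on the user side (illustrative only; the retry
count and backoff are arbitrary):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Retry open() a few times when the failure looks like a
     * transient kernel resource shortage. */
    int
    open_retry(const char *path, int flags)
    {
            int fd, tries;

            for (tries = 0; tries < 10; tries++) {
                    fd = open(path, flags);
                    if (fd >= 0 ||
                        (errno != ENOMEM && errno != ENFILE))
                            return fd;
                    usleep(100000);     /* back off, then retry */
            }
            return -1;      /* still failing; errno from last try */
    }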

IMHO it is better for the kernel to dynamically grab the required memory,
probably sleeping if none is currently available.  This also removes a
kernel #define which is (clearly) easy to misconfigure...
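
In malloc(9) terms the policy is just the flag passed to the allocator
(again a sketch; assuming a suitable malloc type such as M_FILE):

    /* Sketch: allocate a file structure, choosing policy by flag. */
    struct file *
    falloc_sketch(int canwait)
    {
            struct file *fp;

            /* M_WAITOK sleeps until memory is available, so the
             * open() stalls but never fails for lack of memory;
             * M_NOWAIT returns NULL, which becomes ENOMEM. */
            fp = malloc(sizeof(*fp), M_FILE,
                canwait ? M_WAITOK : M_NOWAIT);
            return fp;
    }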

How does NetBSD handle large numbers of fds per process?
SVR4 used a linked list of blocks of (about) 20 fds.  Unfortunately
this makes looking up an fd O(n) in the fd number, making poll() O(n^2)
and getting a large number of connections accepted into a single-process
listener O(n^3).  And yes, I have seen this be the limiting factor!
(poll() on over 1000 fds.)
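
To make the cost concrete, the lookup with chained blocks goes roughly
like this (a made-up rendering of the SVR4 scheme, not its source):

    #define NFPCHUNK 20     /* fds per block, roughly */

    struct ufchunk {
            struct ufchunk *uf_next;
            struct file *uf_ofile[NFPCHUNK];
    };

    /* O(fd) lookup: walk fd/NFPCHUNK blocks to reach the slot.
     * poll() does one lookup per fd, hence O(n^2) per call. */
    struct file *
    fd_to_file(struct ufchunk *head, int fd)
    {
            while (fd >= NFPCHUNK) {
                    head = head->uf_next;
                    fd -= NFPCHUNK;
            }
            return head->uf_ofile[fd];
    }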

    David