Subject: kern/14721: It's possible to crash the system by opening a large number of files
To: None <gnats-bugs@gnats.netbsd.org>
From: None <manu@netbsd.org>
List: netbsd-bugs
Date: 11/25/2001 11:19:36
>Number:         14721
>Category:       kern
>Synopsis:       It's possible to crash the system by opening a large number of files
>Confidential:   no
>Severity:       critical
>Priority:       medium
>Responsible:    kern-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Sun Nov 25 11:20:00 PST 2001
>Closed-Date:
>Last-Modified:
>Originator:     Emmanuel Dreyfus
>Release:        NetBSD-current
>Organization:
The NetBSD Project
>Environment:
NetBSD plume 1.5Y NetBSD 1.5Y (IRIX) #13: Sun Nov 25 15:20:49 CET 2001     manu@plume:/cvs/src/sys/arch/sgimips/compile/IRIX sgimips
>Description:
It seems it's possible to set kern.maxfiles to an arbitrary value.
Provided that the per-process limit on open file descriptors is set
high enough using ulimit, it is then possible to panic the kernel by
opening a large number of files.
>How-To-Repeat:
#sysctl -w kern.maxfiles=900000000
kern.maxfiles: 1772 -> 900000000
#ulimit -n 900000000               
#cat > fdcount.c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int
main(int argc, char **argv)
{
        int i, fd, count;

        /* Open /dev/null repeatedly until the kernel refuses. */
        fd = open("/dev/null", O_RDONLY);
        for (count = 1; open("/dev/null", O_RDONLY) != -1; count++)
                ;
        /* Descriptors run from fd to fd + count - 1. */
        for (i = fd; i < fd + count; close(i++))
                ;
        printf("I opened %d descriptors\n", count);

        return 0;
}
#cc -o fdcount fdcount.c
#./fdcount
panic: malloc: out of space in kmem_map
Stopped in pid 222 (fdcount) at 0x880f84c4:     jr      ra
                bdslot: nop
>Fix:
I'm not sure where the bug is. As I understand it, kern.maxfiles is
just an upper bound on the allocation of space for file descriptors;
the space itself is only allocated when we actually want to use it.

It seems that the code that allocates memory for a new file descriptor
is unable to report ENOMEM to the calling process, and panics instead.
I think this is a bug.

Beyond this, we should at least document somewhere that setting a
kern.* or ulimit maximum does not actually guarantee that the
resource will be available when we request it. Of course it would be
better to guarantee it, but I'm not sure that could be done without
eating a lot of kernel memory for nothing. Is there anything in the
X/Open standards about the behaviour of ulimit?
>Release-Note:
>Audit-Trail:
>Unformatted: