Subject: kern/14721: It's possible to crash the system by opening a large number of files
To: None <firstname.lastname@example.org>
From: None <email@example.com>
Date: 11/25/2001 11:19:36
>Synopsis: It's possible to crash the system by opening a large number of files
>Arrival-Date: Sun Nov 25 11:20:00 PST 2001
>Originator: Emmanuel Dreyfus
>Organization:
The NetBSD Project
>Environment:
NetBSD plume 1.5Y NetBSD 1.5Y (IRIX) #13: Sun Nov 25 15:20:49 CET 2001 manu@plume:/cvs/src/sys/arch/sgimips/compile/IRIX sgimips
>Description:
It seems it's possible to set kern.maxfiles to an arbitrary value.
Provided that the per-process limit on open file descriptors is set
high enough using ulimit, it is then possible to panic the kernel by
opening a large number of files.
#sysctl -w kern.maxfiles=900000000
kern.maxfiles: 1772 -> 900000000
#ulimit -n 900000000
#cat > fdcount.c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
	int i, fd, count;
	fd = open("/dev/null", O_RDONLY, 0);	/* open until the kernel refuses */
	for (count = 1; open("/dev/null", O_RDONLY, 0) != -1; count++);
	for (i = fd; i < fd + count; close(i++));	/* close all we opened */
	printf("I opened %d descriptors\n", count);
	return 0;
}
#cc -o fdcount fdcount.c
#./fdcount
panic: malloc: out of space in kmem_map
Stopped in pid 222 (fdcount) at 0x880f84c4: jr ra
I'm not sure where the bug is. As I understand it, kern.maxfiles is
just a limit on the allocation of space for file descriptors; the
space itself is only allocated when we want to use it.
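For reference, here is a minimal userland sketch (my own example, not
from the tree) that reads kern.maxfiles through sysctl(3). It only
reports the configured ceiling; it says nothing about whether the
kernel could actually back that many open files with memory:

#include <sys/param.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int mib[2] = { CTL_KERN, KERN_MAXFILES };
	int maxfiles;
	size_t len = sizeof(maxfiles);

	if (sysctl(mib, 2, &maxfiles, &len, NULL, 0) == -1) {
		perror("sysctl");
		return 1;
	}
	/* a ceiling only: nothing is reserved for these files */
	printf("kern.maxfiles = %d\n", maxfiles);
	return 0;
}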
It seems that the code that allocates memory for a new file descriptor
is unable to report ENOMEM to the calling process; it panics instead.
I think this is a bug.
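To illustrate, here is a hypothetical sketch (the helper fd_grow() and
its exact shape are my invention, not the real fdalloc()/fdexpand()
code) of how descriptor table growth could fail gracefully: allocating
with M_NOWAIT instead of M_WAITOK makes malloc() return NULL when
kmem_map is exhausted, so the error can come back to the process as
ENOMEM instead of the panic above:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/malloc.h>
#include <sys/file.h>
#include <sys/filedesc.h>

int
fd_grow(struct filedesc *fdp, int nfiles)
{
	struct file **newofile;

	/* M_NOWAIT: return NULL on exhaustion instead of panicking */
	newofile = malloc(nfiles * sizeof(struct file *),
	    M_FILEDESC, M_NOWAIT);
	if (newofile == NULL)
		return (ENOMEM);	/* report failure, don't panic */
	memcpy(newofile, fdp->fd_ofiles,
	    fdp->fd_nfiles * sizeof(struct file *));
	free(fdp->fd_ofiles, M_FILEDESC);
	fdp->fd_ofiles = newofile;
	fdp->fd_nfiles = nfiles;
	return (0);
}

The real code would also have to grow fd_ofileflags and zero the new
slots; the point is only that the allocation failure is reportable.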
Beyond this, we should at least document somewhere that setting a
kern.* or ulimit maximum does not actually guarantee that the
resource will be available when we request it. Of course it would be
better to ensure it, but I'm not sure that could be done without
eating a lot of kernel memory for nothing. Is there anything in the
X/Open standards about the behaviour of ulimit?