Subject: Re: default maximum number of open file descriptors too small?
To: tech-kern@NetBSD.ORG
From: der Mouse <mouse@Collatz.McRCIM.McGill.EDU>
List: tech-kern
Date: 11/29/1995 16:42:13
I'm not sure why this is going to tech-kern; it looks as though it
overlaps both tech-kern and tech-userlevel, though I really don't see
any need to change the kernel interface.

> #define	FD_ZERO(p) 	{ \
> 	(p).fd_size = FD_SETSIZE; \
> 	(p).fds_bits = calloc(howmany((p).fd_size, NFDBITS), sizeof(fd_mask)); \
> }

Yeesh!  If FD_ZERO mallocs, a whole _lot_ of code will leak memory like
mad.  I've seen, and written, a good deal of code that reads like

	while (...)
	 { FD_ZERO(...);
	   ...FD_SET(...) calls...
	   select(...);
	   ...FD_ISSET(...) tests...
	 }
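
Concretely, such a loop tends to look like this (an invented little
echo loop, but the FD_ZERO-on-every-pass shape is what matters):

	#include <sys/types.h>
	#include <sys/time.h>
	#include <unistd.h>

	/* The set is zeroed afresh on each pass; if FD_ZERO allocated
	 * memory, every iteration would leak the previous allocation.
	 */
	void
	echo_loop(int fd)
	{
	 fd_set rfds;
	 char buf[1024];
	 ssize_t n;

	 while (1)
	  { FD_ZERO(&rfds);
	    FD_SET(fd,&rfds);
	    if (select(fd+1,&rfds,0,0,0) < 0) break;
	    if (FD_ISSET(fd,&rfds))
	     { n = read(fd,buf,sizeof(buf));
	       if (n <= 0) break;
	       write(fd,buf,n);
	     }
	  }
	}

With the calloc()ing FD_ZERO above, every trip around that loop
orphans the previous fds_bits allocation.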

I'd really hate to lose FD_SETSIZE bits of memory, plus malloc
overhead, every time around the loop.  I'd much prefer to just bump the
user-level FD_SETSIZE definition up to the maximum the kernel allows
and rebuild the world.
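
For a single program, the bump can even be made locally, with no world
rebuild, since the 4.4BSD <sys/types.h> defines FD_SETSIZE only under
#ifndef; it just has to happen before any header that defines fd_set,
and the kernel's open-file limit still has to cover it:

	/* Must precede <sys/types.h> (or anything that includes it). */
	#define FD_SETSIZE	2048

	#include <sys/types.h>
	#include <sys/time.h>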

Quite aside from that, all these macros need a do ... while (0) around
them, so they turn into exactly one statement when followed by a
semicolon.
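
Otherwise an invocation followed by a semicolon falls apart inside an
if/else. A minimal sketch, with invented names:

	struct myset { int count; };

	/* Brace-block version: the caller's trailing semicolon becomes
	 * an extra empty statement after the block.
	 */
	#define ZERO_BAD(p) { (p)->count = 0; }

	/* do ... while (0) version: exactly one statement, terminated
	 * by the caller's semicolon.
	 */
	#define ZERO_OK(p) do { (p)->count = 0; } while (0)

	void
	demo(struct myset *p, int ready)
	{
	 if (ready)
	  ZERO_OK(p);		/* fine */
	 else
	  p->count = -1;
	#if 0
	 if (ready)
	  ZERO_BAD(p);		/* "{ ... } ;" ends the if here... */
	 else
	  p->count = -1;	/* ...so this else has no if: syntax error */
	#endif
	}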

					der Mouse

			    mouse@collatz.mcrcim.mcgill.edu