Subject: Re: poll(2) oddity
From: David Laight <email@example.com>
Date: 07/08/2002 14:30:28
> Seriously - if the array is big enough (doesn't matter if sparse
> or not), the memory copy of the poll array from userland to kernel
> space takes the majority of the time spent in the poll(2) call.
> This is one of the reasons why kqueue was implemented, and why
> e.g. Sun's /dev/poll exists.
Nope - I think I know who implemented that for Sun.
The actual problem is that poll is O(n) in the number of fds
(and the user copy is nothing like the largest part of that).
So if a process has (say) 1000 fds in its poll list, performing
a single action involves traversing all the requested files for
every event (and, if Solaris has the same poll() implementation
as SVR4, linking a small data area onto each one).
So actioning one event on each of n files becomes O(n^2)
- or O(n^3) if the kernel is using a linked list of blocks of
fd number -> file ptr structures.
> OPEN_MAX is compile time constant. sysconf(_SC_OPEN_MAX) is different
> story :) If you reference OPEN_MAX in your code, you get '64' currently
> always - see <sys/syslimits.h>.
Yes - but the OPEN_MAX referred to in the SuS poll() page is tagged
to mean its dynamic equivalent.
> Is it really true that close() returns error for the high-numbered
> fds once you lower rlimit(NOFILES)? I seriously doubt that. If
> this is true, it's a bug and should be fixed - can you send-pr it,
No, that isn't what I meant. Some programs (e.g. any sensible shell)
do something like:
	getrlimit(RLIMIT_NOFILE, &fdlim);
	while (--fdlim.rlim_cur > 2)
		close(fdlim.rlim_cur);
in order not to pass any extraneous files on to their children.
David Laight: firstname.lastname@example.org