Re: select/poll optimization
David Laight <david%l8s.co.uk@localhost> wrote:
> Another unreadable diff .....
> It would be easier to show us the new function in its entirety.
> Or at least order the functions so that diff doesn't compare two
> different functions.
You can apply the patch and read the patched source :)
> I'm not sure the collision case is worth optimising for. I've always
> liked the fact that our poll/select code manages not to have to link
> a per-caller memory block onto each driver area.
Why? In the case of the per-thread approach, it does not increase complexity.
> Although you save the 2nd selscan/pollscan call, you have to unlink
> the data blocks - which is arguably a more expensive operation - at
> the end of the call.
I am not sure what you mean. Do you mean the overhead of the locks in the
sel_lwp_end function? A few preliminary benchmarks did not show a significant
difference in comparison with the per-CPU approach.
> You need to allow for more than one event being returned (for a single
> fd) since it can be a while before the process returns.
Why do you think it returns only one event?
By the way, looking at it now, it seems sl->sl_mtx may be released before
sel_lwp_end.
> Since the size of the array passed to poll() is unlimited, you are
> allowing a user process to allocate an unbounded amount of kernel
> memory. (There is a long-standing bug about the fact we bound the
> array to RLIMIT_NOFILE, in fact we should probably process the
> array in chunks.)
Yes, but that is an old problem. We may fix it with these changes.
> In the uncontested case the signalled event could be written into
> the per-device area - saving the driver code some work.
That is why selnotify was changed to take a third 'events' argument.
However, the provided patches do not include the optimization to avoid calling
selscan/pollscan twice - this will come later, now we are discussing
> Haven't you added another mutex?
> The last version I (tried to) read relied on the driver mutex for
> one of the structures. This was acquired twice per fd.
> I think you acquire the driver mutex once, and another mutex twice.
Yes, unlike Andrew's patch, this introduces sel_info_lock. We can do this
properly now with the changes to the drivers.
In this case we cannot rely on the driver's locking, because the selector LWP
does not have information about it. We could get that via selinit(), but on the
other hand, if the driver runs at >IPL_VM it may be more optimal to use a separate