Re: Reviving SA: what is up with preempt() generating BLOCKED upcalls?
On May 10, 2008, at 9:40 PM, Bill Stouder-Studenmund wrote:
As part of reviving SA, I'm re-adding all of the kernel support code.
In doing this, I looked at re-adding the preempt(int more) code we
used to have. However, I have serious questions about it. Like why do it?
We call preempt() when we want to give the scheduler a convenient point
to run something else. In -current, we do this at some times when we
realize another thread/sub-system has requested we do this
(curcpu()->ci_want_resched set, for instance).
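As an illustrative model only (the struct and function names here are invented; the real kernel reads curcpu()->ci_want_resched directly), the check being described could be sketched as:

```c
#include <stdbool.h>

/* Hypothetical model of a per-CPU "please reschedule" flag. */
struct cpu_model {
	bool want_resched;
};

/*
 * Return true when the running thread should call preempt():
 * another thread or subsystem has asked for a reschedule.
 * The flag is consumed so we only yield once per request.
 */
bool
should_preempt(struct cpu_model *ci)
{
	if (ci->want_resched) {
		ci->want_resched = false;	/* consume the request */
		return true;
	}
	return false;
}
```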
I believe we send a BLOCKED upcall to the process because this lwp's
execution has stopped. The thing, however, is we send the upcall when
mi_switch() returns (and in the case when we actually ran a different
thread). So we send the BLOCKED upcall _after_ we were blocked, not
before/during like we do for blocking. Among other things, we send the
BLOCKED upcall after we not only hop away, but after we hop back. We
have in effect UNBLOCKED at the time when we send the BLOCKED upcall.
Further, we aren't blocked. I understand the BLOCKED upcall to be a
signal for libpthread to schedule something else on the virtual CPU we
were running on. However, a call to mi_switch() does not represent a
point at which libpthread can schedule something else on our virtual CPU - our
thread is still runnable in the eyes of the kernel scheduler, so it will
hop back to it when it decides to. And we generate an UNBLOCKED upcall
when we get back to userland. Most importantly, we don't offer a new lwp
to userland on which it can run threads.
So the net effect will be that we pull the user thread that triggered the
blocked upcall off of the (front of the) run queue, put it on the back of the
queue, then go back to running.
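That net effect amounts to a simple queue rotation. A toy model (not kernel code; the fixed-size queue here is purely for illustration):

```c
#include <string.h>

/* Toy fixed-size run queue: index 0 is the front. */
#define RQ_MAX 8

struct runqueue {
	int lwp[RQ_MAX];	/* lwp ids, front to back */
	int len;
};

/*
 * Model of the observed effect: the thread at the front of the
 * queue is pulled off and re-inserted at the back, and execution
 * simply continues.
 */
void
rotate_front_to_back(struct runqueue *rq)
{
	if (rq->len < 2)
		return;		/* nothing to rotate past */
	int head = rq->lwp[0];
	memmove(&rq->lwp[0], &rq->lwp[1], (rq->len - 1) * sizeof(int));
	rq->lwp[rq->len - 1] = head;
}
```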
I note that there are/were only two places in the code that call
preempt(0): sys_sched_yield() and midiwrite(). I really don't understand
why midiwrite() calls preempt(0), other than probably the fact that the author
wasn't sure what to do when he added the code in 2006.
I think sched_yield is the one place where we would want to do this
rescheduling. However, I think it's much cleaner on the whole kernel to
just have sched_yield manually inject a BLOCKED upcall. When we get
back to userland we will auto-generate an UNBLOCKED upcall too, so we
rattle the userland run queues.
Intercepting sched_yield() in userland is a better way to go, but this
branch is about compatibility at this point. :-)
I realize I have been somewhat thinking out loud; I now understand
the problem better than I did when I started writing this email. However, do you
have any thoughts on this?
You went from LSONPROC to LSRUN, and that shouldn't cause a BLOCKED
upcall. I think a BLOCKED upcall should be sent iff the state changed to
LSSLEEP. I also think that if a change to LSSTOP or LSSUSPENDED happens, an upcall
should not be sent, since an external party wants the lwp/proc to stop;
switching to a new one to continue execution just seems wrong.
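The proposed rule could be sketched as a predicate on the lwp's new state. The state names match the kernel's, but the enum values and the function itself are hypothetical, for illustration only:

```c
#include <stdbool.h>

/* lwp states (names as in the kernel; values here are illustrative). */
enum lwp_state {
	LSONPROC,	/* running on a CPU */
	LSRUN,		/* runnable, waiting for a CPU */
	LSSLEEP,	/* blocked waiting for an event */
	LSSTOP,		/* stopped by an external party */
	LSSUSPENDED	/* suspended by an external party */
};

/*
 * Proposed rule: send a BLOCKED upcall iff the lwp actually went to
 * sleep. LSONPROC -> LSRUN is a preemption, not a block. LSSTOP and
 * LSSUSPENDED mean an external party wants the lwp stopped, so handing
 * execution to a replacement lwp would be wrong.
 */
bool
should_send_blocked_upcall(enum lwp_state new_state)
{
	return new_state == LSSLEEP;
}
```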