tech-kern archive


Reviving SA: what is up with preempt() generating BLOCKED upcalls?

As part of reviving SA, I'm re-adding all of the kernel infrastructure we 
ripped out.

In doing this, I looked at re-adding the preempt(int more) code we had. 
However, I have serious questions about it, starting with: why do it at all?

We call preempt() when we want to give the scheduler a convenient moment 
to run something else. In -current, we do this at points where we notice 
another thread/sub-system has requested it (curcpu()->ci_want_resched 
being set, for instance).

I believe we send a BLOCKED upcall to the process because this thread of
execution has stopped. The problem, however, is that we send the upcall
when mi_switch() returns (and only in the case where we actually ran a
different thread). So we send the BLOCKED upcall _after_ we were blocked,
not before/during as we do for genuine blocking. Among other things, we
send the BLOCKED upcall not only after we hop away, but after we hop back.
So we have, in effect, UNBLOCKED at the time when we send the BLOCKED
upcall.

Further, we aren't blocked. I understand the BLOCKED upcall to be a way 
for libpthread to schedule something else on the virtual CPU that we were 
running on. However, a call to mi_switch() does not represent a point at 
which libpthread can schedule something else on our virtual CPU - our 
thread is still runnable in the eyes of the kernel scheduler, so it will 
hop back to it when it decides to. And we generate an UNBLOCKED upcall 
when we get back to userland. Most importantly, we don't offer a new lwp 
to userland on which it can run threads.

So the net effect is that we pull the yielding user thread off the (front 
of the) run queue, put it on the back of the run queue, then go back to 
running.

I note that there are/were only two places in the code that call 
preempt(0): sys_sched_yield() and midiwrite(). I really don't understand 
why midiwrite() calls preempt(0), other than, probably, that chap wasn't 
sure what to do when he added the code in 2006.

I think sched_yield is the one place where we would want to do this 
rescheduling. However, I think it's much cleaner for the kernel as a whole 
to have sched_yield manually inject a BLOCKED upcall. When we get back to 
userland we will auto-generate an UNBLOCKED upcall too, so we will rattle 
the userland run queues.

Intercepting sched_yield() in userland is a better way to go, but this 
branch is about compatibility at this point. :-)

I realize I have been somewhat thinking out loud; I now understand the 
problem better than I did when I started writing this email. That said, 
does anyone have any thoughts on this?

Take care,


