Subject: Re: Moving scheduler semantics from cpu_switch() to kern_synch.c
To: Daniel Sieger <dsieger@TechFak.Uni-Bielefeld.DE>
From: Matt Thomas <>
List: tech-kern
Date: 09/19/2006 07:53:30
On Sep 18, 2006, at 3:23 PM, Daniel Sieger wrote:

> Hi,
> the attached two diffs move some of the scheduler semantics from
> cpu_switch() to kern_synch.c. The function nextlwp() gets called from
> mi_switch() if its second argument is NULL. It returns the next
> runnable LWP from the highest priority runqueue or calls cpu_idle()
> if all queues are empty. Seems to work fine on i386 (I'm sorry, but I
> have no other archs to test it). Any suggestions/comments are more
> than welcome.
> Regards,
> Daniel

kern_synch.c should not have a cpu_idle() implementation since that's
MD code.  You don't seem to deal with __HAVE_BIGENDIAN_BITOPS.

I'd rather cpu_idle() do the looping.  Shouldn't cpu_idle() also
unset curlwp?  If so, I'm not sure I like returning to the
scheduler on an idle stack with no curlwp.  Instead, I think we
should add a member to cpu_info which indicates that cpu_idle
should continue to loop; when nonzero, it means there may be a
new lwp to run.

void cpu_idle(struct lwp *(*getnext)(void)) __attribute__((__noreturn__));

cpu_idle would call back its first argument to get the next lwp to
run.  When it returns a new lwp, that lwp will have already been
removed from its runqueue.

cpu_switch should die; either we call cpu_switchto or cpu_idle.