Subject: re: Moving scheduler semantics from cpu_switch() to kern_synch.c
To: matthew green <mrg@eterna.com.au>
From: Eduardo Horvath <eeh@NetBSD.org>
List: tech-kern
Date: 09/21/2006 22:00:16
On Fri, 22 Sep 2006, matthew green wrote:

>    > it just seems suboptimal to have to set a flag in every cpu_info when
>    > there is a (random) process to run.
>    
>    You would not set it in every cpu_info... the idea is that processes  
>    would be "bound" to CPUs to eliminate the cache thrash that we  
>    currently have because processes can migrate between CPUs randomly.   
>    We're talking about per-CPU run queues, here.
> 
> 
> so every process is bound to a cpu?  i guess i don't understand how
> this works to avoid cpus idling while lwps are waiting for "their"
> cpu to become free...  who runs a new process first?  right now it
> is whoever tries first.

ISTR the Solaris implementation has a set of run queues per CPU.  When
a CPU wants to run something it pops an LWP off its own run queues.  If
its run queues are completely empty, it will look at the CPUs next door
and see if they have anything runnable.  There's a bit more to it, but
that's the general idea.
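
Roughly, the idea looks something like the sketch below.  This is a
made-up illustration, not the actual Solaris (or proposed NetBSD)
code; the names are all invented, and locking, priorities, and the
real victim-selection policy are omitted:

#include <stddef.h>

#define NCPU 4

struct lwp_entry {
	struct lwp_entry *next;
	/* ... scheduling state ... */
};

struct cpu_runq {
	struct lwp_entry *head;	/* FIFO of runnable LWPs, lock omitted */
};

static struct cpu_runq runq[NCPU];

/* Pop the first runnable LWP off one queue, if any. */
static struct lwp_entry *
runq_pop(struct cpu_runq *q)
{
	struct lwp_entry *l = q->head;

	if (l != NULL)
		q->head = l->next;
	return l;
}

/*
 * Pick the next LWP for cpu `me': prefer the local queue,
 * otherwise scan the other CPUs' queues and steal.
 */
struct lwp_entry *
sched_next(int me)
{
	struct lwp_entry *l;
	int i;

	if ((l = runq_pop(&runq[me])) != NULL)
		return l;		/* fast path: local work */

	for (i = 1; i < NCPU; i++) {
		int victim = (me + i) % NCPU;

		if ((l = runq_pop(&runq[victim])) != NULL)
			return l;	/* stole from a neighbour */
	}
	return NULL;			/* nothing runnable: go idle */
}

The common case never touches another CPU's queue, so an LWP tends to
keep running on the CPU whose cache already holds its working set,
which is the cache-thrash point above; idle CPUs still pick up stray
work instead of spinning while LWPs wait for "their" CPU.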

Eduardo