Subject: Re: Further works on yamt-idlelwp
To: Andrew Doran <ad@netbsd.org>
From: None <jonathan@dsg.stanford.edu>
List: tech-kern
Date: 03/05/2007 17:54:48
In message <20070305165043.GI21850@hairylemon.org>, Andrew Doran writes
>On Mon, Mar 05, 2007 at 02:27:16AM +0200, Mindaugas R. wrote:
>> 1. Currently there are two general functions, sched_lock() and sched_unlock(),
>> which lock the run queue (and the whole scheduler). From now on there will be
>> a run queue per CPU, so this needs to change.
>> a) Add a kmutex_t in struct cpu_data (which is general MI data) to serve as
>> a generic lock for the run queue.
>> b) Add a kmutex_t in the scheduler-specific area and move sched_lock() and
>> sched_unlock() into the scheduler module. This would be more flexible, IMHO.
>> In any case, the prototype would probably change to:
>> static inline void sched_lock(struct cpu_info *ci, const int heldmutex);
>> Any other suggestions?
>
>I don't like the idea of having two locks per CPU; it complicates things
>somewhat and increases overhead. Is there a particular reason you want to do
>that? One change I have made but not yet checked in is to rename
>sched_lock/unlock to spc_lock/unlock and add a cpu_info argument, as you
>mention. The idea is that there would be very little remaining global
>state, if any, meaning no global run queue.
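To make sure I follow, here's a toy userland model of the per-CPU
locking you describe, with pthreads standing in for kmutex_t. The
struct layout and names here are my guesses, not anyone's actual code:

	#include <pthread.h>
	#include <sys/queue.h>

	struct lwp_model {
		TAILQ_ENTRY(lwp_model) l_runq;
	};

	struct cpu_info {
		pthread_mutex_t ci_runq_lock;	  /* stands in for kmutex_t */
		TAILQ_HEAD(, lwp_model) ci_runq;  /* this CPU's run queue */
		unsigned int ci_nrun;		  /* jobs on this queue */
	};
	/* (initialization of the mutexes and queues omitted) */

	static inline void
	spc_lock(struct cpu_info *ci)
	{
		pthread_mutex_lock(&ci->ci_runq_lock);
	}

	static inline void
	spc_unlock(struct cpu_info *ci)
	{
		pthread_mutex_unlock(&ci->ci_runq_lock);
	}

So far so good: each CPU's queue is protected by its own lock, and
nothing global is left. But: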
If there's no global run queue at all, how do you rebalance across
CPUs? A textbook example might be K CPUs and K+1 long-running,
compute-bound jobs (threads, processes, whatever).
With a fair scheduler, all jobs make progress at the same long-term
rate. I don't see how to do that without occasional rebalancing.
(Let's leave affinity out of it for now.)
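Concretely, I'd expect something like the following periodic pass to
be needed, continuing the toy model above (again, the names are mine,
not from any tree). It migrates one job from the busiest queue to the
idlest, taking the two per-CPU locks in index order so that concurrent
rebalancers can't deadlock:

	#define NCPU_MODEL 4
	static struct cpu_info cpus[NCPU_MODEL];

	static void
	rebalance(void)
	{
		unsigned int busy = 0, idle = 0, i;
		struct lwp_model *l;

		/* Unlocked scan; a stale snapshot is fine for a heuristic. */
		for (i = 1; i < NCPU_MODEL; i++) {
			if (cpus[i].ci_nrun > cpus[busy].ci_nrun)
				busy = i;
			if (cpus[i].ci_nrun < cpus[idle].ci_nrun)
				idle = i;
		}
		if (cpus[busy].ci_nrun <= cpus[idle].ci_nrun + 1)
			return;	/* balanced enough; also covers busy == idle */

		/* Fixed lock order (lower index first) prevents deadlock. */
		spc_lock(&cpus[busy < idle ? busy : idle]);
		spc_lock(&cpus[busy < idle ? idle : busy]);

		l = TAILQ_FIRST(&cpus[busy].ci_runq);
		if (l != NULL) {
			TAILQ_REMOVE(&cpus[busy].ci_runq, l, l_runq);
			cpus[busy].ci_nrun--;
			TAILQ_INSERT_TAIL(&cpus[idle].ci_runq, l, l_runq);
			cpus[idle].ci_nrun++;
		}

		spc_unlock(&cpus[busy < idle ? idle : busy]);
		spc_unlock(&cpus[busy < idle ? busy : idle]);
	}

With K+1 jobs on K CPUs, some such pass is what lets the odd job out
rotate among the CPUs instead of starving on one of them.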
Do you have some other scheme in mind?