Subject: Re: alternate rough SA patch (works on SMP)
To: Stephan Uphoff <firstname.lastname@example.org>
From: Christian Limpach <email@example.com>
Date: 07/01/2003 17:55:14
Quoting Stephan Uphoff <firstname.lastname@example.org>:
> A working scheduler balancing per CPU run queues while trying to
> improve locality would be highly desirable.
> ( CPU affinity masks would also be great )
I have implemented CPU affinity masks now. The per CPU run queues didn't
prove flexible enough. The affinity-mask approach also solves the
starvation possibility you mentioned. I now have one set of run queues
again and a per CPU sched_whichqs. Each
lwp has an l_cpumask and cpu_switch checks the mask when considering an
lwp. It's possible to update the mask while the lwp is on the runqueue.
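To make the scheme concrete, here is a minimal user-space sketch of the mask check described above. It models one shared run queue where each lwp carries an affinity mask consulted at selection time; the structure and function names (`l_cpumask`, `pick_lwp`) are illustrative only, not the actual NetBSD kernel code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified model of the idea above: each lwp has an
 * affinity mask, and the CPU picking its next lwp skips any lwp whose
 * mask does not include that CPU.  Not the real struct lwp. */
struct lwp {
	uint32_t l_cpumask;	/* bit n set => lwp may run on CPU n */
	int	 l_id;
};

/* Scan the run queue in order; return the first runnable lwp whose
 * mask allows this CPU, or NULL if none qualifies.  Because the mask
 * is checked only here, it can safely be updated while the lwp sits
 * on the queue, as noted above. */
static struct lwp *
pick_lwp(struct lwp *runq[], size_t n, unsigned cpu)
{
	for (size_t i = 0; i < n; i++) {
		if (runq[i] != NULL &&
		    (runq[i]->l_cpumask & (1u << cpu)) != 0)
			return runq[i];
	}
	return NULL;
}
```

The point of the sketch is that affinity is enforced purely at selection time, so a single set of run queues suffices.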
What would the criteria for locality be? Would it be reasonable to
evaluate the criteria at setrunqueue time? At cpu_switch? Or
asynchronously? I guess it also depends on the criteria.
> Any fix for the low memory UP problems will probably automatically
> fix the SMP problems since they are closely related.
> (At least this is how it worked out with my patch)
> My feeling is that you started from the wrong end of the problem.
hmm, isn't one SMP-specific problem:
- selwakeup puts all lwp's on the run queue
- on an idle system two lwp's start running concurrently
- they both try to grab the vp and want to do SA bookkeeping
It's my understanding that your patch deals with this by putting the lwp's
which don't get on the vp back to sleep in your sa_vp_repossess and then
running them sequentially from sa_vp_donate.
What I'd like to do is prevent the lwp's from running concurrently.
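One way to picture the "prevent" approach is a single ownership flag on the vp that only one woken lwp can win; the losers would be put back to sleep rather than run concurrently. This is only an illustrative sketch using C11 atomics, not the actual SA code or locking primitives.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative only: a test-and-set gate standing in for vp ownership.
 * The first lwp to call try_grab_vp() wins and may do SA bookkeeping;
 * any concurrent caller loses and would go back to sleep. */
static atomic_flag vp_owned = ATOMIC_FLAG_INIT;

/* Returns true iff this caller won the vp. */
static bool
try_grab_vp(void)
{
	return !atomic_flag_test_and_set(&vp_owned);
}

/* Winner releases the vp when its bookkeeping is done, letting the
 * next lwp in turn acquire it. */
static void
release_vp(void)
{
	atomic_flag_clear(&vp_owned);
}
```

Under this picture, the difference between the two approaches is only where the losers wait: after waking (react) or before being made runnable at all (prevent).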
Are there other solutions? Isn't it either react to the problem or prevent it?
Christian Limpach <email@example.com>