Subject: Re: sched_m2 is too unfair
To: None <rmind@NetBSD.org>
From: YAMAMOTO Takashi <yamt@mwd.biglobe.ne.jp>
List: netbsd-bugs
Date: 10/31/2007 15:11:59
> > >Number: 37245
> > >Category: kern
> > >Synopsis: sched_m2 is too unfair
> > <...>
> > see the following test program and an output of top.
> > there seems to be two problems, at least:
> >
> > - cpu-hogging threads never move between cpus unless
> > there are idle cpus.
>
> Yes, this case is known. The problem is with CPU-hogging threads, which
> never sleep - they dance only with sched_dequeue/sched_enqueue via
> preempt/mi_switch, and thus never get sched_takecpu. There are a few more cases:
> - Yielding;
> - STOPPED -> RUN or SUSPENDED -> RUN transitions;
> For the first case, I am thinking of calling sched_takecpu() in setrunnable().
>
> > - balance cpus periodically.
>
> I was thinking about calling sched_takecpu() in preempt() when that is
> necessary, according to the data collected by sched_balance().
> Did you mean something else?
i meant making a periodic investigator (like sched_balance) move lwps
when it detects imbalance.
sprinkling sched_takecpu calls should work as well.
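as an illustration of what i mean, here is a user-space sketch of such an
investigator (this is not the actual sched_balance code; all names and the
threshold are hypothetical): it scans per-cpu run-queue lengths and, when the
spread between the busiest and the idlest cpu exceeds a threshold, picks a
source/destination pair for migration.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical per-CPU load record; a real balancer would look at the
 * scheduler's own run queues instead.
 */
struct cpu_load {
	int queued;	/* number of runnable lwps on this CPU's run queue */
};

/*
 * Pick a (source, destination) pair for lwp migration.  Returns 1 and
 * fills *src/*dst when the imbalance exceeds `threshold`, 0 otherwise.
 * A periodic callout would run this and then move one lwp from the
 * busiest queue to the idlest one.
 */
int
balance_pick(const struct cpu_load *cpus, size_t ncpu, int threshold,
    size_t *src, size_t *dst)
{
	size_t hi = 0, lo = 0;

	for (size_t i = 1; i < ncpu; i++) {
		if (cpus[i].queued > cpus[hi].queued)
			hi = i;
		if (cpus[i].queued < cpus[lo].queued)
			lo = i;
	}
	if (cpus[hi].queued - cpus[lo].queued <= threshold)
		return 0;	/* load is even enough; do nothing */
	*src = hi;
	*dst = lo;
	return 1;
}
```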
> > - there are threads which get completely starved.
> > i guess they have never been run after fork and their sl_lrtime
> > are still 0.
> > <...>
> > - make sched_enqueue initialize l_lrtime properly for new lwps.
>
> I think changing the else case in sched_enqueue to this would be correct:
> ...
> } else if (sil->sl_lrtime == 0)
> sil->sl_lrtime = hardclock_ticks;
i think it works.
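to make the effect of that guard concrete, here is a user-space sketch
(field names mirror the quoted snippet, but `hardclock_ticks` is faked as a
parameter and the surrounding else-chain is omitted): a freshly forked lwp
gets stamped on its first enqueue, so it no longer looks infinitely starved,
while later enqueues leave the timestamp alone.

```c
#include <assert.h>

/* Minimal stand-in for the lwp's per-scheduler data. */
struct sched_info {
	int sl_lrtime;	/* last time the lwp ran, in hardclock ticks */
};

/*
 * Sketch of the proposed sched_enqueue change: stamp sl_lrtime only
 * when it is still 0, i.e. the lwp has never run yet.
 */
void
enqueue_stamp(struct sched_info *sil, int hardclock_ticks)
{
	if (sil->sl_lrtime == 0)
		sil->sl_lrtime = hardclock_ticks;
}
```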
YAMAMOTO Takashi