Re: kern/40419 (processor sets broken on 5.99.6)
The following reply was made to PR kern/40419; it has been noted by GNATS.
From: Mindaugas Rasiukevicius <rmind%netbsd.org@localhost>
To: Andrew Doran <ad%netbsd.org@localhost>
Cc: gnats-bugs%NetBSD.org@localhost, netbsd-bugs%netbsd.org@localhost,
Subject: Re: kern/40419 (processor sets broken on 5.99.6)
Date: Wed, 21 Jan 2009 10:17:14 +0000
Andrew Doran <ad%netbsd.org@localhost> wrote:
> > > I was thinking of a function that scans all threads, with cpu_lock
> > > held, and checks to see if their l_cpu is allowed by their affinity
> > > mask, processor set or LP_BOUND flag. If not, change l_cpu (or migrate
> > > if online), then do a broadcast xcall to nullop() if there have been
> > > migrations.
> > After some thinking, I do not think it is worth it. Theoretically,
> > xc_broadcast() might still not ensure that all LWPs have migrated, e.g.
> > in a case where there are many migrating LWPs in the same run queue.
> Hmm. I can't look at the code right now. If we can have LWPs in the wrong
> runqueue after a pset/affinity change, we should move them to prevent them
> from running on that CPU after a context switch. Maybe it would be useful
> to add a syncobj_t::sobj_changecpu()?
An LWP would fail to migrate immediately only if it is in the LSRUN state
(and in the run queue) or in the LSONPROC state. In both cases the LWP would
migrate just after the next context switch, via the idle loop
(l_target_cpu != NULL).
If we really want this to be synchronous, then a simple cv_wait() in
lwp_migrate() and cv_broadcast() in sched_idle() would work. Do you see a
good reason to do this?