tech-kern archive


Re: assertion "spc->spc_migrating == NULL" failed



yamt%mwd.biglobe.ne.jp@localhost (YAMAMOTO Takashi) wrote:
> > (The soft int may vary). Looking at the sources, I see that
> > sched_nextlwp() is careful not to propose a new lwp if a migration is
> > in progress. But when this KASSERT fires we're not necessarily about to
> > switch to a new (non-idle) lwp; rather, the current lwp got woken up by
> > another CPU while it was about to switch.
> > 
> > Shouldn't
> >                         KASSERT(spc->spc_migrating == NULL);
> >                         if (l->l_target_cpu !=  NULL) { 
> >                                 spc->spc_migrating = l; 
> >                         }
> > be instead:
> >                         if (l->l_target_cpu !=  NULL) { 
> >                                 KASSERT(spc->spc_migrating == NULL);
> >                                 spc->spc_migrating = l; 
> >                         }
> > 
> > I did the above change and it seems to work, can someone confirm this is
> > correct ?
> 
> i think you're correct.

Right, due to softint / kernel preemption that assertion is incorrect.
Manuel's suggested change is correct as well, since l->l_target_cpu can
be set independently of spc->spc_migrating.

> i have the attached patch long-staying in my local tree.

I think there are more preemption points, e.g. a higher-priority LWP with
l_target_cpu set may cause a kernel preemption before the idle LWP processes
the spc_migrating LWP.  There is no harm in resetting spc_migrating in such
a case, since the previous spc_migrating LWP will be processed on the next
dispatch.

How about the attached patch?

-- 
Mindaugas
Index: kern_synch.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_synch.c,v
retrieving revision 1.284
diff -u -p -r1.284 kern_synch.c
--- kern_synch.c        2 Nov 2010 15:17:37 -0000       1.284
+++ kern_synch.c        24 Nov 2010 01:32:49 -0000
@@ -654,9 +654,13 @@ mi_switch(lwp_t *l)
                        l->l_stat = LSRUN;
                        lwp_setlock(l, spc->spc_mutex);
                        sched_enqueue(l, true);
-                       /* Handle migration case */
-                       KASSERT(spc->spc_migrating == NULL);
-                       if (l->l_target_cpu !=  NULL) {
+                       /*
+                        * Handle migration.  Note that "migrating LWP" may
+                        * be reset here, if interrupt/preemption happens
+                        * early in idle LWP.
+                        */
+                       if (l->l_target_cpu != NULL) {
+                               KASSERT((l->l_pflag & LP_INTR) == 0);
                                spc->spc_migrating = l;
                        }
                } else

