Port-powerpc archive


README: Scheduler locking changes coming Very Very Soon



Hi folks...

This is a somewhat short-notice heads-up.  There are some locking
changes that I'll be committing Very Very Soon (possibly tonight)
that change the locking protocol for context switches.  This is
for multiprocessor support.

The new locking protocol is:

        - mi_switch() is always called at splhigh and with the
          sched_lock held.

        - mi_switch() returns at splhigh and with the
          sched_lock not held.

        - cpu_switch() is always called at splhigh and with
          the sched_lock held.

        - cpu_switch() is responsible for releasing the
          sched_lock before returning.

        - Idle loop must release sched_lock before lowering
          spl and reacquire it after raising spl.

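The protocol above can be sketched in C, using a pthread mutex and a
couple of flags as stand-ins for the sched_lock and the spl level.
This is a toy model, not the kernel API: the splhigh/sched_lock/
sched_unlock helpers and flag names here are illustrative.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy stand-ins for the kernel primitives; apart from mi_switch and
 * cpu_switch themselves, all names here are illustrative. */
static pthread_mutex_t sched_mtx = PTHREAD_MUTEX_INITIALIZER;
static bool at_splhigh;        /* models "interrupts blocked" */
static bool sched_lock_held;

static void splhigh(void)      { at_splhigh = true; }
static void sched_lock(void)   { pthread_mutex_lock(&sched_mtx);
                                 sched_lock_held = true; }
static void sched_unlock(void) { sched_lock_held = false;
                                 pthread_mutex_unlock(&sched_mtx); }

/* cpu_switch(): entered at splhigh with sched_lock held; it is
 * responsible for releasing sched_lock before it returns. */
static void cpu_switch(void)
{
    assert(at_splhigh && sched_lock_held);
    /* ... scan the run queues, pick a new process ... */
    sched_unlock();            /* MD code drops the lock itself */
    /* ... perform the actual context switch ... */
}

/* mi_switch(): called at splhigh with sched_lock held; returns at
 * splhigh with sched_lock released (cpu_switch dropped it). */
static void mi_switch(void)
{
    assert(at_splhigh && sched_lock_held);
    cpu_switch();
    assert(at_splhigh && !sched_lock_held);
}
```

A caller obeys the protocol by raising spl and taking the lock before
the call: splhigh(); sched_lock(); mi_switch(); -- and on return it is
still at splhigh but no longer holds the lock.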
It's mostly quite simple, and I've made changes for most of the ports
already (well, I still have pc532, sparc, and vax to do, but those should
be easy).  I will NOT be making the changes to the arm32, powerpc, and
sh3 ports because it wasn't terribly straightforward for me to do so.
PORTMASTERS -- YOU MUST MAKE THESE CHANGES.

Note that in the LOCKDEBUG case, your locore *MUST* manipulate the
sched_lock via two convenience functions: sched_lock_idle() and
sched_unlock_idle().  You must also adjust the assumptions made
about interrupts in your locore.  Actually, it's simpler now:
cpu_switch() used to be callable with interrupts either blocked or
unblocked, but now it is always entered with interrupts blocked.
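In the idle loop, the ordering matters: drop the lock before opening
interrupts, and retake it only after blocking them again (this is what
the Alpha diff below does around the swpipl calls).  A hedged C sketch
of one pass -- sched_lock_idle()/sched_unlock_idle() are the real
function names from above, but the spl helpers and the queue flag are
a toy model for illustration:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy model: only sched_lock_idle()/sched_unlock_idle() are the real
 * names; the mutex, flags, and spl helpers are illustrative. */
static pthread_mutex_t sched_mtx = PTHREAD_MUTEX_INITIALIZER;
static bool intr_blocked = true;   /* the idle loop is entered at splhigh */
static int  sched_whichqs;         /* nonzero once something is runnable */

static void sched_unlock_idle(void) { pthread_mutex_unlock(&sched_mtx); }
static void sched_lock_idle(void)   { pthread_mutex_lock(&sched_mtx); }
static void spl0(void)              { intr_blocked = false; }
static void splhigh(void)           { intr_blocked = true; }

/* One pass of the idle loop: entered at splhigh with sched_lock held,
 * returns at splhigh with sched_lock held, ready to rescan the queues. */
static void idle_once(void)
{
    sched_unlock_idle();           /* release sched_lock first... */
    spl0();                        /* ...then enable interrupts */
    while (sched_whichqs == 0)
        sched_whichqs = 1;         /* toy: pretend an interrupt queued work */
    splhigh();                     /* block interrupts first... */
    sched_lock_idle();             /* ...then reacquire sched_lock */
}
```

Releasing before spl0 and reacquiring after splhigh ensures the lock is
never held while interrupts are open in the idle loop.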

Attached below is the diff needed to make this all happen on the Alpha.
It's quite simple.  Once the MI changes hit the tree, please update your
ports ASAP, as they will be broken until you do.

Index: locore.s
===================================================================
RCS file: /cvsroot/syssrc/sys/arch/alpha/alpha/locore.s,v
retrieving revision 1.78
diff -c -r1.78 locore.s
*** locore.s    2000/07/19 14:00:24     1.78
--- locore.s    2000/08/18 01:06:14
***************
*** 68,73 ****
--- 68,74 ----
  
  #include "opt_ddb.h"
  #include "opt_multiprocessor.h"
+ #include "opt_lockdebug.h"
  #include "opt_compat_linux.h"
  
  #ifdef COMPAT_LINUX
***************
*** 781,793 ****
        /* Note: GET_CURPROC clobbers v0, t0, t8...t11. */
        GET_CURPROC
        stq     zero, 0(v0)                     /* curproc <- NULL for stats */
        mov     zero, a0                        /* enable all interrupts */
        call_pal PAL_OSF1_swpipl
  2:    ldl     t0, sched_whichqs               /* look for non-empty queue */
        beq     t0, 2b
        ldiq    a0, ALPHA_PSL_IPL_HIGH          /* disable all interrupts */
        call_pal PAL_OSF1_swpipl
!       jmp     zero, cpu_switch_queuescan      /* jump back into the fray */
        END(idle)
  
  /*
--- 782,800 ----
        /* Note: GET_CURPROC clobbers v0, t0, t8...t11. */
        GET_CURPROC
        stq     zero, 0(v0)                     /* curproc <- NULL for stats */
+ #if defined(MULTIPROCESSOR) || defined(LOCKDEBUG)
+       CALL(sched_unlock_idle)                 /* release sched_lock */
+ #endif
        mov     zero, a0                        /* enable all interrupts */
        call_pal PAL_OSF1_swpipl
  2:    ldl     t0, sched_whichqs               /* look for non-empty queue */
        beq     t0, 2b
        ldiq    a0, ALPHA_PSL_IPL_HIGH          /* disable all interrupts */
        call_pal PAL_OSF1_swpipl
! #if defined(MULTIPROCESSOR) || defined(LOCKDEBUG)
!       CALL(sched_lock_idle)                   /* acquire sched_lock */
! #endif
!       jmp     zero, cpu_switch_queuescan      /* jump back into the fire */
        END(idle)
  
  /*
***************
*** 821,828 ****
        ldl     t0, sched_whichqs               /* look for non-empty queue */
        beq     t0, idle                        /* and if none, go idle */
  
-       ldiq    a0, ALPHA_PSL_IPL_HIGH          /* disable all interrupts */
-       call_pal PAL_OSF1_swpipl
  cpu_switch_queuescan:
        br      pv, 1f
  1:    LDGP(pv)
--- 828,833 ----
***************
*** 863,868 ****
--- 868,880 ----
  5:
        mov     t4, s2                          /* save new proc */
        ldq     s3, P_MD_PCBPADDR(s2)           /* save new pcbpaddr */
+ #if defined(MULTIPROCESSOR) || defined(LOCKDEBUG)
+       /*
+        * Done mucking with the run queues, release the
+        * scheduler lock, but keep interrupts out.
+        */
+       CALL(sched_unlock_idle)
+ #endif
  
        /*
         * Check to see if we're switching to ourself.  If we are,
***************
*** 874,880 ****
         * saved it.  Also note that switch_exit() ensures that
         * s0 is clear before jumping here to find a new process.
         */
!       cmpeq   s0, t4, t0                      /* oldproc == newproc? */
        bne     t0, 7f                          /* Yes!  Skip! */
  
        /*
--- 886,892 ----
         * saved it.  Also note that switch_exit() ensures that
         * s0 is clear before jumping here to find a new process.
         */
!       cmpeq   s0, s2, t0                      /* oldproc == newproc? */
        bne     t0, 7f                          /* Yes!  Skip! */
  
        /*
***************
*** 1038,1043 ****
--- 1050,1059 ----
        /* Schedule the vmspace and stack to be freed. */
        mov     s2, a0
        CALL(exit2)
+ 
+ #if defined(MULTIPROCESSOR) || defined(LOCKDEBUG)
+       CALL(sched_lock_idle)                   /* acquire sched_lock */
+ #endif
  
        /*
         * Now jump back into the middle of cpu_switch().  Note that

-- 
        -- Jason R. Thorpe <thorpej%zembu.com@localhost>


