Subject: possible new "simple_lock: locking against myself" bug on dual-CPU AS4000
To: NetBSD port-alpha List <port-alpha@NetBSD.org>
From: Greg A. Woods <woods@weird.com>
List: port-alpha
Date: 10/17/2005 19:32:07
A week or so ago my Dodge-box AS4000 (wide AS4100 backplane, but with
only two CPU connectors installed) appeared to die: iod0 and iod1
failed their power-up tests and the machine went into an endless
machine-check loop before the SRM prompt ever appeared.  So I switched
over to a spare AS4000 Durango chassis (traditional narrow AS4000
backplane, but with only one saddle installed) by moving all the
cards, including CPUs and RAM, into it.

However, this second AS4000 failed to run the very same 1.6.x MP
kernel that had been running on the old machine, always giving me the
following crash shortly after init started (usually, but not always,
during fsck):

simple_lock: locking against myself
lock: 0xfffffc000078a398, currently at: /work/woods/m-NetBSD-1.6/sys/arch/alpha/compile/BUILDING.MP/../../../../arch/alpha/alpha/pmap.c:3983
on cpu 1
last locked: /work/woods/m-NetBSD-1.6/sys/arch/alpha/compile/BUILDING.MP/../../../../arch/alpha/alpha/pmap.c:3983
last unlocked: /work/woods/m-NetBSD-1.6/sys/arch/alpha/compile/BUILDING.MP/../../../../arch/alpha/alpha/pmap.c:3950
alpha trace requires known PC =eject=
Stopped in pid 41 (fsck_ffs) at cpu_Debugger+0x4:       ret     zero,(ra)
db{1}> call simple_lock_dump
all simple locks:
0xfffffc000078c560 CPU 0 /work/woods/m-NetBSD-1.6/sys/arch/alpha/compile/BUILDING.MP/../../../../arch/alpha/alpha/pmap.c:1432
0xfffffc000078a398 CPU 1 /work/woods/m-NetBSD-1.6/sys/arch/alpha/compile/BUILDING.MP/../../../../arch/alpha/alpha/pmap.c:3983
       0x6
db{1}> trace
cpu_Debugger() at cpu_Debugger+0x4
_simple_lock() at _simple_lock+0x140
pmap_do_tlb_shootdown() at pmap_do_tlb_shootdown+0x90
alpha_ipi_process() at alpha_ipi_process+0xc4
interrupt() at interrupt+0x90
XentInt() at XentInt+0x1c
--- interrupt (from ipl 5) ---
_simple_lock() at _simple_lock+0x358
pmap_do_tlb_shootdown() at pmap_do_tlb_shootdown+0x90
alpha_ipi_process() at alpha_ipi_process+0xc4
interrupt() at interrupt+0x90
XentInt() at XentInt+0x1c
--- interrupt (from ipl 0) ---
_lockmgr() at _lockmgr+0x1018
_kernel_proc_lock() at _kernel_proc_lock+0x6c
syscall_plain() at syscall_plain+0x38
XentSys() at XentSys+0x5c
--- syscall (198) ---
--- user mode ---
db{1}> 
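
Reading the trace from the bottom up: fsck_ffs entered the kernel via
syscall 198 and took the kernel lock; a TLB shootdown IPI then came in
(from IPL 0), and while that first pmap_do_tlb_shootdown() was still
inside _simple_lock() a second IPI came in (from IPL 5) and re-entered
the same path on the same CPU, tripping over the lock the first pass
had just taken.  A minimal sketch of that shape of recursion (the
handler name is hypothetical, and this is heavily simplified from
whatever the real pmap_do_tlb_shootdown() does):

	/*
	 * Sketch only: if IPIs are not blocked while this handler
	 * runs, a second IPI arriving in the window where the queue
	 * lock is held re-enters the handler on the same CPU and
	 * spins on a lock that CPU already holds -- exactly what
	 * LOCKDEBUG reports as "locking against myself".
	 */
	void
	ipi_tlb_shootdown(struct pmap_tlb_shootdown_q *pq)
	{
		simple_lock(&pq->pq_slock);	/* the pmap.c:3983 site */
		/* ... process the shootdown job queue ... */
		/* <-- a second IPI delivered here deadlocks */
		simple_unlock(&pq->pq_slock);
	}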


I don't think this is a new bug: according to my notes this isn't the
first time I've seen this particular pair of "last locked"/"last
unlocked" lines in a crash, though at the time I don't think I had
LOCKDEBUG enabled, or at least I didn't know to call
simple_lock_dump().
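
(For anyone puzzling over the output above: a LOCKDEBUG kernel extends
each simple lock with bookkeeping along the lines of the following
sketch -- the field names are from memory and may not match sys/lock.h
exactly -- and that is where the "last locked"/"last unlocked"
file:line pairs and the holding CPU number come from, and what
simple_lock_dump() walks.)

	/* rough sketch of LOCKDEBUG's per-lock record, from memory */
	struct simplelock {
		__volatile int lock_data;	/* the lock word itself */
		const char *lock_file;		/* "last locked" file */
		const char *unlock_file;	/* "last unlocked" file */
		short lock_line;		/* "last locked" line */
		short unlock_line;		/* "last unlocked" line */
		cpuid_t lock_holder;		/* CPU currently holding it */
		/* plus a list entry so held locks can be dumped */
	};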

It was also not fixed by the recent PMAP_NO_LAZY_LEV1MAP changes (see
the patch below).

I don't think this has been reported before, and certainly not by me, so
does anyone think this is worth submitting as a PR?

That same AS4000 did run the non-MP kernel just fine for about a week,
though, so I don't think the crash was caused by faulty hardware.

The weird thing is that I can't explain why the same pair of CPUs in a
different backplane would suddenly fail to run the same kernel, unless
of course there really is (or was) some hardware problem with one of
the CPUs.

If I found another pair of AS4x00 memory boards I could probably test
this again, but for now that machine is powered off and RAM-less.
Instead I'm now successfully running this same kernel on a "new-to-me"
AS4100 with 3 CPUs: current uptime is 17 hours (including an
/etc/daily et al. run), and I have also done several "nbmake -j 6"
kernel builds (GENERIC.MP in 12.5 minutes, including .depend) without
incident.


For the sake of matching up the line numbers, etc., note that my
pmap.c already contains the following changes and pullups:

Index: sys/arch/alpha/alpha/pmap.c
===================================================================
RCS file: /cvs/master/m-NetBSD/main/src/sys/arch/alpha/alpha/pmap.c,v
retrieving revision 1.191.8.1
diff -u -r1.191.8.1 pmap.c
--- sys/arch/alpha/alpha/pmap.c	24 Nov 2002 15:38:39 -0000	1.191.8.1
+++ sys/arch/alpha/alpha/pmap.c	7 Oct 2005 17:13:44 -0000
@@ -1,5 +1,8 @@
 /* $NetBSD: pmap.c,v 1.191.8.1 2002/11/24 15:38:39 tron Exp $ */
 
+/* pulled up: */
+/* $NetBSD: pmap.c,v 1.211 2005/07/26 04:11:53 thorpej Exp $ */
+
 /*-
  * Copyright (c) 1998, 1999, 2000, 2001 The NetBSD Foundation, Inc.
  * All rights reserved.
@@ -192,7 +195,11 @@
 #define	PDB_PVDUMP	0x8000
 
 int debugmap = 0;
+# ifdef PDB_DEFAULT
+int pmapdebug = PDB_DEFAULT;
+# else
 int pmapdebug = PDB_PARANOIA;
+# endif
 #endif
 
 /*
@@ -532,6 +539,17 @@
 int	pmap_physpage_delref(void *);
 
 /*
+ * Define PMAP_NO_LAZY_LEV1MAP in order to have a lev1map allocated
+ * in pmap_create(), rather than when the first mapping is entered.
+ * This causes pmaps to use an extra page of memory if no mappings
+ * are entered in them, but in practice this is probably not going
+ * to be a problem, and it allows us to avoid locking pmaps in
+ * pmap_activate(), which in turn allows us to avoid a deadlock with
+ * sched_lock via cpu_switch().
+ */
+#define	PMAP_NO_LAZY_LEV1MAP
+
+/*
  * PMAP_ISACTIVE{,_TEST}:
  *
  *	Check to see if a pmap is active on the current processor.
@@ -828,6 +846,13 @@
 #endif
 	lev3mapsize = roundup(lev3mapsize, NPTEPG);
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("kernel_lev3mapsize = 0x%lx(%lu)\n",
+		       (unsigned long) lev3mapsize,
+		       (unsigned long) lev3mapsize);
+#endif
+
 	/*
 	 * Initialize `FYI' variables.  Note we're relying on
 	 * the fact that BSEARCH sorts the vm_physmem[] array
@@ -837,10 +862,12 @@
 	avail_end = ptoa(vm_physmem[vm_nphysseg - 1].end);
 	virtual_end = VM_MIN_KERNEL_ADDRESS + lev3mapsize * PAGE_SIZE;
 
-#if 0
-	printf("avail_start = 0x%lx\n", avail_start);
-	printf("avail_end = 0x%lx\n", avail_end);
-	printf("virtual_end = 0x%lx\n", virtual_end);
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP) {
+		printf("avail_start = 0x%lx\n", avail_start);
+		printf("avail_end = 0x%lx\n", avail_end);
+		printf("virtual_end = 0x%lx\n", virtual_end);
+	}
 #endif
 
 	/*
@@ -851,15 +878,35 @@
 	kernel_lev1map = (pt_entry_t *)
 	    uvm_pageboot_alloc(sizeof(pt_entry_t) * NPTEPG);
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("kernel_lev1map = 0x%lx\n", (unsigned long) kernel_lev1map);
+#endif
+
 	/*
 	 * Allocate a level 2 PTE table for the kernel.
 	 * These must map all of the level3 PTEs.
 	 * IF THIS IS NOT A MULTIPLE OF NBPG, ALL WILL GO TO HELL.
 	 */
 	lev2mapsize = roundup(howmany(lev3mapsize, NPTEPG), NPTEPG);
+
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("kernel_lev2mapsize = 0x%lx(%lu), NPTEPG = %lu, sizeof(pt_entry_t) = %lu\n",
+		       (unsigned long) lev2mapsize,
+		       (unsigned long) lev2mapsize,
+		       (unsigned long) NPTEPG,
+		       (unsigned long) sizeof(pt_entry_t));
+#endif
+
 	lev2map = (pt_entry_t *)
 	    uvm_pageboot_alloc(sizeof(pt_entry_t) * lev2mapsize);
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("kernel_lev2map = 0x%lx\n", (unsigned long) lev2map);
+#endif
+
 	/*
 	 * Allocate a level 3 PTE table for the kernel.
 	 * Contains lev3mapsize PTEs.
@@ -867,6 +914,11 @@
 	lev3map = (pt_entry_t *)
 	    uvm_pageboot_alloc(sizeof(pt_entry_t) * lev3mapsize);
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("kernel_lev3map = 0x%lx\n", (unsigned long) lev3map);
+#endif
+
 	/*
 	 * Set up level 1 page table
 	 */
@@ -880,6 +932,11 @@
 		    (i*PAGE_SIZE*NPTEPG*NPTEPG))] = pte;
 	}
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("level 2 PTE pages mapped\n");
+#endif
+
 	/* Map the virtual page table */
 	pte = (ALPHA_K0SEG_TO_PHYS((vaddr_t)kernel_lev1map) >> PGSHIFT)
 	    << PG_SHIFT;
@@ -920,9 +977,19 @@
 		    (i*PAGE_SIZE*NPTEPG))] = pte;
 	}
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("level 2 page table initialized\n");
+#endif
+
 	/* Initialize the pmap_growkernel_slock. */
 	simple_lock_init(&pmap_growkernel_slock);
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("pmap_growkernel_slock initialized\n");
+#endif
+
 	/*
 	 * Set up level three page table (lev3map)
 	 */
@@ -944,6 +1011,11 @@
 
 	TAILQ_INIT(&pmap_all_pmaps);
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("pmap pools and list initialized\n");
+#endif
+
 	/*
 	 * Initialize the ASN logic.
 	 */
@@ -952,6 +1024,10 @@
 		pmap_asn_info[i].pma_asn = 1;
 		pmap_asn_info[i].pma_asngen = 0;
 	}
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("pmap_asn_info initialized\n");
+#endif
 
 	/*
 	 * Initialize the locks.
@@ -977,6 +1053,11 @@
 	simple_lock_init(&pmap_kernel()->pm_slock);
 	TAILQ_INSERT_TAIL(&pmap_all_pmaps, pmap_kernel(), pm_list);
 
+#if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("pmap_kernel initialized\n");
+#endif
+
 #if defined(MULTIPROCESSOR)
 	/*
 	 * Initialize the TLB shootdown queues.
@@ -987,6 +1068,10 @@
 		TAILQ_INIT(&pmap_tlb_shootdown_q[i].pq_head);
 		simple_lock_init(&pmap_tlb_shootdown_q[i].pq_slock);
 	}
+# if defined(DEBUG)
+	if (pmapdebug & PDB_BOOTSTRAP)
+		printf("pmap_tlb_shootdown_job_pool initialized\n");
+# endif
 #endif
 
 	/*
@@ -1003,6 +1088,11 @@
 	 */
 	atomic_setbits_ulong(&pmap_kernel()->pm_cpus,
 	    (1UL << cpu_number()));
+
+#if defined(DEBUG)
+	if (pmapdebug & (PDB_FOLLOW|PDB_BOOTSTRAP))
+		printf("pmap_bootstrap() done!\n");
+#endif
 }
 
 #ifdef _PMAP_MAY_USE_PROM_CONSOLE
@@ -1140,8 +1230,8 @@
 {
 
 #ifdef DEBUG
-        if (pmapdebug & PDB_FOLLOW)
-                printf("pmap_init()\n");
+	if (pmapdebug & (PDB_FOLLOW|PDB_INIT))
+		printf("pmap_init()\n");
 #endif
 
 	/* initialize protection array */
@@ -1159,16 +1249,20 @@
 	 */
 	pmap_initialized = TRUE;
 
-#if 0
+#if defined(DEBUG)
+    if (pmapdebug & PDB_INIT) {
+	int bank;
+
 	for (bank = 0; bank < vm_nphysseg; bank++) {
-		printf("bank %d\n", bank);
-		printf("\tstart = 0x%x\n", ptoa(vm_physmem[bank].start));
-		printf("\tend = 0x%x\n", ptoa(vm_physmem[bank].end));
-		printf("\tavail_start = 0x%x\n",
-		    ptoa(vm_physmem[bank].avail_start));
-		printf("\tavail_end = 0x%x\n",
-		    ptoa(vm_physmem[bank].avail_end));
+		printf("vm_physmem bank %d\n", bank);
+		printf("\tstart = 0x%lx\n", (unsigned long) ptoa(vm_physmem[bank].start));
+		printf("\tend = 0x%lx\n", (unsigned long) ptoa(vm_physmem[bank].end));
+		printf("\tavail_start = 0x%lx\n",
+		       (unsigned long) ptoa(vm_physmem[bank].avail_start));
+		printf("\tavail_end = 0x%lx\n",
+		       (unsigned long) ptoa(vm_physmem[bank].avail_end));
 	}
+    }
 #endif
 }
 
@@ -1212,6 +1306,11 @@
 	TAILQ_INSERT_TAIL(&pmap_all_pmaps, pmap, pm_list);
 	simple_unlock(&pmap_all_pmaps_slock);
 
+#ifdef PMAP_NO_LAZY_LEV1MAP
+	i = pmap_lev1map_create(pmap, cpu_number());
+	KASSERT(i == 0);
+#endif
+
 	return (pmap);
 }
 
@@ -1245,14 +1344,16 @@
 	TAILQ_REMOVE(&pmap_all_pmaps, pmap, pm_list);
 	simple_unlock(&pmap_all_pmaps_slock);
 
-#ifdef DIAGNOSTIC
+#ifdef PMAP_NO_LAZY_LEV1MAP
+	pmap_lev1map_destroy(pmap, cpu_number());
+#endif
+
 	/*
 	 * Since the pmap is supposed to contain no valid
-	 * mappings at this point, this should never happen.
+	 * mappings at this point, we should always see
+	 * kernel_lev1map here.
 	 */
-	if (pmap->pm_lev1map != kernel_lev1map)
-		panic("pmap_destroy: pmap still contains valid mappings");
-#endif
+	KASSERT(pmap->pm_lev1map == kernel_lev1map);
 
 	pool_put(&pmap_pmap_pool, pmap);
 }
@@ -1315,7 +1416,7 @@
 
 #ifdef DEBUG
 	if (pmapdebug & (PDB_FOLLOW|PDB_REMOVE|PDB_PROTECT))
-		printf("pmap_remove(%p, %lx, %lx)\n", pmap, sva, eva);
+		printf("pmap_do_remove(%p, %lx, %lx)\n", pmap, sva, eva);
 #endif
 
 	/*
@@ -1693,6 +1794,9 @@
 			panic("pmap_enter: user pmap, invalid va 0x%lx", va);
 #endif
 
+#ifdef PMAP_NO_LAZY_LEV1MAP
+		KASSERT(pmap->pm_lev1map != kernel_lev1map);
+#else
 		/*
 		 * If we're still referencing the kernel kernel_lev1map,
 		 * create a new level 1 page table.  A reference will be
@@ -1723,6 +1827,7 @@
 				panic("pmap_enter: unable to create lev1map");
 			}
 		}
+#endif /* PMAP_NO_LAZY_LEV1MAP */
 
 		/*
 		 * Check to see if the level 1 PTE is valid, and
@@ -2131,12 +2236,27 @@
 {
 	pt_entry_t *l1pte, *l2pte, *l3pte;
 	paddr_t pa;
-	boolean_t rv = FALSE;
 
 #ifdef DEBUG
 	if (pmapdebug & PDB_FOLLOW)
 		printf("pmap_extract(%p, %lx) -> ", pmap, va);
 #endif
+
+	/*
+	 * Take a faster path for the kernel pmap.  Avoids locking,
+	 * handles K0SEG.
+	 */
+	if (pmap == pmap_kernel()) {
+		pa = vtophys(va);
+		if (pap != NULL)
+			*pap = pa;
+#ifdef DEBUG
+		if (pmapdebug & PDB_FOLLOW)
+			printf("0x%lx (kernel vtophys)\n", pa);
+#endif
+		return (pa != 0);	/* XXX */
+	}
+
 	PMAP_LOCK(pmap);
 
 	l1pte = pmap_l1pte(pmap, va);
@@ -2152,21 +2272,22 @@
 		goto out;
 
 	pa = pmap_pte_pa(l3pte) | (va & PGOFSET);
+	PMAP_UNLOCK(pmap);
 	if (pap != NULL)
 		*pap = pa;
-	rv = TRUE;
+#ifdef DEBUG
+	if (pmapdebug & PDB_FOLLOW)
+		printf("0x%lx\n", pa);
+#endif
+	return (TRUE);
 
  out:
 	PMAP_UNLOCK(pmap);
 #ifdef DEBUG
-	if (pmapdebug & PDB_FOLLOW) {
-		if (rv)
-			printf("0x%lx\n", pa);
-		else
-			printf("failed\n");
-	}
+	if (pmapdebug & PDB_FOLLOW)
+		printf("failed\n");
 #endif
-	return (rv);
+	return (FALSE);
 }
 
 /*
@@ -2246,21 +2367,21 @@
 		printf("pmap_activate(%p)\n", p);
 #endif
 
+#ifndef PMAP_NO_LAZY_LEV1MAP
 	PMAP_LOCK(pmap);
+#endif
 
-	/*
-	 * Mark the pmap in use by this processor.
-	 */
+	/* Mark the pmap in use by this processor. */
 	atomic_setbits_ulong(&pmap->pm_cpus, (1UL << cpu_id));
 
-	/*
-	 * Allocate an ASN.
-	 */
+	/* Allocate an ASN. */
 	pmap_asn_alloc(pmap, cpu_id);
 
 	PMAP_ACTIVATE(pmap, p, cpu_id);
 
+#ifndef PMAP_NO_LAZY_LEV1MAP
 	PMAP_UNLOCK(pmap);
+#endif
 }
 
 /*
@@ -2385,8 +2506,8 @@
 	if (pmapdebug & PDB_FOLLOW)
 		printf("pmap_copy_page(%lx, %lx)\n", src, dst);
 #endif
-        s = (caddr_t)ALPHA_PHYS_TO_K0SEG(src);
-        d = (caddr_t)ALPHA_PHYS_TO_K0SEG(dst);
+	s = (caddr_t)ALPHA_PHYS_TO_K0SEG(src);
+	d = (caddr_t)ALPHA_PHYS_TO_K0SEG(dst);
 	memcpy(d, s, PAGE_SIZE);
 }
 
@@ -2702,7 +2823,7 @@
 
 #ifdef DEBUG
 	if (pmapdebug & PDB_BITS)
-		printf("pmap_changebit(0x%p, 0x%lx, 0x%lx)\n",
+		printf("pmap_changebit(%p, 0x%lx, 0x%lx)\n",
 		    pg, set, mask);
 #endif
 
@@ -3043,15 +3164,15 @@
 	if (pg != NULL) {
 		pa = VM_PAGE_TO_PHYS(pg);
 
+#ifdef DEBUG
 		simple_lock(&pg->mdpage.pvh_slock);
-#ifdef DIAGNOSTIC
 		if (pg->wire_count != 0) {
 			printf("pmap_physpage_alloc: page 0x%lx has "
 			    "%d references\n", pa, pg->wire_count);
 			panic("pmap_physpage_alloc");
 		}
-#endif
 		simple_unlock(&pg->mdpage.pvh_slock);
+#endif
 		*pap = pa;
 		return (TRUE);
 	}
@@ -3071,12 +3192,12 @@
 	if ((pg = PHYS_TO_VM_PAGE(pa)) == NULL)
 		panic("pmap_physpage_free: bogus physical page address");
 
+#ifdef DEBUG
 	simple_lock(&pg->mdpage.pvh_slock);
-#ifdef DIAGNOSTIC
 	if (pg->wire_count != 0)
 		panic("pmap_physpage_free: page still has references");
-#endif
 	simple_unlock(&pg->mdpage.pvh_slock);
+#endif
 
 	uvm_pagefree(pg);
 }
@@ -3269,12 +3390,18 @@
 		panic("pmap_lev1map_create: pmap uses non-reserved ASN");
 #endif
 
+#ifdef PMAP_NO_LAZY_LEV1MAP
+	/* Being called from pmap_create() in this case; we can sleep. */
+	l1pt = pool_cache_get(&pmap_l1pt_cache, PR_WAITOK);
+#else
 	l1pt = pool_cache_get(&pmap_l1pt_cache, PR_NOWAIT);
+#endif
 	if (l1pt == NULL)
 		return (ENOMEM);
 
 	pmap->pm_lev1map = l1pt;
 
+#ifndef PMAP_NO_LAZY_LEV1MAP	/* guaranteed not to be active */
 	/*
 	 * The page table base has changed; if the pmap was active,
 	 * reactivate it.
@@ -3284,6 +3411,7 @@
 		PMAP_ACTIVATE(pmap, curproc, cpu_id);
 	}
 	PMAP_LEV1MAP_SHOOTDOWN(pmap, cpu_id);
+#endif /* ! PMAP_NO_LAZY_LEV1MAP */
 	return (0);
 }
 
@@ -3309,6 +3437,7 @@
 	 */
 	pmap->pm_lev1map = kernel_lev1map;
 
+#ifndef PMAP_NO_LAZY_LEV1MAP	/* pmap is being destroyed */
 	/*
 	 * The page table base has changed; if the pmap was active,
 	 * reactivate it.  Note that allocation of a new ASN is
@@ -3331,6 +3460,7 @@
 	if (PMAP_ISACTIVE(pmap, cpu_id))
 		PMAP_ACTIVATE(pmap, curproc, cpu_id);
 	PMAP_LEV1MAP_SHOOTDOWN(pmap, cpu_id);
+#endif /* ! PMAP_NO_LAZY_LEV1MAP */
 
 	/*
 	 * Free the old level 1 page table page.
@@ -3569,11 +3699,13 @@
 #endif
 
 	if (pmap_physpage_delref(l1pte) == 0) {
+#ifndef PMAP_NO_LAZY_LEV1MAP
 		/*
 		 * No more level 2 tables left, go back to the global
 		 * kernel_lev1map.
 		 */
 		pmap_lev1map_destroy(pmap, cpu_id);
+#endif /* ! PMAP_NO_LAZY_LEV1MAP */
 	}
 }
 
@@ -3605,6 +3737,13 @@
 	 * kernel mappings exist in that map, and all kernel mappings
 	 * have PG_ASM set.  If the pmap eventually gets its own
 	 * lev1map, an ASN will be allocated at that time.
+	 *
+	 * #ifdef PMAP_NO_LAZY_LEV1MAP
+	 * Only the kernel pmap will reference kernel_lev1map.  Do the
+	 * same old fixups, but note that we no longer need the pmap
+	 * to be locked if we're in this mode, since pm_lev1map will
+	 * never change.
+	 * #endif
 	 */
 	if (pmap->pm_lev1map == kernel_lev1map) {
 #ifdef DEBUG
@@ -3625,11 +3764,7 @@
 		 */
 		pma->pma_asn = PMAP_ASN_RESERVED;
 #else
-#ifdef DIAGNOSTIC
-		if (pma->pma_asn != PMAP_ASN_RESERVED)
-			panic("pmap_asn_alloc: kernel_lev1map without "
-			    "PMAP_ASN_RESERVED");
-#endif
+		KASSERT(pma->pma_asn == PMAP_ASN_RESERVED);
 #endif /* MULTIPROCESSOR */
 		return;
 	}
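
(As an aside, the deadlock that the PMAP_NO_LAZY_LEV1MAP comment in
the patch above alludes to is, as far as I understand it, roughly the
following -- a schematic sketch, not the actual cpu_switch() or
pmap_activate() code:)

	/*
	 * With lazy lev1map allocation pmap_activate() must lock the
	 * pmap, because pm_lev1map can still change underneath it.
	 * That permits a classic two-lock cycle, e.g.:
	 *
	 *	CPU A: cpu_switch()		CPU B: (pmap code)
	 *	  holds sched_lock		  holds pm_slock
	 *	  pmap_activate()		  ends up needing
	 *	    spins on pm_slock		  sched_lock
	 *
	 * Allocating the lev1map up front in pmap_create() makes
	 * pm_lev1map constant for the pmap's lifetime, so
	 * pmap_activate() no longer needs pm_slock and the cycle is
	 * broken.
	 */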




-- 
						Greg A. Woods

H:+1 416 218-0098  W:+1 416 489-5852 x122  VE3TCP  RoboHack <woods@robohack.ca>
Planix, Inc. <woods@planix.com>          Secrets of the Weird <woods@weird.com>