Subject: More on reaper removal
To: None <tech-kern@netbsd.org>
From: Jaromir Dolecek <jdolecek@netbsd.org>
List: tech-kern
Date: 12/05/2003 23:01:06
Hi,

here is a comparison of forkbench teardown performance between the
current reaper and code arranged to self-destruct without an additional
context switch. The test was done on a 1.6ZF dual-processor i386 machine
which was otherwise completely idle.

Reaper (the current way):

http://www.netbsd.cz/reaper/reaper.gif

Self-destruct (new way):

http://www.netbsd.cz/reaper/selfdestruct.gif

The change is a clear win (as expected).

Here is an overview of the changes, with some updates since last
time:

1. the exiting process calls pmap_deactivate(),
   then uvm_proc_exit() and (new) cpu_lwp_free(); cpu_lwp_free() is the last
   action which can block
2. the exiting process is immediately marked SZOMB and is available for the
   parent to collect; the detached last lwp continues on to cpu_exit()
3. uvmexp.swtch++ is done in MI code before calling cpu_exit()
4. the last lwp exits via cpu_exit(); cpu_exit() has a changed signature, the
   'proc' parameter is gone

cpu_lwp_free() is a descendant of cpu_wait(), in that it frees any
MD resources which can be freed while still in that lwp's
context. It now also frees other MD resources, such as FPU state; basically,
it arranges for the cpu_exit() path to no longer need the former process
context. This makes it possible to release KERNEL_PROC_LOCK before
dispatching to MD code.

Unfortunately this requires a few more MD changes. I will be able
to adjust most of the MD code written in C, but I'll need some assistance
with ports which have these parts written in assembler. I'll prepare a
pre-final patch for further comments later.

Attached is a patch with the MI changes and the MD part for i386.

BTW, why do some ports call splhigh() et al. in cpu_exit() before
calling the final assembler-written routine? This is the case e.g.
on Alpha.

Jaromir

Index: arch/i386/i386/locore.S
===================================================================
RCS file: /cvsroot/src/sys/arch/i386/i386/locore.S,v
retrieving revision 1.20
diff -u -p -r1.20 locore.S
--- arch/i386/i386/locore.S	4 Nov 2003 10:33:15 -0000	1.20
+++ arch/i386/i386/locore.S	5 Dec 2003 21:27:05 -0000
@@ -2028,7 +2028,7 @@ ENTRY(cpu_switchto)
 	jmp	switch_resume
 
 /*
- * void switch_exit(struct lwp *l, void (*exit)(struct lwp *));
+ * void cpu_exit(struct lwp *l)
  * Switch to the appropriate idle context (lwp0's if uniprocessor; the cpu's 
  * if multiprocessor) and deallocate the address space and kernel stack for p. 
  * Then jump into cpu_switch(), as if we were in the idle proc all along.
@@ -2038,10 +2038,9 @@ ENTRY(cpu_switchto)
 #endif
 	.globl  _C_LABEL(uvmspace_free),_C_LABEL(kernel_map)
 	.globl	_C_LABEL(uvm_km_free),_C_LABEL(tss_free)
-/* LINTSTUB: Func: void switch_exit(struct lwp *l, void (*exit)(struct lwp *)) */
-ENTRY(switch_exit)
+/* LINTSTUB: Func: void cpu_exit(struct lwp *l) */
+ENTRY(cpu_exit)
 	movl	4(%esp),%edi		# old process
-	movl	8(%esp),%eax		# exit func
 #ifndef MULTIPROCESSOR
 	movl	$_C_LABEL(lwp0),%ebx
 	movl	L_ADDR(%ebx),%esi
@@ -2060,9 +2059,6 @@ ENTRY(switch_exit)
 	movl	PCB_ESP(%esi),%esp
 	movl	PCB_EBP(%esi),%ebp
 
-	/* Save exit func. */
-	pushl	%eax
-
 	/* Load TSS info. */
 #ifdef MULTIPROCESSOR
 	movl	CPUVAR(GDT),%eax
@@ -2092,11 +2088,10 @@ ENTRY(switch_exit)
 	sti
 
 	/*
-	 * Schedule the dead process's vmspace and stack to be freed.
+	 * Schedule the dead LWP's stack to be freed.
 	 */
-	movl	0(%esp),%eax		/* %eax = exit func */
-	movl	%edi,0(%esp)		/* {lwp_}exit2(l) */
-	call	*%eax
+	pushl	%edi
+	call	_C_LABEL(lwp_exit2)
 	addl	$4,%esp
 
 	/* Jump into cpu_switch() with the right state. */
Index: arch/i386/i386/vm_machdep.c
===================================================================
RCS file: /cvsroot/src/sys/arch/i386/i386/vm_machdep.c,v
retrieving revision 1.112
diff -u -p -r1.112 vm_machdep.c
--- arch/i386/i386/vm_machdep.c	27 Oct 2003 14:11:47 -0000	1.112
+++ arch/i386/i386/vm_machdep.c	5 Dec 2003 21:27:05 -0000
@@ -249,18 +249,12 @@ cpu_swapout(l)
 }
 
 /*
- * cpu_exit is called as the last action during exit.
- *
- * We clean up a little and then call switch_exit() with the old proc as an
- * argument.  switch_exit() first switches to proc0's context, and finally
- * jumps into switch() to wait for another process to wake up.
- * 
- * If proc==0, we're an exiting lwp, and call switch_lwp_exit() instead of 
- * switch_exit(), and only do LWP-appropriate cleanup (e.g. don't deactivate
- * the pmap).
+ * cpu_lwp_free is called from exit() to let machine-dependent
+ * code free machine-dependent resources that should be cleaned
+ * while we can still block and have a process associated with us.
  */
 void
-cpu_exit(struct lwp *l, int proc)
+cpu_lwp_free(struct lwp *l, int proc)
 {
 
 #if NNPX > 0
@@ -274,35 +268,11 @@ cpu_exit(struct lwp *l, int proc)
 		mtrr_clean(l->l_proc);
 #endif
 
-	/*
-	 * No need to do user LDT cleanup here; it's handled in
-	 * pmap_destroy().
-	 */
-
-	/*
-	 * Deactivate the address space before the vmspace is
-	 * freed.  Note that we will continue to run on this
-	 * vmspace's context until the switch to the idle process
-	 * in switch_exit().
-	 */
-	pmap_deactivate(l);
-
-	uvmexp.swtch++;
-	switch_exit(l, proc ? exit2 : lwp_exit2);
-}
-
-/*
- * cpu_wait is called from reaper() to let machine-dependent
- * code free machine-dependent resources that couldn't be freed
- * in cpu_exit().
- */
-void
-cpu_wait(l)
-	struct lwp *l;
-{
-
 	/* Nuke the TSS. */
 	tss_free(l->l_md.md_tss_sel);
+#ifdef DEBUG
+	l->l_md.md_tss_sel = 0xfeedbeed;
+#endif
 }
 
 /*
Index: kern/init_main.c
===================================================================
RCS file: /cvsroot/src/sys/kern/init_main.c,v
retrieving revision 1.227
diff -u -p -r1.227 init_main.c
--- kern/init_main.c	14 Nov 2003 07:13:25 -0000	1.227
+++ kern/init_main.c	5 Dec 2003 21:27:07 -0000
@@ -566,10 +566,6 @@ main(void)
 	if (kthread_create1(uvm_pageout, NULL, NULL, "pagedaemon"))
 		panic("fork pagedaemon");
 
-	/* Create the process reaper kernel thread. */
-	if (kthread_create1(reaper, NULL, NULL, "reaper"))
-		panic("fork reaper");
-
 	/* Create the filesystem syncer kernel thread. */
 	if (kthread_create1(sched_sync, NULL, NULL, "ioflush"))
 		panic("fork syncer");
Index: kern/kern_exit.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_exit.c,v
retrieving revision 1.129
diff -u -p -r1.129 kern_exit.c
--- kern/kern_exit.c	17 Nov 2003 22:52:09 -0000	1.129
+++ kern/kern_exit.c	5 Dec 2003 21:27:08 -0000
@@ -131,7 +131,7 @@ static void lwp_exit_hook(struct lwp *, 
 static void exit_psignal(struct proc *, struct proc *);
 
 /*
- * Fill in the appropriate signal information, and kill the parent.
+ * Fill in the appropriate signal information, and signal the parent.
  */
 static void
 exit_psignal(struct proc *p, struct proc *pp)
@@ -233,9 +233,15 @@ exit1(struct lwp *l, int rv)
 	p->p_sigctx.ps_sigcheck = 0;
 	timers_free(p, TIMERS_ALL);
 
-	if (sa || (p->p_nlwps > 1))
+	if (sa || (p->p_nlwps > 1)) {
 		exit_lwps(l);
 
+		/*
+		 * Collect thread u-areas.
+		 */
+		uvm_uarea_drain(FALSE);
+	}
+
 #if defined(__HAVE_RAS)
 	ras_purgeall(p);
 #endif
@@ -327,9 +333,7 @@ exit1(struct lwp *l, int rv)
 	 * Give orphaned children to init(8).
 	 */
 	q = LIST_FIRST(&p->p_children);
-	if (q)		/* only need this if any child is S_ZOMB */
-		wakeup(initproc);
-	for (; q != 0; q = nq) {
+	for (; q != NULL; q = nq) {
 		nq = LIST_NEXT(q, p_sibling);
 
 		/*
@@ -351,10 +355,52 @@ exit1(struct lwp *l, int rv)
 		} else {
 			proc_reparent(q, initproc);
 		}
+
+		/*
+		 * If the child is already a zombie, notify the new
+		 * parent so that it knows it should collect the child.
+		 */
+		if (q->p_stat == SZOMB) {
+			if ((q->p_flag & P_FSTRACE) == 0 && q->p_exitsig != 0)
+				exit_psignal(q, q->p_pptr);
+			wakeup(q->p_pptr);
+		}
 	}
 	proclist_unlock_write(s);
 
 	/*
+	 * Deactivate the address space before the vmspace is
+	 * freed.  Note that we will continue to run on this
+	 * vmspace's context until the switch to the idle process.
+	 */
+	pmap_deactivate(l);
+
+	/*
+	 * Free the VM resources we're still holding on to.
+	 * We must do this from a valid thread because doing
+	 * so may block. This frees vmspace, which we don't
+	 * need anymore. The only remaining lwp is the one
+	 * we run at this moment, nothing runs in userland
+	 * anymore.
+	 */
+	uvm_proc_exit(p);
+
+	/*
+	 * Give machine-dependent code a chance to free any
+	 * MD LWP resources while we can still block. This must be done
+	 * before uvm_lwp_exit(), in case these resources are in the 
+	 * PCB.
+	 * THIS IS THE LAST BLOCKING OPERATION.
+	 */
+#ifndef __NO_CPU_LWP_FREE
+	cpu_lwp_free(l, 1);
+#endif
+
+	/*
+	 * NOTE: WE ARE NO LONGER ALLOWED TO SLEEP!
+	 */
+
+	/*
 	 * Save exit status and final rusage info, adding in child rusage
 	 * info and self times.
 	 * In order to pick up the time for the current execution, we must
@@ -366,24 +412,21 @@ exit1(struct lwp *l, int rv)
 	ruadd(p->p_ru, &p->p_stats->p_cru);
 
 	/*
-	 * NOTE: WE ARE NO LONGER ALLOWED TO SLEEP!
-	 */
-
-	/*
-	 * Move proc from allproc to zombproc, but do not yet
-	 * wake up the reaper.  We will put the proc on the
-	 * deadproc list later (using the p_dead member), and
-	 * wake up the reaper when we do.
-	 * Changing the state to SDEAD stops it being found by pfind().
+	 * Move proc from allproc to zombproc; it's now ready to be
+	 * collected by the parent. Remaining lwp resources will be
+	 * freed in lwp_exit2() once we switch to the idle
+	 * context.
+	 * Changing the state to SZOMB stops it being found by pfind().
 	 */
 	s = proclist_lock_write();
-	p->p_stat = SDEAD;
-	p->p_nrlwps--;
 	l->l_stat = LSDEAD;
 	LIST_REMOVE(p, p_list);
 	LIST_INSERT_HEAD(&zombproc, p, p_list);
 	LIST_REMOVE(l, l_list);
-	l->l_flag |= L_DETACHED;
+	l->l_flag |= L_DETACHED|L_PROCEXIT;
+	p->p_stat = SZOMB;
+	p->p_nrlwps--;
+	p->p_nlwps--;
 	proclist_unlock_write(s);
 
 	/*
@@ -417,6 +460,10 @@ exit1(struct lwp *l, int rv)
 		 */
 		if (LIST_FIRST(&pp->p_children) == NULL)
 			wakeup(pp);
+	} else {
+		if ((p->p_flag & P_FSTRACE) == 0 && p->p_exitsig != 0)
+			exit_psignal(p, p->p_pptr);
+		wakeup(p->p_pptr);
 	}
 
 	/*
@@ -442,18 +489,24 @@ exit1(struct lwp *l, int rv)
 	/* This process no longer needs to hold the kernel lock. */
 	KERNEL_PROC_UNLOCK(l);
 
+#ifdef DEBUG
+	/* Nothing should use the process link anymore */
+	if (l->l_flag & L_PROCEXIT)
+		l->l_proc = NULL;
+#endif
+
 	/*
 	 * Finally, call machine-dependent code to switch to a new
 	 * context (possibly the idle context).  Once we are no longer
-	 * using the dead process's vmspace and stack, exit2() will be
-	 * called to schedule those resources to be released by the
-	 * reaper thread.
+	 * using the dead lwp's stack, lwp_exit2() will be called
+	 * to arrange for the resources to be released.
 	 *
 	 * Note that cpu_exit() will end with a call equivalent to
 	 * cpu_switch(), finishing our execution (pun intended).
 	 */
 
-	cpu_exit(l, 1);
+	uvmexp.swtch++;
+	cpu_exit(l);
 }
 
 void
@@ -533,123 +586,6 @@ lwp_exit_hook(struct lwp *l, void *arg)
 	lwp_exit(l);
 }
 
-/*
- * We are called from cpu_exit() once it is safe to schedule the
- * dead process's resources to be freed (i.e., once we've switched to
- * the idle PCB for the current CPU).
- *
- * NOTE: One must be careful with locking in this routine.  It's
- * called from a critical section in machine-dependent code, so
- * we should refrain from changing any interrupt state.
- *
- * We lock the deadproc list (a spin lock), place the proc on that
- * list (using the p_dead member), and wake up the reaper.
- */
-void
-exit2(struct lwp *l)
-{
-	struct proc *p = l->l_proc;
-
-	simple_lock(&deadproc_slock);
-	SLIST_INSERT_HEAD(&deadprocs, p, p_dead);
-	simple_unlock(&deadproc_slock);
-
-	/* lwp_exit2() will wake up deadproc for us. */
-	lwp_exit2(l);
-}
-
-/*
- * Process reaper.  This is run by a kernel thread to free the resources
- * of a dead process.  Once the resources are free, the process becomes
- * a zombie, and the parent is allowed to read the undead's status.
- */
-void
-reaper(void *arg)
-{
-	struct proc *p, *parent;
-	struct lwp *l;
-
-	KERNEL_PROC_UNLOCK(curlwp);
-
-	for (;;) {
-		simple_lock(&deadproc_slock);
-		p = SLIST_FIRST(&deadprocs);
-		l = LIST_FIRST(&deadlwp);
-		if (p == NULL && l == NULL) {
-			/* No work for us; go to sleep until someone exits. */
-			(void) ltsleep(&deadprocs, PVM|PNORELOCK,
-			    "reaper", 0, &deadproc_slock);
-			continue;
-		}
-
-		if (l != NULL ) {
-			p = l->l_proc;
-
-			/* Remove lwp from the deadlwp list. */
-			LIST_REMOVE(l, l_list);
-			simple_unlock(&deadproc_slock);
-			KERNEL_PROC_LOCK(curlwp);
-			
-			/*
-			 * Give machine-dependent code a chance to free any
-			 * resources it couldn't free while still running on
-			 * that process's context.  This must be done before
-			 * uvm_lwp_exit(), in case these resources are in the 
-			 * PCB.
-			 */
-			cpu_wait(l);
-
-			/*
-			 * Free the VM resources we're still holding on to.
-			 */
-			uvm_lwp_exit(l);
-
-			l->l_stat = LSZOMB;
-			if (l->l_flag & L_DETACHED) {
-				/* Nobody waits for detached LWPs. */
-				LIST_REMOVE(l, l_sibling);
-				p->p_nlwps--;
-				pool_put(&lwp_pool, l);
-			} else {
-				p->p_nzlwps++;
-				wakeup(&p->p_nlwps);
-			}
-			/* XXXNJW where should this be with respect to 
-			 * the wakeup() above? */
-			KERNEL_PROC_UNLOCK(curlwp);
-		} else {
-			/* Remove proc from the deadproc list. */
-			SLIST_REMOVE_HEAD(&deadprocs, p_dead);
-			simple_unlock(&deadproc_slock);
-			KERNEL_PROC_LOCK(curlwp);
-
-			/*
-			 * Free the VM resources we're still holding on to.
-			 * We must do this from a valid thread because doing
-			 * so may block.
-			 */
-			uvm_proc_exit(p);
-			
-			/* Process is now a true zombie. */
-			p->p_stat = SZOMB;
-			parent = p->p_pptr;
-			parent->p_nstopchild++;
-			if (LIST_FIRST(&parent->p_children) != p) {
-				/* Put child where it can be found quickly */
-				LIST_REMOVE(p, p_sibling);
-				LIST_INSERT_HEAD(&parent->p_children,
-						p, p_sibling);
-			}
-			
-			/* Wake up the parent so it can get exit status. */
-			if ((p->p_flag & P_FSTRACE) == 0 && p->p_exitsig != 0)
-				exit_psignal(p, p->p_pptr);
-			KERNEL_PROC_UNLOCK(curlwp);
-			wakeup(p->p_pptr);
-		}
-	}
-}
-
 int
 sys_wait4(struct lwp *l, void *v, register_t *retval)
 {
@@ -677,6 +613,11 @@ sys_wait4(struct lwp *l, void *v, regist
 		*retval = 0;
 		return 0;
 	}
+
+	/*
+	 * Collect child u-areas.
+	 */
+	uvm_uarea_drain(FALSE);
 
 	retval[0] = child->p_pid;
 
Index: kern/kern_lwp.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_lwp.c,v
retrieving revision 1.15
diff -u -p -r1.15 kern_lwp.c
--- kern/kern_lwp.c	4 Nov 2003 10:33:15 -0000	1.15
+++ kern/kern_lwp.c	5 Dec 2003 21:27:08 -0000
@@ -57,8 +57,6 @@ __KERNEL_RCSID(0, "$NetBSD: kern_lwp.c,v
 #include <uvm/uvm_extern.h>
 
 struct lwplist alllwp;
-struct lwplist deadlwp;
-struct lwplist zomblwp;
 
 #define LWP_DEBUG
 
@@ -369,9 +367,9 @@ lwp_wait1(struct lwp *l, lwpid_t lid, lw
 
 	struct proc *p = l->l_proc;
 	struct lwp *l2, *l3;
-	int nfound, error, s, wpri;
-	static char waitstr1[] = "lwpwait";
-	static char waitstr2[] = "lwpwait2";
+	int nfound, error, wpri;
+	static const char waitstr1[] = "lwpwait";
+	static const char waitstr2[] = "lwpwait2";
 
 	DPRINTF(("lwp_wait1: %d.%d waiting for %d.\n",
 	    p->p_pid, l->l_lid, lid));
@@ -393,10 +391,6 @@ lwp_wait1(struct lwp *l, lwpid_t lid, lw
 			if (departed)
 				*departed = l2->l_lid;
 
-			s = proclist_lock_write();
-			LIST_REMOVE(l2, l_zlist); /* off zomblwp */
-			proclist_unlock_write(s);
-
 			simple_lock(&p->p_lock);
 			LIST_REMOVE(l2, l_sibling);
 			p->p_nlwps--;
@@ -544,17 +538,18 @@ lwp_exit(struct lwp *l)
 		DPRINTF(("lwp_exit: %d.%d calling exit1()\n",
 		    p->p_pid, l->l_lid));
 		exit1(l, 0);
+		/* NOTREACHED */
 	}
 
 	s = proclist_lock_write();
 	LIST_REMOVE(l, l_list);
-	if ((l->l_flag & L_DETACHED) == 0) {
-		DPRINTF(("lwp_exit: %d.%d going on zombie list\n", p->p_pid,
-		    l->l_lid));
-		LIST_INSERT_HEAD(&zomblwp, l, l_zlist);
-	}
 	proclist_unlock_write(s);
 
+	/* Free MD LWP resources */
+#ifndef __NO_CPU_LWP_FREE
+	cpu_lwp_free(l, 0);
+#endif
+
 	simple_lock(&p->p_lock);
 	p->p_nrlwps--;
 	simple_unlock(&p->p_lock);
@@ -565,20 +560,45 @@ lwp_exit(struct lwp *l)
 	KERNEL_PROC_UNLOCK(l);
 
 	/* cpu_exit() will not return */
-	cpu_exit(l, 0);
+	cpu_exit(l);
 
 }
 
-
+/*
+ * We are called from cpu_exit() once it is safe to schedule the
+ * dead process's resources to be freed (i.e., once we've switched to
+ * the idle PCB for the current CPU).
+ *
+ * NOTE: One must be careful with locking in this routine.  It's
+ * called from a critical section in machine-dependent code, so
+ * we should refrain from changing any interrupt state.
+ */
 void
 lwp_exit2(struct lwp *l)
 {
+	struct proc *p;
 
-	simple_lock(&deadproc_slock);
-	LIST_INSERT_HEAD(&deadlwp, l, l_list);
-	simple_unlock(&deadproc_slock);
+	/*
+	 * Free the VM resources we're still holding on to.
+	 */
+	uvm_lwp_exit(l);
+
+	l->l_stat = LSZOMB;
+	if (l->l_flag & L_DETACHED) {
+		/* Nobody waits for detached LWPs. */
+		LIST_REMOVE(l, l_sibling);
+
+		if ((l->l_flag & L_PROCEXIT) == 0) {
+			p = l->l_proc;
+			p->p_nlwps--;
+		}
 
-	wakeup(&deadprocs);
+		pool_put(&lwp_pool, l);
+	} else {
+		p = l->l_proc;
+		p->p_nzlwps++;
+		wakeup(&p->p_nlwps);
+	}
 }
 
 /*
Index: kern/kern_proc.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_proc.c,v
retrieving revision 1.69
diff -u -p -r1.69 kern_proc.c
--- kern/kern_proc.c	17 Nov 2003 22:52:09 -0000	1.69
+++ kern/kern_proc.c	5 Dec 2003 21:27:08 -0000
@@ -134,17 +134,6 @@ struct proclist zombproc;	/* resources h
 struct lock proclist_lock;
 
 /*
- * List of processes that has called exit, but need to be reaped.
- * Locking of this proclist is special; it's accessed in a
- * critical section of process exit, and thus locking it can't
- * modify interrupt state.
- * We use a simple spin lock for this proclist.
- * Processes on this proclist are also on zombproc.
- */
-struct simplelock deadproc_slock;
-struct deadprocs deadprocs = SLIST_HEAD_INITIALIZER(deadprocs);
-
-/*
  * pid to proc lookup is done by indexing the pid_table array. 
  * Since pid numbers are only allocated when an empty slot
  * has been found, there is no need to search any lists ever.
@@ -229,8 +218,6 @@ procinit(void)
 
 	spinlockinit(&proclist_lock, "proclk", 0);
 
-	simple_lock_init(&deadproc_slock);
-
 	pid_table = malloc(INITIAL_PID_TABLE_SIZE * sizeof *pid_table,
 			    M_PROC, M_WAITOK);
 	/* Set free list running through table...
@@ -249,8 +236,6 @@ procinit(void)
 #undef LINK_EMPTY
 
 	LIST_INIT(&alllwp);
-	LIST_INIT(&deadlwp);
-	LIST_INIT(&zomblwp);
 
 	uihashtbl =
 	    hashinit(maxproc / 16, HASH_LIST, M_PROC, M_WAITOK, &uihash);
Index: sys/lwp.h
===================================================================
RCS file: /cvsroot/src/sys/sys/lwp.h,v
retrieving revision 1.14
diff -u -p -r1.14 lwp.h
--- sys/lwp.h	17 Nov 2003 22:52:09 -0000	1.14
+++ sys/lwp.h	5 Dec 2003 21:27:08 -0000
@@ -100,8 +100,6 @@ struct	lwp {
 LIST_HEAD(lwplist, lwp);		/* a list of LWPs */
 
 extern struct lwplist alllwp;		/* List of all LWPs. */
-extern struct lwplist deadlwp;		/* */
-extern struct lwplist zomblwp;
 
 extern struct pool lwp_pool;		/* memory pool for LWPs */
 extern struct pool lwp_uc_pool;		/* memory pool for LWP startup args */
@@ -113,6 +111,7 @@ extern struct lwp lwp0;			/* LWP for pro
 #define	L_SELECT	0x00040	/* Selecting; wakeup/waiting danger. */
 #define	L_SINTR		0x00080	/* Sleep is interruptible. */
 #define	L_TIMEOUT	0x00400	/* Timing out during sleep. */
+#define	L_PROCEXIT	0x00800 /* In process exit, l_proc no longer valid */
 #define	L_BIGLOCK	0x80000	/* LWP needs kernel "big lock" to run */
 #define	L_SA		0x100000 /* Scheduler activations LWP */
 #define	L_SA_UPCALL	0x200000 /* SA upcall is pending */
Index: sys/proc.h
===================================================================
RCS file: /cvsroot/src/sys/sys/proc.h,v
retrieving revision 1.181
diff -u -p -r1.181 proc.h
--- sys/proc.h	5 Dec 2003 21:12:44 -0000	1.181
+++ sys/proc.h	5 Dec 2003 21:27:08 -0000
@@ -174,7 +174,7 @@ struct proc {
 	char		p_pad1[3];
 
 	pid_t		p_pid;		/* Process identifier. */
-	SLIST_ENTRY(proc) p_dead;	/* Processes waiting for reaper */
+	SLIST_ENTRY(proc) p_nu3;	/* unused: was link to deadproc list */
 	LIST_ENTRY(proc) p_pglist;	/* l: List of processes in pgrp. */
 	struct proc 	*p_pptr;	/* l: Pointer to parent process. */
 	LIST_ENTRY(proc) p_sibling;	/* l: List of sibling processes. */
@@ -446,9 +446,7 @@ int	ltsleep(const void *, int, const cha
 	    __volatile struct simplelock *);
 void	wakeup(const void *);
 void	wakeup_one(const void *);
-void	reaper(void *);
 void	exit1(struct lwp *, int);
-void	exit2(struct lwp *);
 int	find_stopped_child(struct proc *, pid_t, int, struct proc **);
 struct proc *proc_alloc(void);
 void	proc0_insert(struct proc *, struct lwp *, struct pgrp *, struct session *);
@@ -463,15 +461,10 @@ int	pgid_in_session(struct proc *, pid_t
 #ifndef cpu_idle
 void	cpu_idle(void);
 #endif
-void	cpu_exit(struct lwp *, int);
+void	cpu_exit(struct lwp *);
 void	cpu_lwp_fork(struct lwp *, struct lwp *, void *, size_t,
 	    void (*)(void *), void *);
-
-		/*
-		 * XXX: use __P() to allow ports to have as a #define.
-		 * XXX: we need a better way to solve this.
-		 */
-void	cpu_wait __P((struct lwp *));
+void	cpu_lwp_free(struct lwp *, int);
 
 void	child_return(void *);
 
-- 
Jaromir Dolecek <jdolecek@NetBSD.org>            http://www.NetBSD.cz/
-=- We should be mindful of the potential goal, but as the Buddhist -=-
-=- masters say, ``You may notice during meditation that you        -=-
-=- sometimes levitate or glow.   Do not let this distract you.''   -=-