NetBSD-Bugs archive


Re: kern/45718 (processes sometimes get stuck and spin in vm_map)



The following reply was made to PR kern/45718; it has been noted by GNATS.

From: Greg Oster <oster%cs.usask.ca@localhost>
To: matthew green <mrg%eterna.com.au@localhost>
Cc: Taylor R Campbell <campbell+netbsd%mumble.net@localhost>,
 kern-bug-people%netbsd.org@localhost, netbsd-bugs%netbsd.org@localhost,
 gnats-admin%netbsd.org@localhost, rmind%NetBSD.org@localhost,
 gnats-bugs%NetBSD.org@localhost, oster%netbsd.org@localhost,
 martin%netbsd.org@localhost
Subject: Re: kern/45718 (processes sometimes get stuck and spin in vm_map)
Date: Sat, 14 Apr 2012 22:55:25 -0600

 On Sun, 15 Apr 2012 05:14:06 +1000
 matthew green <mrg%eterna.com.au@localhost> wrote:
 
 > 
 > > I caught some processes wedged in vm_map, although they don't seem
 > > to be spinning -- just wedged.  ps(1) reports these processes as
 > > zombies; top(1) does not report their taking any CPU time.  One is
 > > a subprocess of newsyslog; the other is a subprocess of git-pull.
 > > 
 > > crash> t/a ca73daa0
 > > trace: pid 25970 lid 1 at 0xdc10781c
 > > sleepq_block(64,0,c0c5fd9d,c0ce63d0,0,0,0,c08e40d3,0,0) at
 > > sleepq_block+0xda
 > > cv_timedwait(c0d291d0,c0d291cc,64,dc1078e8,c0d28fe0,ffffffff,ffffffff,0,801727,c66ef880)
 > > at cv_timedwait+0x126
 > > uvm_map_prepare(c0d291c0,c0000000,40000,c0d28fe0,ffffffff,ffffffff,0,801727,dc107928,dc107b54)
 > > at uvm_map_prepare+0x167
 > > uvm_map(c0d291c0,dc1079b0,40000,c0d28fe0,ffffffff,ffffffff,0,801727,800002,0)
 > > at uvm_map+0x78
 > > uvm_km_alloc(c0d291c0,40000,0,800002,c0d15ea0,1,dc107a4c,c07e117a,c0d15ea0,1)
 > > at uvm_km_alloc+0xe6
 > > exec_pool_alloc(c0d15ea0,1,dc107a0c,c09548fd,0,0,0,0,0,0) at
 > > exec_pool_alloc+0x2b
 > > pool_grow(c0d15f14,1,c385ac12,0,0,c054,ca5ff000,c385ac09,9,c0d15f18)
 > > at pool_grow+0x2a
 > > pool_get(c0d15ea0,1,ce4e4a80,c098bb91,0,c31eec40,dc107adc,c056764b,c3137d00,c3137bc0)
 > > at pool_get+0x79
 > > execve_loadvm(8063f44,c055c480,dc107b3c,c3137d00,ca73daa0,0,8063f1c,c3ec4400,c4519400,c380e000)
 > > at execve_loadvm+0x1da
 > > execve1(ca73daa0,8063f1c,8063f3c,8063f44,c055c480,6300,dc107d1c,0,c06ba833,cd722744)
 > > at execve1+0x32
 > > sys_execve(ca73daa0,dc107cf4,dc107d1c,c08096f0,0,cdb17e3c,c0c8b800,dc107d30,c06babb9,cd722730)
 > > at sys_execve+0x30
 > > syscall(dc107d48,b3,ab,bfbf001f,806001f,8063f1c,8063f3c,bfbfec48,8063f1c,7d7b7cff)
 > > at syscall+0x95
 > 
 > this looks like the execargs pool leak that greg oster has a patch
 > for. (reproduced below.)
 > 
 > 
 > .mrg.
 > 
 > Index: kern_exec.c
 > ===================================================================
 > RCS file: /cvsroot/src/sys/kern/kern_exec.c,v
 > retrieving revision 1.349
 > diff -u -p -r1.349 kern_exec.c
 > --- kern_exec.c      9 Apr 2012 19:42:06 -0000       1.349
 > +++ kern_exec.c      13 Apr 2012 20:28:14 -0000
 > @@ -1991,6 +1991,8 @@ spawn_return(void *arg)
 >              rw_exit(&exec_lock);
 >      }
 >  
 > +    execve_free_data(&spawn_data->sed_exec);
 > +
 >      /* release our refcount on the data */
 >      spawn_exec_data_release(spawn_data);
 >  
 
 Actually, info from Martin leads me to believe that the
 execve_free_data() call needs to go inside the
 
  if (have_reflock) {
 	...
  }
 
 block just above here... I've run tests with that, and they all pass
 too...
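 
 For reference, a rough sketch of what that placement would look like;
 this is only an illustration based on the context lines in the diff
 above (the have_reflock check around rw_exit(&exec_lock), followed by
 spawn_exec_data_release()), not the committed change:
 
 	/*
 	 * Free the exec data while we still hold the exec reference,
 	 * rather than doing it unconditionally afterwards.
 	 */
 	if (have_reflock) {
 		execve_free_data(&spawn_data->sed_exec);
 		rw_exit(&exec_lock);
 	}
 
 	/* release our refcount on the data */
 	spawn_exec_data_release(spawn_data);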
 
 Later...
 
 Greg Oster
 

