NetBSD-Bugs archive
kern/38927: processes getting stuck in uvm_map (cv_timedwait), hanging machine
>Number: 38927
>Category: kern
>Synopsis: processes getting stuck in uvm_map (cv_timedwait)
>Confidential: no
>Severity: serious
>Priority: medium
>Responsible: kern-bug-people
>State: open
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Tue Jun 10 04:05:00 +0000 2008
>Originator: Geoff C. Wing
>Release: NetBSD 4.99.64
>Organization:
>Environment:
System: NetBSD g.primenet.com.au 4.99.64 NetBSD 4.99.64 (G) #0: Tue Jun 10
10:32:35 EST 2008
gcw%g.primenet.com.au@localhost:/usr/netbsd/src/sys/arch/i386/compile/G i386
Architecture: i386
Machine: i386
>Description:
On an MP i386 (Core2Duo), a heavily loaded machine will
quickly end up with multiple processes stuck in uvm_map.
This has been happening for around a month now.
Running /etc/security reliably triggers it for me, with "find" getting
stuck (first?) and then most other things waiting.
A backtrace of "find" on the running system:
#0 0xc02c789e in mi_switch (l=0xd142bc40) at ../../../../kern/kern_synch.c:745
#1 0xc02c408d in sleepq_block (timo=100, catch=false) at
../../../../kern/kern_sleepq.c:254
#2 0xc02a274a in cv_timedwait (cv=0xc05befb4, mtx=0xc05befb0, timo=100) at
../../../../kern/kern_condvar.c:243
#3 0xc02694de in uvm_map_prepare (map=0xc05befa0, start=3221225472,
size=131072, uobj=0x0, uoffset=-1, align=131072, flags=25171751,
args=0xd12cf7c0)
at ../../../../uvm/uvm_map.c:1185
#4 0xc026d4c6 in uvm_map (map=0xc05befa0, startp=0xd12cf824, size=131072,
uobj=0x0, uoffset=-1, align=131072, flags=25171751)
at ../../../../uvm/uvm_map.c:1084
#5 0xc0264a64 in km_vacache_alloc (pp=0xc05bf068, flags=1) at
../../../../uvm/uvm_km.c:186
#6 0xc02de6b6 in pool_grow (pp=0xc05bf068, flags=1) at
../../../../kern/subr_pool.c:2816
#7 0xc02de004 in pool_get (pp=0xc05bf068, flags=1) at
../../../../kern/subr_pool.c:1072
#8 0xc0265483 in uvm_km_alloc_poolpage_cache (map=0xc05befa0, waitok=true) at
../../../../uvm/uvm_km.c:695
#9 0xc02de6b6 in pool_grow (pp=0xcc045000, flags=1) at
../../../../kern/subr_pool.c:2816
#10 0xc02de004 in pool_get (pp=0xcc045000, flags=1) at
../../../../kern/subr_pool.c:1072
#11 0xc02e0426 in pool_cache_get_slow (cc=<value optimized out>, s=0xd12cf9a8,
objectp=0xd12cf9ac, pap=0x0, flags=1) at ../../../../kern/subr_pool.c:2456
#12 0xc02e0c23 in pool_cache_get_paddr (pc=0xcc045000, flags=1, pap=0x0) at
../../../../kern/subr_pool.c:2538
#13 0xc0315f53 in cache_enter (dvp=0xffbfe3a4, vp=0xd10112ec, cnp=0xd12cfc28)
at ../../../../kern/vfs_cache.c:585
#14 0xc025546f in ufs_lookup (v=0xd12cfae4) at
../../../../ufs/ufs/ufs_lookup.c:637
#15 0xc0328414 in VOP_LOOKUP (dvp=0xffbfe3a4, vpp=0xd12cfc14, cnp=0xd12cfc28)
at ../../../../kern/vnode_if.c:131
#16 0xc0318364 in lookup (ndp=0xd12cfc00) at ../../../../kern/vfs_lookup.c:696
#17 0xc0318b2f in namei (ndp=0xd12cfc00) at ../../../../kern/vfs_lookup.c:332
#18 0xc0320768 in do_sys_stat (path=0xbb92be8c <Error reading address
0xbb92be8c: Bad address>, nd_flags=0, sb=0xd12cfc70)
at ../../../../kern/vfs_syscalls.c:2443
#19 0xc03207b9 in sys___lstat30 (l=0xd142bc40, uap=0xd12cfd00,
retval=0xd12cfd28) at ../../../../kern/vfs_syscalls.c:2485
#20 0xc0374c32 in syscall (frame=0xd12cfd48) at
../../../../arch/i386/i386/syscall.c:102
#21 0xc010055d in syscall1 ()
Similarly, a backtrace of one of my shells:
#0 0xc02c789e in mi_switch (l=0xd142b060) at ../../../../kern/kern_synch.c:745
#1 0xc02c408d in sleepq_block (timo=100, catch=false) at
../../../../kern/kern_sleepq.c:254
#2 0xc02a274a in cv_timedwait (cv=0xc05befb4, mtx=0xc05befb0, timo=100) at
../../../../kern/kern_condvar.c:243
#3 0xc02694de in uvm_map_prepare (map=0xc05befa0, start=3221225472,
size=131072, uobj=0x0, uoffset=-1, align=131072, flags=25171751,
args=0xd144f980)
at ../../../../uvm/uvm_map.c:1185
#4 0xc026d4c6 in uvm_map (map=0xc05befa0, startp=0xd144f9e4, size=131072,
uobj=0x0, uoffset=-1, align=131072, flags=25171751)
at ../../../../uvm/uvm_map.c:1084
#5 0xc0264a64 in km_vacache_alloc (pp=0xc05bf068, flags=1) at
../../../../uvm/uvm_km.c:186
#6 0xc02de6b6 in pool_grow (pp=0xc05bf068, flags=1) at
../../../../kern/subr_pool.c:2816
#7 0xc02de004 in pool_get (pp=0xc05bf068, flags=1) at
../../../../kern/subr_pool.c:1072
#8 0xc0265483 in uvm_km_alloc_poolpage_cache (map=0xc05befa0, waitok=true) at
../../../../uvm/uvm_km.c:695
#9 0xc02de6b6 in pool_grow (pp=0xc05c5180, flags=1) at
../../../../kern/subr_pool.c:2816
#10 0xc02de004 in pool_get (pp=0xc05c5180, flags=1) at
../../../../kern/subr_pool.c:1072
#11 0xc02e0426 in pool_cache_get_slow (cc=<value optimized out>, s=0xd144fb68,
objectp=0xd144fb6c, pap=0x0, flags=1) at ../../../../kern/subr_pool.c:2456
#12 0xc02e0c23 in pool_cache_get_paddr (pc=0xc05c5180, flags=1, pap=0x0) at
../../../../kern/subr_pool.c:2538
#13 0xc035f150 in pmap_create () at ../../../../arch/x86/x86/pmap.c:2169
#14 0xc0267ac5 in uvmspace_init (vm=0xd0fcd25c, pmap=0x0, vmin=0,
vmax=3217031168) at ../../../../uvm/uvm_map.c:3952
#15 0xc0267af7 in uvmspace_alloc (vmin=0, vmax=3217031168) at
../../../../uvm/uvm_map.c:3927
#16 0xc026ae74 in uvmspace_fork (vm1=0xd11167d8) at
../../../../uvm/uvm_map.c:4169
#17 0xc0263da1 in uvm_proc_fork (p1=0xd142fa4c, p2=0xd4650018, shared=false) at
../../../../uvm/uvm_glue.c:208
#18 0xc02ac227 in fork1 (l1=0xd142b060, flags=<value optimized out>,
exitsig=20, stack=0x0, stacksize=0, func=0, arg=0x0, retval=0xd144fd28,
rnewprocp=0x0)
at ../../../../kern/kern_fork.c:426
#19 0xc02ac881 in sys_fork (l=0xd142b060, v=0xd144fd00, retval=0xd144fd28) at
../../../../kern/kern_fork.c:110
#20 0xc0374c32 in syscall (frame=0xd144fd48) at
../../../../arch/i386/i386/syscall.c:102
#21 0xc010055d in syscall1 ()
Which, if any, of these structures would be useful to examine?
>How-To-Repeat:
.
>Fix:
?