Re: deadlock on flt_noram5
YAMAMOTO Takashi <yamt%mwd.biglobe.ne.jp@localhost> wrote:
> "show uvm" might be interesting.
This time I got processes stuck in flt_noram1 instead of flt_noram5. Here is the
"show uvm" output:
db> show uvm
Current UVM status:
pagesize=4096 (0x1000), pagemask=0xfff, pageshift=12
63217 VM pages: 37312 active, 18283 inactive, 3084 wired, 1 free
pages 35865 anon, 18751 file, 3808 exec
freemin=256, free-target=341, wired-max=21072
faults=920374, traps=810647, intrs=7767403, ctxswitch=12850739
softint=16186151, syscalls=897784267, swapins=23, swapouts=40
fault counts:
noram=4, noanon=0, pgwait=0, pgrele=0
ok relocks(total)=4706(4711), anget(retrys)=311116(1693), amapcopy=96542
neighbor anon/obj pg=119132/1323748, gets(lock/unlock)=336489/3015
cases: anon=220693, anoncow=36623, obj=275901, prcopy=60586, przero=315025
daemon and swap counts:
woke=47, revs=33, scans=53717, obscans=81, anscans=33755
busy=0, freed=32656, reactivate=15379, deactivate=87720
pageouts=4244, pending=29740, nswget=1702
nswapdev=2, swpgavail=98303
swpages=98303, swpginuse=33791, swpgonly=30938, paging=1180
There are other oddities; for instance, ntpd is waiting on PUFFS, which I did not expect:
PID LID S CPU FLAGS STRUCT LWP * NAME WAIT
244 1 3 0 1000084 cb007ac0 ntpd puffsrpl
bt shows this backtrace (a small userland model of the path follows the trace). I do
not understand how ntpd ends up there, and my attempts to retrieve the file path
caused faults in ddb.
sleepq_block
cv_wait_sig
puffs_msg_wait
puffs_msg_wait2
puffs_vnop_inactive
VOP_INACTIVE
vclean
getcleanvnode
getnewvnode
ffs_vget
ffs_valloc
ufs_makeinode
ufs_create
VOP_CREATE
vn_open
sys_open
syscall
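
If I read the trace right, getnewvnode() recycled a vnode from the free list that
happened to belong to the PUFFS mount, so vclean() went through puffs_vnop_inactive()
and slept waiting for the file server. Below is a small userland program that only
models that shape; it is not kernel code, and names like reclaim_lru_vnode() and
puffs_server() are made up for the sketch. The only point is that the FFS open cannot
finish until the PUFFS server replies, so a stuck server stalls an unrelated process.

/*
 * Userland model of the recycle path in the trace above, under the
 * assumption that getcleanvnode() picked a vnode belonging to the
 * PUFFS mount.  Everything here (reclaim_lru_vnode, puffs_server) is
 * invented for the sketch; it is not kernel code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static bool reply_ready = false;

/* stand-in for puffs_msg_wait(): sleep until the server answers */
static void
msg_wait(void)
{
    pthread_mutex_lock(&mtx);
    while (!reply_ready)
        pthread_cond_wait(&cv, &mtx);   /* the "puffsrpl" sleep */
    pthread_mutex_unlock(&mtx);
}

/* stand-in for vclean()/VOP_INACTIVE on a PUFFS vnode */
static void
reclaim_lru_vnode(void)
{
    printf("recycling a PUFFS vnode, waiting for the file server\n");
    msg_wait();
}

/* stand-in for vn_open() on FFS: needs a fresh vnode first */
static void *
open_on_ffs(void *arg)
{
    (void)arg;
    reclaim_lru_vnode();    /* getnewvnode() -> getcleanvnode() */
    printf("vnode recycled, open(2) can proceed\n");
    return NULL;
}

/* stand-in for perfused/glusterfsd: answers after a delay */
static void *
puffs_server(void *arg)
{
    (void)arg;
    sleep(1);               /* a wedged server never gets here */
    pthread_mutex_lock(&mtx);
    reply_ready = true;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&mtx);
    return NULL;
}

int
main(void)
{
    pthread_t opener, server;

    pthread_create(&opener, NULL, open_on_ffs, NULL);
    pthread_create(&server, NULL, puffs_server, NULL);
    pthread_join(opener, NULL);
    pthread_join(server, NULL);
    return 0;
}

Built with cc -pthread it prints both messages; make the server thread block forever
instead of sleep(1) and the opener sleeps indefinitely, which is what ntpd looks like
in the ps output above.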
Process 0 is also in a strange state:
PID LID S CPU FLAGS STRUCT LWP * NAME WAIT
0 33 2 0 204 cac4f080 swapiod
32 3 0 204 caf1b0a0 puffsop puffsop
31 3 0 204 caf1b820 puffsop puffsop
30 3 0 204 cac4fd00 physiod physiod
29 3 0 204 c9d717c0 aiodoned aiodoned
28 3 0 284 c9d71540 ioflush puffsrpl
Backtrace for ioflush is below. It might have to do with my (not yet committed)
experiments to work around the race condition between puffs_vfsop_sync and
puffs_vfsop_getattr ("zero-filled page on VOP_PUTPAGES"). Now that the sync path uses
synchronous PUFFS operations, I wonder if there could be a deadlock between
ioflush and perfused/glusterfsd (a sketch of the cycle follows the trace).
sleepq_block
cv_wait_sig
puffs_msg_wait
puffs_vfsop_sync
sync_fsync
VOP_FSYNC
sched_sync
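
To make the cycle I have in mind concrete, here is a two-thread model. It rests on
the assumption that the server cannot answer the sync request before ioflush itself
makes progress, which is exactly what I have not verified; it is a model of the
circular wait, not the real kernel/server interaction.

/*
 * Minimal model of the suspected ioflush <-> perfused/glusterfsd
 * deadlock.  The flush_done dependency is an assumption made for the
 * sketch, not something I have observed; the point is only the
 * circular wait: each side sleeps until the other makes progress.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static bool sync_replied = false;   /* set by the "server" */
static bool flush_done = false;     /* set by the "syncer" */

/* ioflush: sends a synchronous VFS_SYNC and waits for the reply */
static void *
syncer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mtx);
    printf("ioflush: waiting for the sync reply (puffsrpl)\n");
    while (!sync_replied)
        pthread_cond_wait(&cv, &mtx);
    flush_done = true;              /* never reached */
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&mtx);
    return NULL;
}

/* perfused/glusterfsd: cannot answer until the syncer has flushed */
static void *
server(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mtx);
    printf("server: waiting for ioflush before answering\n");
    while (!flush_done)
        pthread_cond_wait(&cv, &mtx);
    sync_replied = true;            /* never reached */
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&mtx);
    return NULL;
}

int
main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, syncer, NULL);
    pthread_create(&t2, NULL, server, NULL);
    pthread_join(t1, NULL);         /* hangs: circular wait */
    pthread_join(t2, NULL);
    return 0;
}

Built with cc -pthread it prints both messages and then hangs forever, which is the
shape of the hang I am seeing.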
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
manu%netbsd.org@localhost