NetBSD-Bugs archive
Re: kern/57558: pgdaemon 100% busy - no scanning (ZFS case)
Unfortunately the problem with the looping pagedaemon still persists in
NetBSD 10.0.
Observed behavior:
pagedaemon runs at 100% (already at 550 minutes of CPU time and counting)
The per-cpu stats synced rate (also used for free memory determination)
is at 3398340 syncs/sec (impressive).
The kernel_arena is starved at less than 10% free (used 323982908K of
324234996K total allocated), thus the pagedaemon invokes the pooldrain
thread (as discussed before).
ZFS claims the majority of the pool memory used, but also lists much of
the ARC memory as potentially evictable.
ZFS has no interest in evicting/reclaiming data, as arc_available_memory()
returns 3225591808 bytes available: (uvm_availmem(false) -
uvmexp.freetarg) * PAGESIZE.
Environment: NetBSD 10.0_STABLE GENERIC kernel as a PVH Xen guest.
sysctl memory info
hw.physmem64 = 367001600000
hw.usermem64 = 366983368704
vmstat -s paging info
4096 bytes per page
16 page colors
86872544 pages managed
314911 pages free
2978457 pages active
1454595 pages inactive
0 pages paging
4451 pages wired
1 reserve pagedaemon pages
60 reserve kernel pages
2714934 boot kernel pages
81174685 kernel pool pages
3612763 anonymous pages
810100 cached file pages
14709 cached executable pages
3072 minimum free pages
4096 target free pages
28957514 maximum wired pages
1 swap devices
1048570 swap pages
233261 swap pages in use
102966 swap allocations
...
1 times daemon wokeup
50971 revolutions of the clock hand
233341 pages freed by daemon
1361984 pages scanned by daemon
233329 anonymous pages scanned by daemon
13 object pages scanned by daemon
707164 pages reactivated
0 pages found busy by daemon
218754 total pending pageouts
3396316 pages deactivated
104478566672 per-cpu stats synced
103570 anon pages possibly dirty
3509193 anon pages dirty
0 anon pages clean
0 file pages possibly dirty
0 file pages dirty
824809 file pages clean
...
Once uvm_availmem() falls below uvmexp.freetarg, the pagedaemon unwedges
from its tight loop, as ZFS finally gives up its stranglehold on pool
memory.
Why does ZFS still see available memory when KVA is already starving?
The free target is 4096 pages. There are 81174685 kernel pool pages
allocated (the KVA space may be bigger), but at 10% of 81174685 the
starvation threshold is far larger than the 4096 target pages ZFS
checks. This seems like a potential mismatch between the boundaries
tested in the pagedaemon and in ZFS.
Thoughts?