Re: kern/57558: pgdaemon 100% busy - no scanning (ZFS case)
The following reply was made to PR kern/57558; it has been noted by GNATS.
From: Frank Kardel <kardel%netbsd.org@localhost>
To: gnats-bugs%netbsd.org@localhost
Cc:
Subject: Re: kern/57558: pgdaemon 100% busy - no scanning (ZFS case)
Date: Sun, 5 May 2024 09:47:51 +0200
Good to have those knobs.
In your example you have 10922 target free pages, which is ~45 MB. On a
system with more than 450 MB of memory the problem would occur again.
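For the arithmetic, a minimal sketch (the page size comes from the
vmstat output below; the 10% relation is the page daemon condition
discussed in this PR):

#include <stdio.h>

int
main(void)
{
	const double page_size = 4096;    /* bytes per page */
	const double free_target = 10922; /* target free pages from the example */
	double bytes = free_target * page_size;

	/*
	 * ~44.7 MB target; if the page daemon only keeps ZFS in check
	 * while the free target is >= 10% of memory, that target only
	 * covers a ~447 MB system.
	 */
	printf("free target: %.1f MB\n", bytes / 1e6);
	printf("safe up to:  %.1f MB of memory\n", bytes * 10 / 1e6);
	return 0;
}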
On our system I see (vmstat -s):
4096 bytes per page
16 page colors
86872544 pages managed
4266905 pages free
18512074 pages active
4929394 pages inactive
0 pages paging
4429 pages wired
1 reserve pagedaemon pages
60 reserve kernel pages
2714934 boot kernel pages
58193232 kernel pool pages
6295559 anonymous pages
17136740 cached file pages
13700 cached executable pages
3072 minimum free pages
4096 target free pages
28957514 maximum wired pages
1 swap devices
1048575 swap pages
201179 swap pages in use
2458785 swap allocations
Thus:
356 GB managed memory
17 GB free
26 GB anonymous memory
70 GB file pages
16 MB free target
238 GB allocated to kernel pools
This situation is with the 10% correction in ZFS and has survived
(without stalls or allocation failures) creating a 1.6 TB database in
ZFS and two parallel, multi-hour production-like runs on that database.
Without the fix the system would have stalled during the load.
Your setup would be safe with the current implementation if
zfs_arc_free_target (= uvmexp.freetarg) were at 10% of KVA memory.
This is usually not the case.
I don't know how much memory your system has, and I did not check how
uvmexp.freetarg is calculated at startup or adjusted thereafter. The
fact is that even 16 MB seems sufficient on large-memory systems if ZFS
is kept from allocating pool memory beyond 90%. ZFS must start
reclaiming when less than 10% of KVA is free, due to the page daemon
logic, roughly as in the sketch below.
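A minimal sketch of that trigger (the helper name is hypothetical, and
it assumes the pool pages come from kmem_arena, so vmem_size(9) can
express the 10% condition):

#include <sys/types.h>
#include <sys/kmem.h>
#include <sys/vmem.h>

/*
 * Hypothetical helper: true once less than 10% of the kernel memory
 * arena is free, i.e. the point where ZFS must start reclaiming ARC
 * buffers before the page daemon spins without making progress.
 */
static bool
kva_reclaim_needed(void)
{
	vmem_size_t total = vmem_size(kmem_arena, VMEM_ALLOC | VMEM_FREE);
	vmem_size_t freesz = vmem_size(kmem_arena, VMEM_FREE);

	return freesz < total / 10;
}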
I see your patch also contains my proposed fix. So when testing this
patch we should be safe from the KVA starvation issue, and changing
zfs_arc_free_target would only have an additional effect when it is set
higher than 10% of KVA memory. Being able to set this value has the
benefit that we can limit ZFS pool usage even more. Maybe we should
provide a way to specify a percentage of KVA or an absolute allocation
value, as in the sketch below.
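One possible interpretation for such a knob (purely a sketch; the sign
convention and function name are assumptions, not an existing
interface):

#include <sys/types.h>

/*
 * Hypothetical tunable semantics: a positive value is an absolute
 * byte limit for pool usage, a negative value is a percentage of
 * KVA (e.g. -10 means 10% of KVA).
 */
static uint64_t
pool_limit_bytes(int64_t tunable, uint64_t kva_bytes)
{
	if (tunable < 0)
		return kva_bytes / 100 * (uint64_t)(-tunable);
	return (uint64_t)tunable;
}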
-Frank