Current-Users archive


Re: High vm scan rate and dropped keystrokes thru X?



On Tue, Jul 27, 2021 at 06:28:39PM +1200, Lloyd Parkes wrote:
> 
> 
> On 27/07/21 12:19 am, Paul Ripke wrote:
> > On Mon, Jul 26, 2021 at 05:53:19PM +1200, Lloyd Parkes wrote:
> > > That's 12GB of RAM in use and 86MB of RAM free. Sounds pretty awful to me.
> > 
> > Sounds normal to me - I don't expect to see any free RAM unless I've just
> > - exited a large process
> > - deleted a large file with large cache footprint
> > - released a large chunk of RAM by other means (mmap, madvise, semctl, etc).
> 
> I haven't run NetBSD on a desktop for a while now, but I still think 12GB is
> a lot of memory in use. Maybe I'll get a new MacBook when they start
> shipping 32GB Apple CPU ones and then put NetBSD on my current MacBook.

There's a bunch of junk running: three Java processes totalling 3GiB, mongodb,
postgres, apache, firefox, PrusaSlicer, plus the box acts as the local network
router/proxy with all the usual daemons. I also run pkgsrc builds and NetBSD
builds on it, and it handles all of that fine.

> > A big chunk of it is in file cache, which is unsurprising when reading
> > thru a 400GiB file...
> 
> Page activity lasts 20s and at 30MB/s that means you should have 600MB of
> file data active. Add 50% for inactive pages and that's still only 900MB.
> I'm willing to bet money that zstd only reads each block of data once
> (sequentially in fact) and so it doesn't need any file data cache at all.
> File metadata is a different matter, but that probably stays active and
> there won't be much of it.

Yes, it's just cache churn due to sequential read I/O. I can cat the file
thru zstd with the same effect. I can even cat the file to /dev/null with
the same issue. Yes, the file data cache is pure cost in this case.
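
For the read-back case, having the reader tell the kernel it's done with
the pages as it goes should avoid the churn entirely. A minimal sketch,
assuming UVM honors posix_fadvise(2) hints (the program name and chunk
size are just illustrative):

/*
 * fadv-cat: cat a file to stdout while shedding its own cache
 * footprint. Sketch only; error handling is minimal.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	static char buf[1 << 20];	/* 1MiB chunks, arbitrary */
	off_t done = 0;
	ssize_t n;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: fadv-cat file\n");
		return 1;
	}
	if ((fd = open(argv[1], O_RDONLY)) == -1) {
		perror(argv[1]);
		return 1;
	}
	/* Keep read-ahead aggressive for the sequential pass. */
	(void)posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		if (write(STDOUT_FILENO, buf, (size_t)n) != n) {
			perror("write");
			return 1;
		}
		done += n;
		/* Drop everything consumed so far, so the file
		 * cache never builds up a 400GiB footprint. */
		(void)posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
	}
	close(fd);
	return 0;
}

Piping that through zstd -d instead of plain cat should, in theory, give
the same decompression with none of the cache churn.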

> I suspect that your vm.filemax is set to more memory than you have available
> for the file cache and once that happens anonymous pages start to get
> swapped out. My experience is that while anonymous pages sound unimportant,
> they are in fact the most important pages to keep in RAM. Thinking about it,
> they are the irreplaceable bits of all our running software.
> 
> Try setting vm.filemin=5 and vm.filemax=10. Really. I did it when processing
> vast amounts of files in CVS and it worked for me.

I would agree, except there's basically zero paging activity for the entire
duration. I tried this anyway, and there's no change in behaviour
whatsoever.
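
For anyone following along, those knobs are plain runtime sysctls. The
equivalent of "sysctl -w vm.filemax=10 vm.filemin=5" from C is below; a
sketch, and whether the kernel enforces filemin <= filemax on the way in
is an assumption on my part:

/* Needs root. Sets the UVM file cache bounds as percentages of RAM. */
#include <sys/param.h>
#include <sys/sysctl.h>
#include <stdio.h>

static int
set_pct(const char *name, int pct)
{
	if (sysctlbyname(name, NULL, NULL, &pct, sizeof(pct)) == -1) {
		perror(name);
		return -1;
	}
	printf("%s -> %d\n", name, pct);
	return 0;
}

int
main(void)
{
	/* Lower filemax first so filemin=5 can't exceed it. */
	if (set_pct("vm.filemax", 10) == -1)
		return 1;
	if (set_pct("vm.filemin", 5) == -1)
		return 1;
	return 0;
}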

> Out of curiosity, what are you doing with zstd? You mentioned backups. Is
> this dump or restore? dump implements its own file cache, which won't help
> with the memory burden.

I just do compressed dumps to an external drive. Doing the dump is fine,
but just reading it back leads to bad performance when the page daemon
goes nuts.

> "top -ores" will tell you what programs are using the most anonymous pages,
> which might help identify where all this memory pressure is coming from.

I'm aware of those, but there is no real memory pressure here. Normally the
page daemon scans and frees roughly the same number of pages; at some point,
for no obvious reason, it starts scanning 1M+ pages without freeing any.
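
That's visible from userland by sampling the pagedaemon counters in
vm.uvmexp2 once a second. A rough sketch; the header and field names
below match my reading of NetBSD's struct uvmexp_sysctl, so treat the
details as assumptions:

/* Print per-second deltas of pages scanned vs. pages freed. */
#include <sys/param.h>
#include <sys/sysctl.h>
#include <uvm/uvm_extern.h>
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct uvmexp_sysctl u;
	int mib[2] = { CTL_VM, VM_UVMEXP2 };
	int64_t scans = 0, freed = 0;

	for (;;) {
		size_t len = sizeof(u);

		if (sysctl(mib, 2, &u, &len, NULL, 0) == -1) {
			perror("sysctl vm.uvmexp2");
			return 1;
		}
		/* A big scan delta with a zero freed delta is exactly
		 * the pathology above. The first line shows totals. */
		printf("scanned %8" PRId64 "  freed %8" PRId64 "\n",
		    u.pdscans - scans, u.pdfreed - freed);
		scans = u.pdscans;
		freed = u.pdfreed;
		sleep(1);
	}
}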

-- 
Paul Ripke
"Great minds discuss ideas, average minds discuss events, small minds
 discuss people."
-- Disputed: Often attributed to Eleanor Roosevelt. 1948.

