NetBSD-Users archive


How to limit amount of virtual memory used for files (was: Re: Tuning ZFS memory usage on NetBSD - call for advice)



Hello all,

On 31.08.2022 at 21:57, Lloyd Parkes wrote:
It might not be ZFS related. But it could be.

Someone else reported excessive, ongoing, increasing "File" usage a while back and I was somewhat dismissive because they were running a truckload of apps at the same time (not in VMs).

I did manage to reproduce his problem on an empty non-ZFS NetBSD system, so there is definitely something going on where "File" pages are not getting reclaimed when there is pressure on the memory system.

I haven't got around to looking into it any deeper though.

BTW the test was to copy a single large file (>1TB?) from SATA storage to USB storage. Since the file is held open for the duration of the copy (I used dd IIRC), this might end up exercising many of the same code paths as a VM accessing a disk image.

Cheers,
Lloyd


I think I already answered this earlier (but I'm not quite sure): the problem only occurs when I write the data produced by "zfs send" to a local file system (e.g. the USB HDD). If I send the data to a remote system with netcat instead, the "File" usage stays within normal limits. I can therefore confirm your test case and am now fairly sure that ZFS is not the culprit here.
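
For reference, the two cases look roughly like this; the dataset, snapshot and host names below are made up, and the exact netcat invocation depends on which variant is installed:

```
# Case 1: writing the stream to a file on the local (USB) file system.
# This is the case where the "File" figure keeps growing.
vhost$ zfs send tank/data@backup > /mnt/usbhdd/backup.zfs

# Case 2: piping the stream to a remote host with netcat.
# Here the "File" usage stays within normal limits.
vhost$ zfs send tank/data@backup | nc remotehost 9999
```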

In general, I have found relatively little about what exactly the memory reported under "File" consists of. The man page for top(1) does not contain any information on this. I got a rough idea from [1], so I'm going to make a few assumptions below. If anyone could confirm or correct these, I would be very grateful.

Based on that, the memory shown under "File" would be file-backed pages, i.e. regions of files from the file system that are mapped into main memory. As a consequence, my massive write to a single large file probably means that the data is first "parked" in memory pages and only gradually flushed to the backing storage (the hard disk). Since "zfs send" can read the data from the SATA SSD much faster than it can be written to the slow USB HDD, main memory is used to the maximum for the duration of the process.
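
This is easy to watch while such a copy is running; a minimal sketch (paths made up, untested as written):

```
# start a large sequential copy onto the slow USB disk
vhost$ dd if=/data/big.img of=/mnt/usbhdd/big.img bs=1m &

# print top's memory summary every five seconds and watch the "File"
# figure climb while "Free" shrinks
vhost$ while sleep 5; do top -b | grep '^Memory'; done
```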

This raises the question for me: can I somehow limit this kind of memory use on a per-process basis? Could ulimit help here?

```
vhost$ ulimit -a
time(cpu-seconds)    unlimited
file(blocks)         unlimited
coredump(blocks)     unlimited
data(kbytes)         262144
stack(kbytes)        4096
lockedmem(kbytes)    2565180
memory(kbytes)       7695540
nofiles(descriptors) 1024
processes            1024
threads              1024
vmemory(kbytes)      unlimited
sbsize(bytes)        unlimited
vhost$
```

Unfortunately, I have not found any more detailed description of the parameters above. At least "file(blocks)" sounds promising...
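
As far as I understand it, these limits only apply to the process's own address space and files, not to the kernel's file cache, so they may well not help at all. The only related knobs I have found so far are the system-wide UVM sysctls vm.filemin and vm.filemax, which, if I read the documentation correctly, bound the share of physical memory that file-backed pages may occupy before the pagedaemon starts reclaiming them. They are not per-process, and the value below is just a number picked for illustration, but perhaps someone can confirm whether this is the right direction:

```
# show the current UVM targets for file-backed pages (percent of RAM)
vhost$ sysctl vm.filemin vm.filemax

# as root: make the pagedaemon start reclaiming file pages earlier
# (30 is only an example value, not a recommendation)
vhost$ sysctl -w vm.filemax=30
```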


Kind regards
Matthias

[1] https://www.netbsd.org/docs/internals/en/chap-memory.html

