Hi,

On 20.09.2022 at 08:27, RVP wrote:
> On Tue, 20 Sep 2022, Matthias Petermann wrote:
>
>> I think I had answered this earlier (but I'm not quite sure) - the
>> problem only occurs when I write the data obtained with "zfs send" to
>> a local file system (e.g. the USB HDD). If I send the data to a remote
>> system with netcat instead, the "file usage" remains within the green
>> range.
>> [...]
>> This raises the question for me: can I somehow limit this kind of
>> memory use on a process basis?
>
> Try piping to dd with direct I/O:
>
> ```
> ... | dd bs=10m oflag=direct of=foo.file
> ```
>
> Does that help?
In theory this looked like exactly what I was looking for... but unfortunately I observe the same effect as before with the simple redirection: "File" grows to its maximum and the system starts to swap.
I use this command line:

```
zfs send -R tank/vol@backup | dd bs=10m oflag=creat,direct of=/mnt/vhost-tank-vol.zfs
```

(I had to add the `creat` flag because the file did not exist before.)

The memory remains occupied even if I run a "sync" in between. However, it is released immediately when I either a) delete the file or b) unmount the file system.
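For what it's worth, I read that the pipe-plus-O_DIRECT combination has a known pitfall in GNU dd: reads from a pipe can return short blocks, and O_DIRECT writes generally have to be block-aligned, so the GNU idiom adds `iflag=fullblock`. As far as I know NetBSD's dd takes open(2) flags for `iflag`/`oflag` and has no `fullblock` operand, so the following sketch is GNU-dd-specific, and the file name and sizes are made up:

```shell
# Stand-in for the "zfs send" stream: 8 MiB of zeroes from /dev/zero.
# iflag=fullblock makes dd re-read until each 1 MiB block is full, so
# every O_DIRECT write is block-aligned (GNU dd only). O_DIRECT is not
# supported on all file systems (e.g. tmpfs), hence the fallback message.
head -c 8388608 /dev/zero \
    | dd bs=1M iflag=fullblock oflag=direct of=direct-test.bin 2>/dev/null \
    || echo "O_DIRECT not supported on this file system" >&2
ls -l direct-test.bin
```

Without `fullblock`, a short read from the pipe can turn into a short, unaligned write that O_DIRECT rejects, so the flag (or its absence) may matter here.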
Have I used the direct flag correctly?

Kind regards
Matthias