
Re: Qemu storage performance drops when smp > 1 (NetBSD 9.3 + Qemu/nvmm + ZVOL)



Hi,

On 18.08.22 09:10, B. Atticus Grobe wrote:
> ZFS (at least on Solaris and FreeBSD) will use any uncommitted RAM as an
> I/O buffer, which likely explains why it was keeping up with the single
> core runs, pushing everything to RAM instead of to disk. I would expect
> if you push enough data to fill that buffer up, you'll see an equivalent
> drop in write speed.


This does not seem to be the case, at least on NetBSD 9.3. I just changed my test case so that each VM uses only one core, again with ZVOLs as storage. I achieve stable, high throughput, analogous to my first test. The host's free memory (as shown by top) barely decreases: of 8192 MB total, more than 5000 MB remain free, and that figure does not drop significantly during the test runs. Mixed workloads also hardly degrade performance. I ran the following in parallel in the three VMs:

vm1: dd if=/dev/zero of=test.img bs=4m
vm2: extraction of pkgsrc.tar.gz
vm3: pkgsrc build of lang/go118
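To the buffer-filling point quoted above: a bounded variant of the dd run that writes more data than the host's 8192 MB of RAM and prints the achieved throughput at the end would look like this (the count here is chosen for illustration; my runs above were unbounded):

vm1: dd if=/dev/zero of=test.img bs=4m count=4096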

For my "low end" system, I draw these conclusions for the time being:

 - each VM should use only one SMP core at a time
 - ZVOLs are no problem and offer better performance than QCOW2 on ZFS or FFS (a command sketch follows below)
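For reference, a VM in this setup corresponds roughly to the following; the pool and volume names are placeholders, not my actual configuration:

# create a 20 GB ZVOL as the guest disk
zfs create -V 20G tank/vm1
# one virtual core per VM, raw ZVOL attached via virtio
qemu-system-x86_64 -accel nvmm -smp 1 -m 2048 \
    -drive file=/dev/zvol/rdsk/tank/vm1,if=virtio,format=raw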

It would be interesting to test a system with a much higher number of physical cores, e.g. a VM with 2 virtual cores on an 8-core host, to see what that means for I/O performance. Unfortunately I don't have such a system immediately available, but I could have a look in the attic: an old AMD FX-8350 that I retired months ago to save energy could be suitable.

Kind regards
Matthias



