Hello Brian,

On 17.08.22 20:51, Brian Buhrow wrote:
> hello.  If you want to use zfs for your storage, which I strongly
> recommend, lose the zvols and use flat files inside zfs itself.  I think
> you'll find your storage performance goes up by orders of magnitude.  I
> struggled with this on FreeBSD for over a year before I found the myriad
> of tickets on google regarding the terrible performance of zvols.  It's
> a real shame, because zvols are such a tidy way to manage virtual
> servers.  However, the performance penalty is just too big to ignore.
> -thanks
> -Brian
Thank you for your suggestion. I have researched the ZVOL vs. QCOW2 discussion. Unfortunately, I found nothing relating to NetBSD, only material on Linux and KVM. What I did find attributes at least a slight performance advantage to ZVOLs. The reason people ultimately settle on QCOW2 seems to be mainly that the VM can be paused when the underlying storage fills up, instead of crashing as it would on a ZVOL. However, that situation can also be prevented with monitoring and regular snapshots.
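To make the monitoring idea concrete, here is a minimal sketch of the kind of check I mean, run periodically from cron. The dataset name "tank/vms" and the 10 GiB threshold are placeholders, not my actual setup:

  #!/bin/sh
  # Warn before the dataset backing the VM images runs full,
  # and take a regular snapshot as mentioned above.
  DATASET=tank/vms
  THRESHOLD=$((10 * 1024 * 1024 * 1024))   # 10 GiB in bytes

  # -Hp prints an exact, script-friendly byte value
  AVAIL=$(zfs get -Hp -o value available "$DATASET")

  if [ "$AVAIL" -lt "$THRESHOLD" ]; then
      echo "only $AVAIL bytes left on $DATASET" | mail -s "ZFS space low" root
  fi

  zfs snapshot "$DATASET@$(date +%Y%m%d-%H%M%S)"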
Nevertheless, I gave it a practical try and rebuilt the test scenario I described, this time using QCOW2 files that all live in one and the same ZFS dataset. The result, however, is almost identical.
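Sketched roughly, the test setup looks like this (the dataset, file names, sizes, and the -accel flag are illustrative assumptions, not a record of my exact commands):

  # one ZFS dataset holds all the VM images
  zfs create tank/vms
  qemu-img create -f qcow2 /tank/vms/vm1.qcow2 20G

  # start a VM; the two runs below differ only in -smp
  qemu-system-x86_64 -accel nvmm -smp 1 -m 2048 \
      -drive file=/tank/vms/vm1.qcow2,format=qcow2,if=virtio
  qemu-system-x86_64 -accel nvmm -smp 2 -m 2048 \
      -drive file=/tank/vms/vm1.qcow2,format=qcow2,if=virtio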
If I give the Qemu processes only one core via the -smp parameter, I measure very good I/O bandwidth on the host. Depending on the number of running VMs it even increases significantly, so the limiting factor here seems to be only the single-thread performance of a CPU core:
- VM 1 with 1 SMP core: ~200 MByte/s
- + VM 2 with 1 SMP core: ~300 MByte/s
- + VM 3 with 1 SMP core: ~500 MByte/s

As with my first test, performance is dramatically worse when I give each VM 2 cores instead of 1:

- VM 1 with 2 SMP cores: ~30...40 MByte/s
- + VM 2 with 2 SMP cores: < 1 MByte/s
- + VM 3 with 2 SMP cores: < 1 MByte/s

Is there any logical explanation for this drastic drop in performance?

Kind regards
Matthias