Hello,

On 21.05.23 16:01, Mathew, Cherry G.* wrote:
> Hello,
>
> I'm wondering if there are any nvmm(4) users out there - I'd like to understand what your user experience is - especially for multiple VMs running simultaneously. Specifically, I'd like to understand whether nvmm-based qemu VMs show interactive "jitter" or other scheduling-related effects.
>
> I tried 10.0_BETA with nvmm and it was unusable for the 3 guests that I migrated from Xen, so I had to fall back. Just looking for experiences from any users of nvmm(4) on NetBSD (any version, including -current is fine).
>
> Many Thanks,
I would like to contribute a small testimonial as well. I came across Qemu/NVMM more or less out of necessity, as I had been struggling for some time to set up a working Xen configuration on newer NUCs (UEFI only). The issue I encountered was with the graphics output on the virtualization host: the screen remained black after switching from Xen to the NetBSD DOM0. Since the device at my disposal had neither a serial console nor a management engine with Serial-over-LAN capability, I had to look for alternatives and therefore got somewhat involved in this topic.
I'm using the combination of NetBSD 9.3_STABLE + Qemu/NVMM on small low-end servers (Intel NUC7CJYHN), primarily for classic virtualization, which involves running multiple independent virtual servers on a physical server. The setup I have come up with works stably and with acceptable performance.
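As a rough illustration, a single guest in this kind of setup can be launched along the following lines. This is a sketch, not a verbatim copy of my configuration: the ZVOL path, memory size, and tap interface names are placeholders to be adjusted.

```sh
# Start one guest with qemu's NVMM accelerator, backed by a ZVOL.
# Pool/volume name (tank/vm/net) and tap0 are placeholders.
qemu-system-x86_64 \
    -accel nvmm \
    -m 1024 -smp 1 \
    -drive file=/dev/zvol/rdsk/tank/vm/net,format=raw,if=virtio \
    -netdev tap,id=n0,ifname=tap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=n0 \
    -display none -serial mon:stdio
```

The virtio disk and network devices keep the guest-side overhead low; the tap interface is bridged to the physical NIC on the host.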
Scenario: I have a small FFS root filesystem on the built-in SSD, and the backing store for the VMs is provided through ZFS ZVOLs. The ZVOLs are replicated alternately (full and incremental) every night to an external USB hard drive.
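The nightly replication boils down to snapshot-based zfs send/receive. A sketch, assuming the ZVOLs live under tank/vm and the USB disk carries a pool named backup (both names are placeholders):

```sh
# Snapshot all VM volumes with today's date.
SNAP=$(date +%Y%m%d)
zfs snapshot -r tank/vm@${SNAP}

# Full replication (e.g. on the first night of the cycle):
zfs send -R tank/vm@${SNAP} | zfs receive -F backup/vm

# Incremental replication against the previous night's snapshot,
# whose name is assumed to be in $PREV:
zfs send -R -i tank/vm@${PREV} tank/vm@${SNAP} | zfs receive backup/vm
```

Since ZVOLs are block devices, this replicates the guests' disks wholesale without any cooperation from the guests themselves.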
There are a total of 5 VMs:

- net (DHCP server, NFS and SMB server, DNS server)
- app (Apache/PHP-FPM/PostgreSQL hosting some low-traffic web apps)
- comm (ZNC)
- iot (Grafana and InfluxDB, collecting data from two smart meters every 10 seconds)
- mail (Postfix/Cyrus IMAP for a handful of mailboxes)

Most of the time, the host's CPU usage under this "load" is around 20%. The provided services consistently respond quickly.
However, I have noticed that, depending on the load, the clocks of the VMs can deviate significantly. This can be compensated for by using a higher HZ in the host kernel (HZ=1000) and a tolerant ntpd configuration in the guests. I have also tried various settings with schedctl, especially with the FIFO scheduler, which helped in certain scenarios with high I/O load. However, this came at the expense of stability.
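Concretely, the knobs involved look roughly like this. The ntp.conf line and the priority value are illustrative examples of a "tolerant" setup, not a verbatim copy of my configuration:

```sh
# Host kernel config file: raise the clock interrupt frequency.
#   options HZ=1000

# Guest /etc/ntp.conf: keep ntpd running despite large offsets,
# instead of aborting (the usual VM-friendly tinker setting):
#   tinker panic 0

# Host: move a qemu process into the FIFO class with schedctl(8);
# the priority value and pid are placeholders.
schedctl -C SCHED_FIFO -P 63 -p <qemu-pid>
```

The higher host HZ reduces the granularity of timer delivery to the guests, which is what keeps their clocks from drifting as far between corrections.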
Furthermore, in my system configuration, granting a guest more than one CPU core does not seem to provide any advantage. Particularly in the VMs where I am concerned about performance (net with Samba/NFS), my impression is that allocating more CPU cores actually decreases performance even further. I should measure this more precisely someday...
Apart from that, I have set up SpeedStep on the host with a short warm-up and a relatively long cool-down phase, so the system is very energy-efficient when idle (power consumption around 3.5W).
I cannot assess the usability of keyboard/mouse/graphics output, as I install the guest systems using Qemu's virtual serial console.
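For anyone who wants to reproduce the serial-console installs: booting the installer with the guest's serial port attached to the terminal works along these lines (a sketch; the ISO filename is a placeholder):

```sh
# Install over the serial console -- no graphical display needed.
qemu-system-x86_64 -accel nvmm -m 512 \
    -cdrom NetBSD-9.3-amd64.iso \
    -display none -serial mon:stdio
# At the NetBSD boot prompt, redirect the console to the serial port:
#   consdev com0
```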
If you have specific questions or need assistance, feel free to reach out. I have documented everything quite well, as I intended to contribute it to the wiki someday. By the way, I am currently working on a second identical system where I plan to test the combination of NetBSD 10.0_BETA and Xen 4.15.
Kind regards,
Matthias