NetBSD-Users archive


Re: Boot VM from a zvol using qemu and nvmm?



On Sat, 14 Sept 2024 at 08:57, Michael van Elst <mlelstv%serpens.de@localhost> wrote:
>
> riz%tastylime.net@localhost (Jeff Rizzo) writes:
>
> >data over is to zfs send the zvol.  However, a naive setting of
>
> >-drive file=/dev/zvol/rdsk/tank/volumes/my-zvol
>
> >in place of what (on my other VMs) is
>
> >-drive file=/tank/volumes/my-disk.qcow2
>
> >Doesn't seem to work, or at least qemu doesn't recognize it as having a
> >bootable image.
>
> >Should this work?
>
>
> On NetBSD it does not.

Sorry, I have to disagree. Each and every nvmm guest I have running
gets started like:

/usr/pkg/bin/qemu-system-x86_64 \
        -m 4096M \
        -k en-gb \
        -accel nvmm \
        -vnc :6 \
        -drive format=raw,file=/dev/zvol/rdsk/pail/omnios \
        -cdrom /mnt/ISOs/omnios-r151044k.iso \
        -net tap,fd=4 4<>/dev/tap1 \
        -net nic

or something like that, and I have never had any problems. I have run a
gaggle of clients - W10, Server 2019, *BSD, and obviously OmniOS -
reasonably well, and they were actually usable, even over VNC through a
powerline adapter from the machine upstairs. I have since
decommissioned that box, along with a few other desktops, and replaced
them with a single mini-PC, so I no longer have NetBSD booting
regularly on real hardware and can't test nvmm any more, though.

Notice the -drive format=raw bit - you do not have it in your invocation.
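Untested from here, but plugging in the path from your original mail,
that would be something like:

        -drive format=raw,file=/dev/zvol/rdsk/tank/volumes/my-zvol

with the rest of your command line unchanged.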
>
> The zvol:
>
> NAME           USED  AVAIL  REFER  MOUNTPOINT
> tank          4.13G  1.87T    88K  /tank
> tank/testvol  4.13G  1.88T    56K  -
>
> How qemu sees it:
>
> d1 (#block386): /dev/zvol/rdsk/tank/testvol (raw)
>     Attached to:      /machine/peripheral-anon/device[2]/virtio-backend
>     Cache mode:       writeback
>
> Images:
> image: /dev/zvol/rdsk/tank/testvol
> file format: raw
> virtual size: 512 MiB (536870912 bytes)
> disk size: 0 B
>
> How the guest sees it:
>
> [   1.0259062] ld1: 512 MB, 1040 cyl, 16 head, 63 sec, 512 bytes/sect x 1048576 sectors
>
>
> qemu has NetBSD support upstream that uses DIOCGWEDGEINFO, and that
> returns:
>
> % dkctl /dev/zvol/rdsk/tank/testvol getwedgeinfo
> tank/testvol at ZFS:
> tank/testvol: 1048576 blocks at 0, type: ffs
>
>
> Our zvol code uses:
>
>                 dkw->dkw_size = dg->dg_secperunit;
>
> where this is derived from:
>
>                 dg->dg_secsize = secsize;
>                 dg->dg_secperunit = volsize / secsize;
>
> with
>
>         secsize = MAX(DEV_BSIZE, 1U << spa->spa_max_ashift);
>
> and volsize being the size of the disk in bytes.
>
> So ZFS and NetBSD think that the volume has 1M blocks of 4k size
> but qemu:
>
> static int64_t raw_getlength(BlockDriverState *bs)
> ...
>         if (ioctl(fd, DIOCGWEDGEINFO, &dkw) != -1) {
>             return dkw.dkw_size * 512;
>
>
> assumes that a wedge always reports its size in 512-byte sectors
> and so miscalculates the volume size.
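If I follow the arithmetic: the numbers above say 4k blocks (so
ashift=12) and 1M of them, i.e. a 4 GiB volsize (which also matches
the ~4.13G USED shown by zfs list), so:

        secsize       = MAX(512, 1 << 12)   = 4096
        dg_secperunit = 4294967296 / 4096   = 1048576

That is exactly the 1048576 blocks dkctl reports, but qemu then does

        1048576 * 512 = 536870912

which is the bogus 512 MiB "virtual size" above - an 8x underestimate.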


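For what it's worth, the obvious direction for a qemu-side fix - just a
sketch, untested, and assuming both that dkw_size is really meant to be
in native-sector units (as the zvol code above implies) and that
DIOCGSECTORSIZE answers on a zvol - would be to ask the device for its
sector size instead of hardcoding 512:

        if (ioctl(fd, DIOCGWEDGEINFO, &dkw) != -1) {
            u_int secsize;

            /* dkw_size counts device sectors, which need not be
             * 512 bytes on a zvol; ask rather than assume. */
            if (ioctl(fd, DIOCGSECTORSIZE, &secsize) == -1)
                secsize = DEV_BSIZE;    /* old behaviour as fallback */
            return (int64_t)dkw.dkw_size * secsize;
        }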