tech-misc archive


ZFS volumes block size mismatch between -9 and -10?



Greetings,

I just completed a migration from -9 to -10 on a NAS I have running at home. Almost everything went well so far -- congratulations everyone, and especially releng@ :)

I have a curious issue, though, with ZFS volumes exposed to VMs as raw disks from the host, through virtio, using nodes under /dev/zvol/rdsk/

There seems to be a factor-of-8 error in the size and access reported within the guests (this applies to all rdsk zvols); to illustrate:

host# uname -a
NetBSD 10.0 (GENERIC) #0: Thu Mar 28 08:33:33 UTC 2024 mkrepro%mkrepro.NetBSD.org@localhost:/usr/src/sys/arch/amd64/compile/GENERIC amd64

host# zfs create -V 16G fileserver/t1

host# zfs get all fileserver/t1
NAME           PROPERTY              VALUE                  SOURCE
fileserver/t1  type                  volume                 -
fileserver/t1  creation              Tue Apr 23 21:27 2024  -
fileserver/t1  used                  16.5G                  -
fileserver/t1  available             1.19T                  -
fileserver/t1  referenced            34K                    -
fileserver/t1  compressratio         1.00x                  -
fileserver/t1  reservation           none                   default
fileserver/t1  volsize               16G                    local
fileserver/t1  volblocksize          8K                     -
[...]


host# /usr/pkg/bin/qemu-system-x86_64 -accel nvmm [...] \
-drive format=raw,file=/dev/zvol/rdsk/fileserver/t1,if=virtio,index=1

In guest:

guest# uname -a
Linux 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux

guest# fdisk -l
Disk /dev/vdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
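Doing the arithmetic on the numbers above (a quick sanity check of my own, in Python -- not conclusive, just consistency):

```python
# Sizes as reported by the host (zfs volsize) and the guest (fdisk -l above).
volsize = 16 * 1024**3          # 16 GiB zvol created on the host
guest_bytes = 2147483648        # bytes reported by fdisk in the guest
guest_sectors = 4194304         # 512-byte sectors reported by fdisk

# The guest sees exactly 1/8 of the real size...
assert volsize // guest_bytes == 8

# ...and 8 is precisely 4096/512: the guest's "512-byte sector" count
# matches the number of 4096-byte blocks in a 16 GiB volume.
assert guest_sectors == volsize // 4096
assert 4096 // 512 == 8
```

So the numbers at least fit a count of 4096-byte units being relabelled as 512-byte sectors somewhere along the path.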

The latest qemu (compiled either under -9 or -10) leads to the same result.

That looks like a 512 vs 4K sector "change"; however, I would like to hear from others more knowledgeable than me in filesystem matters. FWIW, here are logs for an actual 64G raw disk before and after the migration (extracted from the guest):

Apr 06 02:40:16 timemachine kernel: virtio_blk virtio2: [vdb] 134217728 512-byte logical blocks (68.7 GB/64.0 GiB)
=>
Apr 23 22:09:35 timemachine kernel: virtio_blk virtio2: [vdb] 16777216 512-byte logical blocks (8.59 GB/8.00 GiB)

Cheers,

--
Jean-Yves Migeon
jym@


