Current-Users archive
Re: File sharing over virtio-9p
On Tue, Mar 25, 2025 at 10:35 AM Greg A. Woods <woods%planix.ca@localhost> wrote:
>
> At Thu, 24 Oct 2019 13:32:59 +0900, Ryota Ozaki <ozaki.ryota%gmail.com@localhost> wrote:
> Subject: Re: File sharing over virtio-9p
> >
> > > A NetBSD guest can mount the exported directory with mount_9p.
> > >
> > > mount_9p -cu /dev/vio9p0 /mnt/9p
>
> So I finally got a chance to try this the other day after uncommenting
> the config in GENERIC:
>
> diff --git a/sys/arch/amd64/conf/GENERIC b/sys/arch/amd64/conf/GENERIC
> index b9d864680b40..ea0770936914 100644
> --- a/sys/arch/amd64/conf/GENERIC
> +++ b/sys/arch/amd64/conf/GENERIC
> @@ -1133,7 +1133,7 @@ viocon* at virtio? # Virtio serial device
> vioif* at virtio? # Virtio network device
> viornd* at virtio? # Virtio entropy device
> vioscsi* at virtio? # Virtio SCSI device
> -#vio9p* at virtio? # Virtio 9P device
> +vio9p* at virtio? # Virtio 9P device
>
> # Hyper-V devices
> vmbus* at acpi? # Hyper-V VMBus
>
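> The rebuild itself was the usual build.sh cross-build from macOS
> (the object directory path below is just my local convention):
>
> ./build.sh -U -m amd64 -O ../obj kernel=GENERIC
>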
>
> It seemed to work wonderfully, and seemed a bit faster than a QEMU USB
> umass block device (e.g. an emulated CD-ROM or USB stick) in the
> scenario I tried it in, which was macOS with UTM (and QEMU doing the
> I/O), with the shared directory in an APFS filesystem on a fast SSD.
>
> [ 1.053094] virtio4 at pci0 dev 8 function 0
> [ 1.053094] virtio4: 9P transport device (id 9, rev. 0x00)
> [ 1.053094] vio9p0 at virtio4: features: 0x10000001<INDIRECT_DESC,MOUNT_TAG>
> [ 1.053094] virtio4: allocated 24576 byte for virtqueue 0 for vio9p, size 128
> [ 1.053094] virtio4: using 16384 byte (1024 entries) indirect descriptors
> [ 1.053094] vio9p0: tagged as share
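>
> For reference, the host-side export is just QEMU's stock virtfs
> option; the path below is illustrative, since UTM generates the real
> arguments itself:
>
> qemu-system-x86_64 ... \
>     -virtfs local,path=/Users/me/share,mount_tag=share,security_model=mapped-xattr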
>
>
> I used the mount to do an upgrade of the VM from the release just built
> on macOS. (As an aside, it didn't go as fast as I thought it should,
> and 'systat vmstat' showed it wasn't keeping the devices fully
> occupied.)
vio9p is not optimized for performance yet; for example, it serves
each request synchronously.
>
> However, after another reboot to get back to multi-user mode in the
> VM, I was greeted with the following error:
>
> # mount_9p -cu /dev/vio9p0 /9pfs/
> mount_9p: Rattach not received, got 107
>
> (Message type 107 is Rerror in 9P2000, so the server answered the
> attach request with an error rather than Rattach.)
>
> The kernel messages from the new kernel look identical:
>
> [ 1.016612] virtio4 at pci0 dev 8 function 0
> [ 1.016612] virtio4: 9P transport device (id 9, rev. 0x00)
> [ 1.016612] vio9p0 at virtio4: features: 0x10000001<INDIRECT_DESC,MOUNT_TAG>
> [ 1.016612] virtio4: allocated 24576 byte for virtqueue 0 for vio9p, size 128
> [ 1.016612] virtio4: using 16384 byte (1024 entries) indirect descriptors
> [ 1.016612] vio9p0: tagged as share
>
>
> The only difference is that this was a VM reboot, and maybe QEMU
> (which didn't restart) got left in an unusable state?
>
> Trying a "cold" reboot now.....
>
> Ah ha, yes, that was the problem:
>
> # mount_9p -cu /dev/vio9p0 /9pfs
> # df
> Filesystem 1K-blocks Used Avail %Cap Mounted on
> /dev/dk1 27322780 6807586 19149056 26% /
> /dev/dk3 5082862 601362 4227358 12% /var
> /dev/dk5 20332078 62498 19252978 0% /home
> /dev/dk4 60996394 612850 57333726 1% /usr/pkg
> kernfs 1 1 0 100% /kern
> ptyfs 1 1 0 100% /dev/pts
> procfs 4 4 0 100% /proc
> tmpfs 2096140 4 2096136 0% /var/shm
> /dev/vio9p0 0 0 0 100% /9pfs
> # mount
> /dev/dk1 on / type ffs (log, local)
> /dev/dk3 on /var type ffs (log, local)
> /dev/dk5 on /home type ffs (log, local)
> /dev/dk4 on /usr/pkg type ffs (log, local)
> kernfs on /kern type kernfs (local)
> ptyfs on /dev/pts type ptyfs (local)
> procfs on /proc type procfs (local)
> tmpfs on /var/shm type tmpfs (local)
> /dev/vio9p0 on /9pfs type puffs|9p
>
>
> I wonder: is there anything NetBSD, i.e. vio9p(4), can do to cleanly
> shut down the 9p connection and allow it to be reused without having
> to restart the host QEMU process? This would be nice, as a VM could
> then reboot itself without needing the host tools to restart it.
Running umount /9pfs before rebooting might help. vio9p itself is just
a bridge between a 9p client and server, so it can't do anything to
shut down the 9p connection carried over it, I think.
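One way to automate that, if you reboot via shutdown(8) (which runs
/etc/rc.shutdown; a plain reboot(8) skips it), is to unmount the share
there:

# in /etc/rc.shutdown: detach the 9p share before the guest goes down
umount /9pfs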
ozaki-r